Created: 5 years, 7 months ago by Kimmo Kinnunen
Modified: 5 years, 7 months ago
CC: reviews_skia.org
Base URL: https://skia.googlesource.com/skia.git@master
Target Ref: refs/heads/master
Project: skia
Visibility: Public.
Description

Make GrResourceCache perf less sensitive to key length change

Make GrResourceCache performance less sensitive to key length change.
The memcmp in GrResourceKey is called when SkTDynamicHash jumps through the
slots to find an entry by its hash index. Avoid most of the memcmps by
comparing the stored hash first.

This is important because small changes in key data length can cause
big performance regressions. The theory is that a key length change produces
different hash values, and those hash values may trigger memcmps that
originally weren't there, causing the regression.
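As a rough illustration of the idea (a minimal sketch only; the struct layout and field names below are hypothetical, not the actual GrResourceKey code):

#include <cstdint>
#include <cstring>

// Hypothetical key with the 32-bit hash stored up front. The point of the patch
// is the order of the checks: reject mismatches on the cheap integer compares
// before running the (potentially long) memcmp over the key data.
struct Key {
    uint32_t fHash;       // precomputed hash of fData[0..fCount)
    int      fCount;      // number of 32-bit words of key data actually used
    uint32_t fData[16];   // key payload; length varies per resource type

    bool operator==(const Key& that) const {
        return fHash == that.fHash &&
               fCount == that.fCount &&
               0 == memcmp(fData, that.fData, fCount * sizeof(uint32_t));
    }
};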
Adds a few specialized benches to grresourcecache_add to test different
key lengths. The tests run only in release builds, because in debug builds
the SkTDynamicHash validation takes too long, and adding many such delays
to development test runs would be unproductive. In release builds the tests
are quite fast.
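For flavor, here is a standalone sketch of "bench adds with N extra words of key data". This is not Skia's Benchmark harness or the actual GrResourceCacheBench code; the container is a stand-in and all names are made up, it only shows the key-length parameterization:

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical stand-in: time inserting entries whose keys carry `extraWords`
// additional 32-bit words, to expose sensitivity to key length.
static double benchAdd(int extraWords) {
    constexpr int kEntries = 1 << 13;
    std::unordered_map<std::string, int> cache;   // stand-in for GrResourceCache
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kEntries; ++i) {
        std::vector<uint32_t> key(2 + extraWords, 0);
        key[0] = static_cast<uint32_t>(i);        // unique part of the key
        cache.emplace(std::string(reinterpret_cast<const char*>(key.data()),
                                  key.size() * sizeof(uint32_t)),
                      i);
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main() {
    for (int n : {0, 2, 5, 25, 54, 55, 56}) {
        std::printf("add_%d: %.2f ms\n", n, benchAdd(n));
    }
}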
Effect of this patch on the added tests on amd64:
grresourcecache_find_10 738us -> 768us 1.04x
grresourcecache_find_2 472us -> 476us 1.01x
grresourcecache_find_25 841us -> 845us 1x
grresourcecache_find_4 565us -> 531us 0.94x
grresourcecache_find_54 1.18ms -> 1.1ms 0.93x
grresourcecache_find_5 834us -> 749us 0.9x
grresourcecache_find_3 620us -> 542us 0.87x
grresourcecache_add_25 2.74ms -> 2.24ms 0.82x
grresourcecache_add_56 3.23ms -> 2.56ms 0.79x
grresourcecache_add_54 3.34ms -> 2.62ms 0.78x
grresourcecache_add_5 2.68ms -> 2.1ms 0.78x
grresourcecache_add_10 2.7ms -> 2.11ms 0.78x
grresourcecache_add_2 1.85ms -> 1.41ms 0.76x
grresourcecache_add 1.84ms -> 1.4ms 0.76x
grresourcecache_add_4 1.99ms -> 1.49ms 0.75x
grresourcecache_add_3 2.11ms -> 1.55ms 0.73x
grresourcecache_add_55 39ms -> 13.9ms 0.36x
grresourcecache_find_55 23.2ms -> 6.21ms 0.27x
On arm64 the results are similar.
On arm_v7_neon, the results lack the discontinuity at 55:
grresourcecache_add 4.06ms -> 4.26ms 1.05x
grresourcecache_add_2 4.05ms -> 4.23ms 1.05x
grresourcecache_find 1.28ms -> 1.3ms 1.02x
grresourcecache_find_56 3.35ms -> 3.32ms 0.99x
grresourcecache_find_2 1.31ms -> 1.29ms 0.99x
grresourcecache_find_54 3.28ms -> 3.24ms 0.99x
grresourcecache_add_5 6.38ms -> 6.26ms 0.98x
grresourcecache_add_55 8.44ms -> 8.24ms 0.98x
grresourcecache_add_25 7.03ms -> 6.86ms 0.98x
grresourcecache_find_25 2.7ms -> 2.59ms 0.96x
grresourcecache_find_4 1.45ms -> 1.38ms 0.95x
grresourcecache_find_10 2.52ms -> 2.39ms 0.95x
grresourcecache_find_55 3.54ms -> 3.33ms 0.94x
grresourcecache_find_5 2.5ms -> 2.32ms 0.93x
grresourcecache_find_3 1.57ms -> 1.43ms 0.91x
The extremely slow case, 55, is postulated to be due to index-jump
collisions running the memcmp. This is not visible on arm_v7_neon, probably
because the hash function produces different results on 32-bit architectures.
This change is needed for extending the path cache key in the Gr
NV_path_rendering codepath. The extension is needed in order to add dashed
paths to the path cache.
Committed: https://skia.googlesource.com/skia/+/54b8511189bb5da6bfd248fa63f5c4156e9e2bd6
Patch Set 1 #
Patch Set 2 : #
Total comments: 1
Patch Set 3 : Remove an unused variable in the test #
Messages
Total messages: 25 (6 generated)
kkinnunen@nvidia.com changed reviewers: + bsalomon@google.com
mtklein@google.com changed reviewers: + mtklein@google.com
This seems like a good idea to me.

In most of these hash table use cases, we've stored the hash and key length first in the key so that 0 == memcmp(&key1, &key2, len1) is logically the same as len1 == len2 && hash1 == hash2 && 0 == memcmp(&key1, &key2, len1).

You guys might want to try SkTHashMap for GrResourceCache. It's got this idea baked in, storing the hash directly in the table, which has the slight further advantage of not having to dereference pointers to a large blob of data to get at that hash.
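A sketch of the layout described above (hypothetical struct, not the actual SkTHashMap or GrResourceKey code): when the length and hash are the leading words of the compared blob, one memcmp over the used bytes is logically the same as the three-part check, and a length or hash mismatch is caught in the first bytes.

#include <cstdint>
#include <cstring>

// Hypothetical packed key with a fixed maximum size; fData[0] holds the used
// length in bytes and fData[1] the hash, so both are part of the compared bytes.
struct PackedKey {
    uint32_t fData[16];

    uint32_t lengthInBytes() const { return fData[0]; }  // always <= sizeof(fData)

    bool operator==(const PackedKey& that) const {
        // If the lengths or hashes differ, memcmp sees the difference in the
        // first eight bytes and returns almost immediately.
        return 0 == memcmp(fData, that.fData, this->lengthInBytes());
    }
};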
https://codereview.chromium.org/1132723003/diff/20001/bench/GrResourceCacheBe...
File bench/GrResourceCacheBench.cpp (right):
https://codereview.chromium.org/1132723003/diff/20001/bench/GrResourceCacheBe...
bench/GrResourceCacheBench.cpp:165: // Only on release because on debug the SkTDynamicHash validation is too slow.
I'd be okay with #if-0'ing out the validation code in SkTDynamicHash. It's important when making changes to SkTDynamicHash, but that code has been practically unchanged for about a year now.
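A hedged sketch of the "#if 0" idea (assuming SkTDynamicHash has a debug-only validate() helper; the class shape and macro name here are made up for illustration):

// Hypothetical guard around the expensive consistency checks. Keeping them
// behind a dedicated flag means they can be re-enabled while SkTDynamicHash
// itself is being modified, without slowing every Debug run of unrelated benches.
#define SK_VALIDATE_DYNAMIC_HASH 0

template <typename T, typename Key> class SkTDynamicHashSketch {
    void maybeValidate() const {
#if SK_VALIDATE_DYNAMIC_HASH
        this->validate();   // walks the whole table; too slow for routine Debug runs
#endif
    }
    void validate() const { /* full-table consistency checks */ }
};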
On 2015/05/15 12:38:50, mtklein wrote:
> This seems like a good idea to me.
>
> In most of these hash table use cases, we've stored the hash and key length
> first in the key so that 0 == memcmp(&key1, &key2, len1) is logically the same
> as len1 == len2 && hash1 == hash2 && 0 == memcmp(&key1, &key2, len1).

FWIW, I think this is what GrResourceKey did, too. Still, it is slow if in the majority of cases hash1 != hash2.

> You guys might want to try SkTHashMap for GrResourceCache. It's got this idea
> baked in

Thanks, I could look into this.
On 2015/05/15 13:08:39, Kimmo Kinnunen wrote:
> FWIW, I think this is what GrResourceKey did, too.
> Still, it is slow if in the majority of cases hash1 != hash2.

So the memcmp does exit early when the first words mismatch?
On 2015/05/15 13:58:23, bsalomon wrote:
> So the memcmp does exit early when the first words mismatch?

The hash is first in GrResourceKey, and memcmp() definitely returns early when not equal. Must be this change demonstrates the overhead of calling memcmp() when we expect it to return early?
On 2015/05/15 14:00:59, mtklein wrote:
> The hash is first in GrResourceKey, and memcmp() definitely returns early when
> not equal.
> Must be this change demonstrates the overhead of calling memcmp() when we
> expect it to return early?

Taking this logic one step further, we shouldn't be expecting memcmp() to return early. That may mean we have a hash table that's too full, or a poor hash function that's colliding too often. It looks like GrResourceKeyHash() is using SkChecksum::Compute(), which is not a very good hash. Why not try SkChecksum::Murmur3()?
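A hedged sketch of what trying that might look like. GrResourceKeyHash(), SkChecksum::Compute(), and SkChecksum::Murmur3() are named in the thread, but the signatures below are assumptions from memory and may not match the tree:

#include <cstddef>
#include "SkChecksum.h"

// Assumed shape: both hashes take a word-aligned buffer and a byte count.
// The real GrResourceKeyHash() signature may differ.
static inline uint32_t GrResourceKeyHash(const uint32_t* data, size_t size) {
    // Before: return SkChecksum::Compute(data, size);
    return SkChecksum::Murmur3(data, size);   // better bit mixing, slightly more work
}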
On 2015/05/15 14:00:59, mtklein wrote:
> The hash is first in GrResourceKey, and memcmp() definitely returns early when
> not equal.
> Must be this change demonstrates the overhead of calling memcmp() when we
> expect it to return early?

Right, maybe memcmp isn't getting inlined. I kind of assumed it would be here.
On 2015/05/15 14:04:34, bsalomon wrote:
> Right, maybe memcmp isn't getting inlined. I kind of assumed it would be here.

That'll be pretty compiler-specific.
On 2015/05/15 14:00:59, mtklein wrote:
> The hash is first in GrResourceKey, and memcmp() definitely returns early when
> not equal.
> Must be this change demonstrates the overhead of calling memcmp() when we
> expect it to return early?

Yeah. This is mentioned in the commit message, even. Maybe the compilers do not inline memcmps of arbitrary lengths. Even if they did, an arbitrary-length memcmp needs to set up the counters, etc. In the optimal case the hash should probably already be in a register, because the lookup loops of find and add already load it in order to determine the hash index. I haven't verified whether this would actually happen in this implementation, though.

(from mtklein, comment #10)
> Taking this logic one step further, we shouldn't be expecting memcmp() to
> return early. That may mean we have a hash table that's too full, or a poor
> hash function that's colliding too often.

The hash function could be better, maybe. It does not solve the issue. With paths, the resource cache ends up being full all the time. The hashes will always collide. The hashtable indexes will definitely collide.

Also: the key for the resource cache is 12 bits. It is hard to see that murmur would provide significantly fewer checks in this case. If this patch cannot go in without murmur, I'm of course happy to check that too.

I think the options are:

1) The container ensures equality is called only when the hashes match. In other words, the expectation is that the equality comparison is always slow. If I understand the implicit reasoning, this is the case you think about when you say "we shouldn't be expecting memcmp() to return early". The rationale would be something like: "The equality is called only when it is highly probable that the objects match, and almost never otherwise. If you expect memcmp to return early, you make the wrong assumption that the operation is called often. This is incorrect, as we try to minimize this operation, and optimizing it should not be a primary concern."

2) The container expects the equality operator to be fast. In other words, the container is designed to support both complex objects and simple objects. Complex objects would be like in this patch, where the hash value is compared first. Simple objects would be like ints, where the extra hash value comparison actually has an effect on the overall comparison performance. This is how TDynamicHash is implemented today.

So, what about this patch? Is there a way I can find out concrete problems to fix? Do we need more perf data or something? I see this pattern in other places that use TDynamicHash, and their reviews do not seem to have raised similar concerns, even without the perf data.
On 2015/05/15 14:03:42, mtklein wrote:
> It looks like GrResourceKeyHash() is using SkChecksum::Compute(), which is not a
> very good hash. Why not try SkChecksum::Murmur3()?

Numbers for SkChecksum::Compute switched to Murmur3 on today's master:

x86
grresourcecache_find_24 813us -> 985us 1.21x
grresourcecache_find 485us -> 583us 1.2x
grresourcecache_find_2 488us -> 586us 1.2x
grresourcecache_find_25 845us -> 995us 1.18x
grresourcecache_find_10 745us -> 875us 1.17x
grresourcecache_find_59 1.19ms -> 1.37ms 1.15x
grresourcecache_find_58 1.16ms -> 1.33ms 1.15x
grresourcecache_find_57 1.17ms -> 1.32ms 1.13x
grresourcecache_find_56 1.18ms -> 1.32ms 1.11x
grresourcecache_add_24 2.2ms -> 2.45ms 1.11x
grresourcecache_find_22 886us -> 966us 1.09x
grresourcecache_find_21 879us -> 955us 1.09x
grresourcecache_add_61 2.66ms -> 2.89ms 1.09x
grresourcecache_find_54 1.2ms -> 1.3ms 1.09x
grresourcecache_add_25 2.29ms -> 2.48ms 1.08x
grresourcecache_add_57 2.65ms -> 2.85ms 1.08x
grresourcecache_add_58 2.65ms -> 2.85ms 1.07x
grresourcecache_add_56 2.64ms -> 2.83ms 1.07x
grresourcecache_add_59 2.69ms -> 2.87ms 1.07x
grresourcecache_find_4 579us -> 614us 1.06x
grresourcecache_add_10 2.21ms -> 2.28ms 1.03x
grresourcecache_add_22 2.34ms -> 2.42ms 1.03x
grresourcecache_add_21 2.34ms -> 2.42ms 1.03x
grresourcecache_add_54 2.74ms -> 2.81ms 1.03x
grresourcecache_add 1.55ms -> 1.59ms 1.03x
grresourcecache_add_2 1.56ms -> 1.6ms 1.02x
grresourcecache_find_5 838us -> 814us 0.97x
grresourcecache_find_3 630us -> 610us 0.97x
grresourcecache_add_4 1.68ms -> 1.62ms 0.96x
grresourcecache_add_5 2.27ms -> 2.18ms 0.96x
grresourcecache_add_3 1.74ms -> 1.62ms 0.93x
grresourcecache_find_60 1.8ms -> 1.35ms 0.75x
grresourcecache_add_60 3.89ms -> 2.87ms 0.74x
grresourcecache_find_23 1.45ms -> 975us 0.67x
grresourcecache_add_23 3.89ms -> 2.44ms 0.63x
grresourcecache_add_55 38.5ms -> 2.82ms 0.073x
grresourcecache_find_55 23ms -> 1.3ms 0.056x

arm_v7_neon
grresourcecache_find_57 3.36ms -> 4.12ms 1.23x
grresourcecache_find_56 3.39ms -> 4.05ms 1.2x
grresourcecache_find_59 3.52ms -> 4.18ms 1.19x
grresourcecache_find_54 3.29ms -> 3.89ms 1.18x
grresourcecache_find_55 3.43ms -> 4.06ms 1.18x
grresourcecache_find 1.28ms -> 1.46ms 1.15x
grresourcecache_find_58 3.54ms -> 4.03ms 1.14x
grresourcecache_add 4.18ms -> 4.73ms 1.13x
grresourcecache_find_21 2.56ms -> 2.87ms 1.12x
grresourcecache_find_25 2.7ms -> 3.02ms 1.12x
grresourcecache_add_2 4.33ms -> 4.82ms 1.11x
grresourcecache_find_60 3.69ms -> 4.08ms 1.11x
grresourcecache_find_23 2.66ms -> 2.94ms 1.1x
grresourcecache_find_2 1.36ms -> 1.49ms 1.1x
grresourcecache_find_61 3.69ms -> 4.01ms 1.09x
grresourcecache_find_24 2.71ms -> 2.94ms 1.08x
grresourcecache_add_55 8.44ms -> 9.07ms 1.07x
grresourcecache_find_22 2.59ms -> 2.78ms 1.07x
grresourcecache_add_4 4.48ms -> 4.81ms 1.07x
grresourcecache_add_57 8.38ms -> 8.93ms 1.07x
grresourcecache_add_56 8.42ms -> 8.97ms 1.07x
grresourcecache_find_4 1.45ms -> 1.55ms 1.07x
grresourcecache_add_21 6.86ms -> 7.25ms 1.06x
grresourcecache_add_59 8.66ms -> 9.08ms 1.05x
grresourcecache_add_60 8.94ms -> 9.36ms 1.05x
grresourcecache_add_54 8.58ms -> 8.97ms 1.04x
grresourcecache_add_58 8.84ms -> 9.2ms 1.04x
grresourcecache_add_23 6.94ms -> 7.22ms 1.04x
grresourcecache_add_10 6.68ms -> 6.94ms 1.04x
grresourcecache_add_22 7.14ms -> 7.41ms 1.04x
grresourcecache_add_24 7.11ms -> 7.36ms 1.04x
grresourcecache_find_10 2.48ms -> 2.56ms 1.03x
grresourcecache_add_25 7.05ms -> 7.21ms 1.02x
grresourcecache_add_3 4.67ms -> 4.73ms 1.01x
grresourcecache_find_5 2.42ms -> 2.41ms 1x
grresourcecache_find_3 1.56ms -> 1.51ms 0.97x

c --config nvprmsaa4 --match skp on x86, for the skp images that I have access to:
desk_mapsvg.skp_1 6.29ms -> 6.4ms 1.02x
desk_tigersvg.skp_1 3ms -> 3.03ms 1.01x
desk_googleplus.skp_1 10.4ms -> 10.5ms 1.01x
desk_css3gradients.skp_1 3.11ms -> 3.13ms 1.01x
desk_sfgate.skp_1 3.87ms -> 3.9ms 1.01x
desk_samoasvg.skp_1 12.1ms -> 12.2ms 1.01x
desk_yahooanswers.skp_1 6.77ms -> 6.82ms 1.01x
tabl_mlb.skp_1 9.62ms -> 9.69ms 1.01x
tabl_cnet.skp_1 2.28ms -> 2.3ms 1.01x
desk_silkfinance.skp_1 2.37ms -> 2.38ms 1.01x
desk_jsfiddlehumperclip.skp_1 1.3ms -> 1.3ms 1.01x
tabl_ukwsj.skp_1 7.6ms -> 7.65ms 1.01x
tabl_cnn.skp_1 6.84ms -> 6.87ms 1.01x
desk_pokemonwiki.skp_1 8.7ms -> 8.74ms 1x
desk_pinterest.skp_1 8.82ms -> 8.86ms 1x
desk_fontwipe.skp_1 972us -> 976us 1x
desk_linkedin.skp_1 7.64ms -> 7.67ms 1x
tabl_nofolo.skp_1 5.47ms -> 5.49ms 1x
desk_wowwiki.skp_1 8.45ms -> 8.48ms 1x
tabl_frantzen.skp_1 4.52ms -> 4.54ms 1x
desk_blogger.skp_1 10.1ms -> 10.1ms 1x
tabl_techmeme.skp_1 2.87ms -> 2.88ms 1x
desk_weather.skp_1 4.53ms -> 4.55ms 1x
desk_gws.skp_1 5.12ms -> 5.14ms 1x
tabl_culturalsolutions.skp_1 6.18ms -> 6.2ms 1x
tabl_deviantart.skp_1 1.92ms -> 1.92ms 1x
tabl_cuteoverload.skp_1 3.23ms -> 3.24ms 1x
desk_jsfiddlebigcar.skp_1 977us -> 979us 1x
desk_mobilenews.skp_1 21.9ms -> 21.9ms 1x
desk_baidu.skp_1 8.92ms -> 8.94ms 1x
desk_twitter.skp_1 12.1ms -> 12.1ms 1x
desk_forecastio.skp_1 9.42ms -> 9.44ms 1x
desk_googlespreadsheetdashed.skp_1 25.7ms -> 25.7ms 1x
tabl_googleblog.skp_1 5.95ms -> 5.96ms 1x
tabl_transformice.skp_1 4.9ms -> 4.9ms 1x
tabl_mozilla.skp_1 1.31ms -> 1.31ms 1x

E.g. using murmur at this moment is a smallish regression. The desk_mapsvg.skp_1 is actually the test case that I try to fix. And by fix I mean that the future dashing patches will regress this test case unless the issue discussed in this bug gets solved. After applying the original patch this issue tries to address, murmur3 is actually a smallish win for the path content.

However, if possible and if murmur is required, I would still prefer to discuss it in another patch and not block this one. The rationale being that it probably regresses non-path content, and thus is even harder to get in. In the commit message of this patch I've tried to show that this is a progression for all backends.
On 2015/05/18 05:51:28, Kimmo Kinnunen wrote:
> The hash function could be better, maybe. It does not solve the issue. With
> paths, the resource cache ends up being full all the time. The hashes will
> always collide. The hashtable indexes will definitely collide.
>
> Also: the key for the resource cache is 12 bits. It is hard to see that murmur
> would provide significantly fewer checks in this case. If this patch cannot go
> in without murmur, I'm of course happy to check that too.

The above was actually incorrect and a bit vague, and it might cause confusion with one murmur-related comment. Rephrasing: the resource cache contains 2^13 entries. If the path-object-related code is ever enabled to run in Chromium, the cache will always be full after a nontrivial amount of browser usage. I believe the "index space" is then 2^14 for TDynamicHash, so every other index will be occupied. This probably causes very many index collisions and thus many full key comparisons. The quality of the hash function may not be that visible in practice if the comparison dominates.
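For a rough sense of why a half-full open-addressed table still triggers plenty of extra probes, the textbook linear-probing estimates give (used only as an illustration; SkTDynamicHash's actual probe sequence differs):

C_{\text{hit}}(\alpha)  = \tfrac{1}{2}\Bigl(1 + \tfrac{1}{1-\alpha}\Bigr), \qquad
C_{\text{miss}}(\alpha) = \tfrac{1}{2}\Bigl(1 + \tfrac{1}{(1-\alpha)^{2}}\Bigr),
\qquad \text{so at } \alpha = \tfrac{1}{2}\ (2^{13}\text{ entries in }2^{14}\text{ slots}):\quad
C_{\text{hit}} = 1.5, \quad C_{\text{miss}} = 2.5.

Every probe beyond the first lands on an occupied slot whose key has to be rejected, which is exactly the comparison this patch makes cheap.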
Kimmo, I didn't mean to hold up this patch while we look at further enhancements to the hash or the cache. lgtm.

Personally I'd prefer removing the validation in SkTDynamicHash to running different tests in debug and release, but I don't feel very strongly.

Also, the resource limit in the cache is somewhat arbitrary. At one point, several years ago, I tried to dramatically increase it, but it caused a perf problem in some Mac driver's texture id->object lookup scheme, so I put it back to a safe number. We can try increasing it across the board or just when NVPR is present.
On 2015/05/18 13:17:35, bsalomon wrote:
> Also, the resource limit in the cache is somewhat arbitrary. At one point,
> several years ago, I tried to dramatically increase it, but it caused a perf
> problem in some Mac driver's texture id->object lookup scheme, so I put it back
> to a safe number. We can try increasing it across the board or just when NVPR
> is present.

There's no problem with the current resource limit, even for nvpr. For the tests that we have, that is. I don't know about the real world :) DynamicHash is always sized based on the number of items in the map, so changing the cache limit doesn't fundamentally change anything with respect to hash map index collisions.
The CQ bit was checked by kkinnunen@nvidia.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1132723003/20001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: Build-Ubuntu-Clang-x86_64-Debug-Trybot on client.skia.compile (JOB_FAILED, http://build.chromium.org/p/client.skia.compile/builders/Build-Ubuntu-Clang-x...)
The CQ bit was checked by kkinnunen@nvidia.com
The patchset sent to the CQ was uploaded after l-g-t-m from bsalomon@google.com Link to the patchset: https://codereview.chromium.org/1132723003/#ps40001 (title: "Remove an unused variable in the test")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1132723003/40001
Message was sent while issue was closed.
Committed patchset #3 (id:40001) as https://skia.googlesource.com/skia/+/54b8511189bb5da6bfd248fa63f5c4156e9e2bd6