Description: Add a version of gpu_fuzzer that is backed by the NULL ANGLE backend
BUG=None
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Committed: https://crrev.com/450231b447b5de66e4f735e98d858b3ec2d292b1
Cr-Commit-Position: refs/heads/master@{#437429}
Patch Set 1
Total comments: 7
Patch Set 2: rebase + remove unneeded flags
Messages
Total messages: 45 (14 generated)
Description was changed from ========== Add a version of gpu_fuzzer that is backed by the NULL ANGLE backend BUG=None ========== to ========== Add a version of gpu_fuzzer that is backed by the NULL ANGLE backend BUG=None CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ==========
The CQ bit was checked by piman@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
piman@chromium.org changed reviewers: + geofflang@chromium.org
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
lgtm
mmoroz@chromium.org changed reviewers: + mmoroz@chromium.org
https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn File gpu/BUILD.gn (right): https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn#newcode428 gpu/BUILD.gn:428: "rss_limit_mb=4096", How important is this memory volume? Our current VMs have 4 GB of RAM each, so running fuzzers with rss_limit_mb of 4096 MB doesn't seem to be a good idea at the moment.
https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn File gpu/BUILD.gn (right): https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn#newcode427 gpu/BUILD.gn:427: "use_traces=1", That flag seems to be out-of-date (or maybe it is ON by default now?): WARNING: unrecognized flag '-use_traces=1'; use -help=1 to list all flags
https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn File gpu/BUILD.gn (right): https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn#newcode427 gpu/BUILD.gn:427: "use_traces=1", On 2016/12/01 15:28:31, mmoroz wrote: > That flag seems to be out-of-date (or may be it is ON by default now?): > > WARNING: unrecognized flag '-use_traces=1'; use -help=1 to list all flags Thanks, I'll take it out (including above) https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn#newcode428 gpu/BUILD.gn:428: "rss_limit_mb=4096", On 2016/12/01 15:26:05, mmoroz wrote: > How important is this memory volume? Our current VMs have 4 GB of RAM each, so > running fuzzers with rss_limit_mb of 4096 MB doesn't seem to be a good idea at > the moment. Right now it needs around 3.5GB just to run the existing corpus. I was planning on looking at memory usage in details afterwards, but I guess I need to do that homework first.
https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn File gpu/BUILD.gn (right): https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn#newcode428 gpu/BUILD.gn:428: "rss_limit_mb=4096", On 2016/12/01 18:19:28, piman wrote: > On 2016/12/01 15:26:05, mmoroz wrote: > > How important is this memory volume? Our current VMs have 4 GB of RAM each, so > > running fuzzers with rss_limit_mb of 4096 MB doesn't seem to be a good idea at > > the moment. > > Right now it needs around 3.5GB just to run the existing corpus. I was planning > on looking at memory usage in details afterwards, but I guess I need to do that > homework first. Hm, yeah, we probably need to shrink it if that's possible. Have you tried to minimize the corpus? mkdir minimized_corpus_dir && ./fuzzer minimized_corpus_dir initial_corpus_dir -merge=1 Maybe after that it won't be using so much memory...
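The minimization step suggested above can be expanded into a sketch; `./fuzzer` and the directory names are placeholders for the actual fuzzer binary and corpus paths:

```shell
# Corpus minimization as suggested above; ./fuzzer and both directory
# names are placeholders, not the reviewed target's actual names.
mkdir minimized_corpus_dir
# -merge=1 copies into minimized_corpus_dir only those inputs from
# initial_corpus_dir that add new coverage, discarding redundant files.
./fuzzer minimized_corpus_dir initial_corpus_dir -merge=1
# Compare size and file count before and after to see how much was shed.
du -sh initial_corpus_dir minimized_corpus_dir
ls initial_corpus_dir | wc -l
ls minimized_corpus_dir | wc -l
```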
On Thu, Dec 1, 2016 at 10:22 AM, <mmoroz@chromium.org> wrote:
> Hm, yeah, we probably need to shrink it if that's possible. Have you
> tried to minimize the corpus?
>
> mkdir minimized_corpus_dir && ./fuzzer minimized_corpus_dir
> initial_corpus_dir -merge=1
>
> Maybe after that it won't be using so much memory...

This is the same corpus as gpu_fuzzer, coming from the ClusterFuzz store. I believe it is regularly minimized, and last time I tried, minimization didn't really change much. I can give it another go. But given the amount of state this tries to exercise, the corpus doesn't seem crazy out of proportion.
But yeah, just running the fuzzer on the corpus while doing nothing (return 0) in LLVMFuzzerTestOneInput, it already eats 1100 MB (which is ~3x the size of the corpus itself). I still suspect there's some savings to be had somewhere - even with the above I think we should be able to fit in 2GB.

-- You received this message because you are subscribed to the Google Groups "Chromium-reviews" group. To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
On 2016/12/01 18:48:15, piman wrote:
> This is the same corpus as gpu_fuzzer, coming from the ClusterFuzz store. I
> believe it is regularly minimized, and last time I tried, minimization
> didn't really change much. [...] I still suspect there's some savings to be
> had somewhere - even with the above I think we should be able to fit in 2GB.

Kostya, Abhishek, Oliver -- do you have any thoughts on what we can do here to minimize memory consumption, OR to fit our bots to work with that larger value?
On Thu, Dec 1, 2016 at 10:22 AM, <mmoroz@chromium.org> wrote:
> Hm, yeah, we probably need to shrink it if that's possible. Have you
> tried to minimize the corpus?
>
> mkdir minimized_corpus_dir && ./fuzzer minimized_corpus_dir
> initial_corpus_dir -merge=1

After minimizing the corpus, it looks like it would fit in 3GB. Would this be acceptable?

Antoine
3GB for a corpus might be too much... Are the files very large, or are there very many files? Is this a monolithic target for fuzzing, or can it be split into multiple smaller fuzz targets?
On Thu, Dec 1, 2016 at 12:25 PM, <kcc@chromium.org> wrote:
> 3GB for a corpus might be too much...

The corpus itself is ~50MB. It just blows up to 3GB in memory.

> Are the files very large, or are there very many files?

Is 20K files many? The size limit is at 16k. We may be able to reduce a bit, but not dramatically so without losing coverage.

> Is this a monolithic target for fuzzing, or can it be split into multiple
> smaller fuzz targets?

I don't think it's possible to split into smaller targets. GL is a large API with lots of state combinations and interactions between these states, and we really want to fuzz the whole thing.
On 2016/12/01 20:43:14, piman wrote:
> The corpus itself is ~50MB. It just blows up to 3GB in memory.
> Is 20K files many? The size limit is at 16k.

20K files at <16K each is totally fine and is pretty usual. Now we need to figure out why it uses 3GB in memory. (I am totally under water now, otherwise I would have tried it myself...) Maybe just commit this and let me look at it next week?
On Thu, Dec 1, 2016 at 12:47 PM, <kcc@chromium.org> wrote:
> 20K files at <16K each is totally fine and is pretty usual.
> Now we need to figure out why it uses 3GB in memory.

Some of it is the fuzzer infra (it gets to ~1GB if I just return 0 from the test function); the rest is caused by the test function, which keeps a bit of state across runs (to be able to run fast enough). I'm trying to investigate if there's a logical leak, but so far I haven't found anything obvious, and it may just end up being fragmentation (it seems to reach an upper bound at some point).
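The baseline measurement described above (replaying the corpus with a no-op test function and watching RSS) can be sketched from the command line; the binary name and corpus path are placeholders:

```shell
# Replay the existing corpus once without mutating (-runs=0), with the RSS
# limit disabled (-rss_limit_mb=0, where 0 means no limit), to see how much
# memory the fuzzer infra plus corpus bookkeeping uses on its own.
# ./gpu_fuzzer and corpus/ are placeholders.
./gpu_fuzzer -runs=0 -rss_limit_mb=0 corpus/
# libFuzzer's periodic status lines include the current RSS; alternatively,
# let GNU time report the peak resident set size for the whole run:
/usr/bin/time -v ./gpu_fuzzer -runs=0 -rss_limit_mb=0 corpus/ 2>&1 \
  | grep -i 'maximum resident'
```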
How are you planning to add the corpus? Please don't add 3GB as a seed corpus, since it inflates the build zip; we can instead copy it directly into the GCS directory.
inferno@chromium.org changed reviewers: + inferno@chromium.org
On Thu, Dec 1, 2016 at 12:58 PM, <inferno@chromium.org> wrote:
> How are you planning to add the corpus? Please don't add 3GB as a seed
> corpus, since it inflates the build zip; we can instead copy it directly
> into the GCS directory.

I was going to put it directly on GCS, copying from the gpu_fuzzer corpus. Again, it's 50MB (on disk), not 3GB.
LGTM, let's land this as Kostya suggested and see what we can do then.
On Fri, Dec 2, 2016 at 1:57 AM, <mmoroz@chromium.org> wrote:
> LGTM, let's land this as Kostya suggested and see what we can do then.

Actually, I found a source of a small leak, and if I disable that part of the code it stays below 2GB. I haven't found the precise leak yet, but let me try to do that first.
On 2016/12/02 23:08:00, piman wrote:
> Actually, I found a source of a small leak, and if I disable that part of
> the code it stays below 2GB. I haven't found the precise leak yet, but let
> me try to do that first.

That's interesting! Doesn't ASAN_OPTIONS=detect_leaks=1 report anything?
On Sat, Dec 3, 2016 at 2:04 AM, <mmoroz@chromium.org> wrote:
> That's interesting! Doesn't ASAN_OPTIONS=detect_leaks=1 report anything?

Circling back here... it turns out that leak was most likely a dud, and is probably at best a tiny logical leak. The RSS difference was mostly a side effect of how I was measuring things, in particular not shuffling the input corpus (-shuffle=0), but with a regular run I still get to around 2.5GB.
I did some fairly involved investigation into the ASAN runtime innards, and my conclusion is that the root cause of the blown-up RSS is the ASAN allocator, which (by default) never releases memory to the OS. We quickly get into states where the allocator stats show ~200MB used but >2GB mapped. Looking at the size buckets, most of them, including the biggest ones (~128k), show 90-95% waste. I think the allocation patterns (among others, running a compiler, which is very malloc-intensive) cause each bucket size to blow up.
Running with ASAN_OPTIONS='allocator_release_to_os=1' makes RSS stay around 1GB, so that's really the long-term solution, I think. The question is, how can I make this happen on ClusterFuzz? Also, is it a good idea given that the flag is marked experimental?
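The workaround identified above can be applied from the command line; a sketch, with the fuzzer binary and corpus path as placeholders:

```shell
# Ask the ASAN allocator to return freed pages to the OS instead of keeping
# them mapped, which is what kept RSS near 1GB in the experiment described
# above. (The flag was marked experimental at the time of this review.)
# ./gpu_fuzzer and corpus/ are placeholders.
ASAN_OPTIONS='allocator_release_to_os=1' ./gpu_fuzzer corpus/
# Multiple ASAN options are colon-separated, e.g. combined with the leak
# detection mentioned earlier in the thread:
ASAN_OPTIONS='allocator_release_to_os=1:detect_leaks=1' ./gpu_fuzzer corpus/
```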
On 2016/12/08 22:41:24, piman wrote:
> I did some fairly involved investigation into the ASAN runtime innards, and
> my conclusion is that the root cause of the blown-up RSS is the ASAN
> allocator, which (by default) never releases memory to the OS. [...]
> Running with ASAN_OPTIONS='allocator_release_to_os=1' makes RSS stay around
> 1GB, so that's really the long-term solution, I think. The question is, how
> can I make this happen on ClusterFuzz? Also, is it a good idea given that
> the flag is marked experimental?

Wow. I am sorry you've spent so much effort investigating this. IIRC, we already use allocator_release_to_os=1 on ClusterFuzz, and I hope to make it the default in Q1'17.
On Thu, Dec 8, 2016 at 2:44 PM, <kcc@google.com> wrote:
> Wow.
> I am sorry you've spent so much effort investigating this.
> IIRC, we already use allocator_release_to_os=1 on ClusterFuzz,
> and I hope to make it the default in Q1'17.

OK, so maybe I should just land this, without the rss_limit_mb flag?
I think so, yes, if the memory consumption is the only remaining problem...
https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn File gpu/BUILD.gn (right): https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn#newcode427 gpu/BUILD.gn:427: "use_traces=1", On 2016/12/01 18:19:28, piman wrote: > On 2016/12/01 15:28:31, mmoroz wrote: > > That flag seems to be out-of-date (or may be it is ON by default now?): > > > > WARNING: unrecognized flag '-use_traces=1'; use -help=1 to list all flags > > Thanks, I'll take it out (including above) Done. https://codereview.chromium.org/2541843004/diff/1/gpu/BUILD.gn#newcode428 gpu/BUILD.gn:428: "rss_limit_mb=4096", Removed flag, as discussed.
The CQ bit was checked by piman@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from mmoroz@chromium.org, geofflang@chromium.org Link to the patchset: https://codereview.chromium.org/2541843004/#ps20001 (title: "rebase + remove unneeded flags")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: cast_shell_android on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/cast_shell_a...)
The CQ bit was checked by piman@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 20001, "attempt_start_ts": 1481242772632540, "parent_rev": "4261f902ffe0ca0ec0064ffc3522b5830d920ace", "commit_rev": "c11468a3d78fc273e773d7a4e503f1b42fc7ca8c"}
Message was sent while issue was closed.
Committed patchset #2 (id:20001)
Description was changed from ========== Add a version of gpu_fuzzer that is backed by the NULL ANGLE backend BUG=None CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ========== to ========== Add a version of gpu_fuzzer that is backed by the NULL ANGLE backend BUG=None CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Committed: https://crrev.com/450231b447b5de66e4f735e98d858b3ec2d292b1 Cr-Commit-Position: refs/heads/master@{#437429} ==========
Patchset 2 (id:??) landed as https://crrev.com/450231b447b5de66e4f735e98d858b3ec2d292b1 Cr-Commit-Position: refs/heads/master@{#437429}
On 2016/12/09 02:16:28, commit-bot: I haz the power wrote: > Patchset 2 (id:??) landed as > https://crrev.com/450231b447b5de66e4f735e98d858b3ec2d292b1 > Cr-Commit-Position: refs/heads/master@{#437429} Thanks for the investigation piman@, that's a good thing to know! I've just verified - we did not use 'allocator_release_to_os=1' on ClusterFuzz (I see it for corpus pruning task only), but I've just added 'allocator_release_to_os=1' to libfuzzer_asan job template. Thanks everyone. Now waiting for the fuzzer to be picked up by CF :)
On 2016/12/09 11:53:19, mmoroz wrote: > On 2016/12/09 02:16:28, commit-bot: I haz the power wrote: > > Patchset 2 (id:??) landed as > > https://crrev.com/450231b447b5de66e4f735e98d858b3ec2d292b1 > > Cr-Commit-Position: refs/heads/master@{#437429} > > Thanks for the investigation piman@, that's a good thing to know! > > I've just verified - we did not use 'allocator_release_to_os=1' on ClusterFuzz > (I see it for corpus pruning task only), but I've just added > 'allocator_release_to_os=1' to libfuzzer_asan job template. > > Thanks everyone. Now waiting for the fuzzer to be picked up by CF :) Got it to work after renaming in https://codereview.chromium.org/2570493002/, everything seems to be fine: https://cluster-fuzz.appspot.com/fuzzerstats?fuzzer_name=libfuzzer_gpu_angle_... |