Created: 7 years, 10 months ago by sullivan
Modified: 7 years, 9 months ago
CC: chromium-reviews, xusydoc+watch_chromium.org, cmp-cc_chromium.org, ilevy+cc_chromium.org, kjellander+cc_chromium.org
Base URL: https://chromium.googlesource.com/chromium/tools/build.git@master
Visibility: Public.
Description: Sends test results to the new perf dashboard. Enabled when --results-url is specified on the command line.
BUG=
Committed: http://src.chromium.org/viewvc/chrome?view=rev&revision=185969
Patch Set 1 #
Total comments: 1
Patch Set 2 : Moves the code to call results_dashboard.SendResults() to its own function and cleans up some small… #
Total comments: 2
Patch Set 3 : Get results url from factory options #
Patch Set 4 : First pass at implementing a cache file (still need a real location for the file) #
Total comments: 1
Patch Set 5 : Adds beginnings of unit test (for feedback) #
Patch Set 6 : Ready for review #
Total comments: 4
Patch Set 7 : Addressed comments about unit test #
Total comments: 2
Patch Set 8 : fix permissions on unit test #
Messages
Total messages: 28 (0 generated)
Hi Mike,

I had a chance to write a first pass at posting results to the new perf dashboard and I had a lot of questions! I posted this to Rietveld since I think it would be easier to answer the questions if you can see my code.

On to the questions:

* In addition to the "system" and "test" parameters, I'll probably want to pass a "master" (e.g. ChromiumPerf) and also the stdio, and possibly more as we add new features. Is passing all this stuff through annotate() to results_dashboard.SendResults() the best way to go? Or is there a different structure you'd prefer for calling my function with a lot of parameters?
* When we get a server error, we will hopefully have the results saved to the cache, and retry next time. In addition to that and writing to stderr, is there any other error handling code I should write?
* Any opinions on where to put a cache on disk of requests that need to be retried?
* I had some trouble getting some of the data I wanted. Here is a list of things I didn't know how to get:
  1) I would love to send the master (ChromiumPerf, ChromiumGPU, etc.) with the data.
  2) It would be great to have a link to the stdio for the build, e.g. "http://build.chromium.org/p/chromium.perf/builders/Linux%20Perf%20%281%29/builds/20560/steps/scrolling_benchmark/logs/stdio"
  3) In the stdio that I pass into runtest.py, I see JSON like "rev": "183839", "webkit_rev": "143559". I want those numbers. Instead, when the lines are sent to SendResults(), I see "rev": "86b4bcab8439135621f6dfea4192eed24fb476c5", "webkit_rev": "undefined". Any idea what's going wrong?
On 2013/02/21 21:43:21, sullivan wrote:
> Is passing all this stuff through annotate() to results_dashboard.SendResults() the best way to go? Or is there a different structure you'd prefer for calling my function with a lot of parameters?

I wrote this in the file, but you should pass this to runtest from the master, then pass it into SendResults from runtest.py. annotate() should have no concept of what master it's on.

> In addition to that and writing to stderr, is there any other error handling code I should write?

As long as you write something to the log, I'll leave this up to you. If it fails after repeated attempts, you should annotate a STEP_WARNINGS, STEP_FAILURE, or STEP_EXCEPTION. I'm not sure how severe you'd like this to be, but it sounds like a STEP_EXCEPTION to me.

> Any opinions on where to put a cache on disk of requests that need to be retried?

Do you need to cache it on disk? Why not in memory?

> It would be great to have a link to the stdio for the build, e.g. "http://build.chromium.org/p/chromium.perf/builders/Linux%20Perf%20%281%29/builds/20560/steps/scrolling_benchmark/logs/stdio"

Pass it through from the master.

> In the stdio that I pass into runtest.py, I see JSON like "rev": "183839", "webkit_rev": "143559". I want those numbers. Instead, when the lines are sent to SendResults(), I see "rev": "86b4bcab8439135621f6dfea4192eed24fb476c5", "webkit_rev": "undefined". Any idea what's going wrong?

runtest.py uses GetSvnRevision() instead of reading from factory properties. The decision was made to make sure the directory really synced up with what revision you were outputting, but it causes problems sometimes. I'm guessing your actual build checkout is git and has that revision. If this is the case, then it should work once on production machines.
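Mike's suggestion about escalating persistent failures could be sketched roughly as follows. The `@@@...@@@` strings are standard buildbot annotator commands, but the function name, the retry threshold, and the message text are all hypothetical, not part of the patch:

```python
import sys

# Hypothetical escalation helper; MAX_ATTEMPTS and the message text are
# illustrative. The @@@...@@@ line is a buildbot annotator command that
# sets the step status.
MAX_ATTEMPTS = 3

def report_upload_failure(attempts):
    """Log each failure; emit STEP_EXCEPTION once retries are exhausted."""
    sys.stderr.write(
        "Upload to perf dashboard failed (attempt %d)\n" % attempts)
    if attempts >= MAX_ATTEMPTS:
        print("@@@STEP_EXCEPTION@@@")
```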
https://codereview.chromium.org/12317053/diff/1/scripts/slave/results_dashboa...
File scripts/slave/results_dashboard.py (right):

scripts/slave/results_dashboard.py:21: # TODO(sullivan): Where to get the master (e.g. "ChromiumPerf") from?
I would pass this from the master to runtest.py as a factory_property, then runtest can pass it here.
+cmp for question about cache below.

On Thu, Feb 21, 2013 at 6:12 PM, <xusydoc@chromium.org> wrote:
> I wrote this in the file, but you should pass this to runtest from the master, then pass it into SendResults from runtest.py. annotate() should have no concept of what master it's on.

I refactored the code so that I call a send_results_to_dashboard() function instead of passing everything through annotate(). This way annotate() doesn't need to take a bunch of extra parameters.

> As long as you write something to the log, I'll leave this up to you. If it fails after repeated attempts, you should annotate a STEP_WARNINGS, STEP_FAILURE, or STEP_EXCEPTION. I'm not sure how severe you'd like this to be, but it sounds like a STEP_EXCEPTION to me.

That makes sense. I will implement when we figure out the best approach for caching.

> Do you need to cache it on disk? Why not in memory?

This is a good question, and I think we need to figure out how robust the caching should be in order to answer it. Here is a list of reasons the dashboard app could fail and the cache would be needed:

1) Some transient error that works on retry, so seconds or less to resolve. A memory cache should be fine for this case.
2) A problem where the bot is sending data the server can't process. Maybe the test outputs a string where we assumed it would be an int, or maybe the IP address of the bot changed and it's not on our whitelist. I hope this would be rare, but it would require manual intervention. Since it could happen over the weekend, I'm guessing it could take three days to resolve, and someone might restart the bot to see if that fixes the issue. So we'd want to have a cache on disk in this case, or at least some way to backfill the data, right?
3) We push a bad version of the dashboard, or there is a global App Engine outage. This would probably be resolved in a few hours, but then all the bots have several runs' data to send all at once.

Do you think a disk cache is the right way to go here? Or a memory cache, plus some way of importing data when there are catastrophic errors?

> runtest.py uses GetSvnRevision() instead of reading from factory properties. The decision was made to make sure the directory really synced up with what revision you were outputting, but it causes problems sometimes. I'm guessing your actual build checkout is git and has that revision. If this is the case, then it should work once on production machines.
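For context, a disk-backed retry cache along the lines being debated might look like the following minimal sketch. Everything here is illustrative (the function name, cache filename, and `send_fn` callback are assumptions, not the actual patch): payloads that fail to upload stay in the file so a later invocation, or a restarted bot, can retry them.

```python
import os

# Hypothetical cache filename; the real patch would pick its own.
CACHE_NAME = "results_cache"

def send_results(data_json, url, build_dir, send_fn):
    """Append new results to the on-disk cache, then try to send everything.

    send_fn(url, line) is an illustrative callback returning True on
    success. Lines that fail to send are kept in the cache file so a
    later invocation can retry them, preserving order.
    """
    cache_path = os.path.join(build_dir, CACHE_NAME)
    lines = []
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            lines = [l for l in f.read().splitlines() if l]
    lines.append(data_json)

    remaining = []
    for line in lines:
        # Once one send fails, keep all later lines too (ordering matters).
        if remaining or not send_fn(url, line):
            remaining.append(line)

    with open(cache_path, "w") as f:
        f.write("\n".join(remaining))
    return not remaining
```

This covers scenarios 2 and 3 above: data survives bot restarts (and even a dashboard outage spanning many runs) as long as the build directory is not clobbered.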
Update: I talked to Chase offline and he'd like the cache to be on disk. Any opinions as to where on disk it should live?
On 2013/02/22 22:09:06, sullivan wrote:
> Update: I talked to Chase offline and he'd like the cache to be on disk. Any opinions as to where on disk it should live?

I guess we want this location to be semi-permanent, in that if the bot is restarted the script would attempt to re-upload any cached items it sees. My first guess would be a temp directory, but there would be no way of passing that info from one invocation to the next. My next guess would be somewhere in the build dir. This may get wiped if there is a clobber, but I think that's semi-permanent enough that it would handle most of these cases. There is the additional advantage that we don't have to worry about extra maintenance for troopers/admins.
https://chromiumcodereview.appspot.com/12317053/diff/1003/scripts/slave/runte...
File scripts/slave/runtest.py (right):

scripts/slave/runtest.py:556: results_tracker, options.factory_properties.get('master'),
nit: I'd prefer masterName or master_name
I have a couple more questions:

1) I'm fine with putting the output somewhere in the build directory. Should I just create a subdirectory of options.build_dir and put a cache file in there?
2) I am passing in a factory_options.master_name, but I just realized there is a _GetMaster() function. Any reason not to reuse it to get the master name?
3) I can definitely read in the name of the stdio file from the factory_options, but how would I actually set it correctly? I would be modifying AddTelemetryTest and AddPageCyclerTest in chromium_commands.py.

https://chromiumcodereview.appspot.com/12317053/diff/1003/scripts/slave/runte...
File scripts/slave/runtest.py (right):

scripts/slave/runtest.py:556: results_tracker, options.factory_properties.get('master'),
On 2013/02/23 00:23:21, Mike Stipicevic wrote:
> nit: I'd prefer masterName or master_name
Done.
Still need answers to those questions, but in the meantime refactored to have a basic cache implementation. At this point I should also implement a unit test--I would put it in slave/unittests and use mox to mock out the file and http access?
On 2013/02/25 22:18:29, sullivan wrote:
> 1) I'm fine with putting the output somewhere in the build directory. Should I just create a subdirectory of options.build_dir and put a cache file in there?

Yeah, I think that would be good.

> 2) I am passing in a factory_options.master_name, but I just realized there is a _GetMaster() function. Any reason not to reuse it to get the master name?

I assume you're talking about _GetMaster() in scripts/slave/runtest.py, which in turn calls GetActiveMaster in scripts/slave/slave_utils.py? This would give you that info, but it does it by guessing the master given what slave you are on. That works OK in production, but I worry about testing -- you'd have to add an option to specify a slavename if you're testing out changes on a dev machine. wdyt?

> 3) I can definitely read in the name of the stdio file from the factory_options, but how would I actually set it correctly? I would be modifying AddTelemetryTest and AddPageCyclerTest in chromium_commands.py.

I'm not sure if I follow. By stdio do you mean the cached files? If so, couldn't you generate them in runtest.py based on the test name?
On 2013/02/27 09:35:38, Mike Stipicevic wrote:
> I'm not sure if I follow. By stdio do you mean the cached files? If so, couldn't you generate them in runtest.py based on the test name?

Ah, I see now: this would be a link back to the raw stdio from the master. You should be able to construct it with the following info: every master.cfg sets c['buildbotURL']. You'll need to pass in the builder name somehow. We may have to provide build properties to runtest.py (build properties has the build number), and finally the step_name should be passed in for all annotated steps. That lets you construct a URL like so: [buildbotURL]/builders/[builderName]/builds/[buildNumber]/steps/[step_name]/logs/stdio. There may be a helper function somewhere, but I don't know off the top of my head.
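The URL scheme described above could be assembled with a small helper along these lines. The function name and parameters are illustrative; `buildbot_url` stands in for c['buildbotURL'] from master.cfg:

```python
from urllib.parse import quote

def stdio_url(buildbot_url, builder_name, build_number, step_name):
    """Construct a link to a step's stdio log, per the scheme
    [buildbotURL]/builders/[builderName]/builds/[buildNumber]/steps/[step_name]/logs/stdio.

    builder_name and step_name are percent-encoded since builder names
    often contain spaces and parentheses (e.g. "Linux Perf (1)").
    """
    return "%sbuilders/%s/builds/%s/steps/%s/logs/stdio" % (
        buildbot_url.rstrip("/") + "/",
        quote(builder_name),
        build_number,
        quote(step_name))
```

Applied to the example URL from earlier in the thread, `stdio_url("http://build.chromium.org/p/chromium.perf/", "Linux Perf (1)", 20560, "scrolling_benchmark")` reproduces the scrolling_benchmark stdio link.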
On 2013/02/26 22:28:12, sullivan wrote:
> Still need answers to those questions, but in the meantime refactored to have a basic cache implementation.
>
> At this point I should also implement a unit test--I would put it in slave/unittests and use mox to mock out the file and http access?

Cache looks good, although I realize this is more of a journal than a cache.

So far we've been using mock instead of mox, so you should probably stick with that unless you *really* want to use mox. Some light examples of unittests with mock are in scripts/master/unittests/annotator_test.py or scripts/master/unittests/gatekeeper_test.py.
https://codereview.chromium.org/12317053/diff/7003/scripts/slave/results_dash...
File scripts/slave/results_dashboard.py (right):

scripts/slave/results_dashboard.py:15: CACHE_FILE = "/tmp/my_log_file"
In a later version this will be build_dir/results_cache or something?
+simonjam Thanks for the feedback, Mike! Still working through all the changes, but I'm having trouble getting a good unit test written, either with mock or mox (which is used in most of the slave unittests). I uploaded a version that shows what I'm doing; if you or James have opinions on a better way to write the test, I'd love to hear them! Thanks, Annie
I think the test mocks out too much. In a Google unit test, we'd set up a temp directory and just let it do its normal IO to that. I don't know if there are any functions in Chromium that can help with this, but you should just be able to use the tempfile library to set up a directory. Remember to remove it when your test is done. Also, you can let the test create a real urllib2.Request. You just need to verify its .data is correct when passed into urlopen(). So, the only thing I think you need to mock out is urlopen(). And the only additional thing you need to do is tell the script to use the temp directory during tests. You can pass it in to SendResults(), or I think reaching in and changing the constant works.
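The structure James describes (a real temp directory for file IO, with only urlopen() mocked and the real Request inspected) might look roughly like this. The `SendResults` stand-in below is purely illustrative, not the actual module under review:

```python
import json
import os
import tempfile
import unittest
from unittest import mock

def SendResults(data, url, cache_dir):
    """Illustrative stand-in for the function under test: POSTs data
    as JSON to the dashboard. The real signature may differ."""
    import urllib.request
    req = urllib.request.Request(url, json.dumps(data).encode())
    urllib.request.urlopen(req)

class SendResultsTest(unittest.TestCase):
    def setUp(self):
        # A real directory, so the code does its normal file IO.
        self.cache_dir = tempfile.mkdtemp()

    def tearDown(self):
        # Remove the temp directory when the test is done.
        for name in os.listdir(self.cache_dir):
            os.remove(os.path.join(self.cache_dir, name))
        os.rmdir(self.cache_dir)

    @mock.patch("urllib.request.urlopen")
    def test_sends_expected_payload(self, mock_urlopen):
        SendResults({"rev": 183839}, "http://example.com/add_point",
                    self.cache_dir)
        # Only urlopen is mocked; a real urllib2/urllib Request was
        # built, so verify its .data is what we expect.
        request = mock_urlopen.call_args[0][0]
        self.assertEqual(json.loads(request.data), {"rev": 183839})
```

(The thread's code is Python 2 with urllib2; the sketch uses the Python 3 spellings, but the shape -- tempfile for IO, a single mock on urlopen, assertions on the captured Request -- is the same.)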
Thanks, James! I used a temp directory for the unit tests and it's much cleaner now. I think this is ready for full review now. Some notes:

1. There is still a lot of duplicated data in the unit tests. I left it this way for two reasons:
   a. I think it's easier to understand/fix an individual test when you can see all the input and output in one method, instead of scrolling to the top to piece together how the input was generated.
   b. The more programmatically I generate input and expected output, the closer my unit test comes to duplicating the code of the function under test.
   That said, I'm not opposed to factoring out some of the common input/output if you think it's too verbose.
2. I did end up using mox to mock out the url opening. I tried to use mock, but I somehow have a version < 0.8 on my machine, and I couldn't get multiple calls to urlopen with potentially different side effects (which include raising exceptions) to work; side_effect as an array was introduced in 0.8. I know I could just upgrade, but I think the tests should work on default gPrecise. mox is already used in slave/unittests/get_swarm_results_test.py.
3. I added a default argument to slave_utils.GetActiveMaster() for local testing. I wasn't sure if this is what you meant about adding an argument for local testing; let me know if you wanted something else.
4. I decided to implement the stdio urls in a follow-up change.
https://codereview.chromium.org/12317053/diff/16002/scripts/slave/unittests/r...
File scripts/slave/unittests/results_dashboard_test.py (right):

scripts/slave/unittests/results_dashboard_test.py:52: cache_file.write("\n".join(cached_json))
I don't think there's anything that tests that the script can read and send the right things after it's written them to a file. So instead of seeding this file from the test, why don't you call SendResults() twice in a single test, but have the first one fail, so it's forced to reuse what it's written.

scripts/slave/unittests/results_dashboard_test.py:69: self.assertEqual(expected_cache, actual_cache)
And then you wouldn't need to test this. Just make sure the second call to urlopen() has what you want.
Thanks, James! Unit tests updated.

https://codereview.chromium.org/12317053/diff/16002/scripts/slave/unittests/r...
File scripts/slave/unittests/results_dashboard_test.py (right):

scripts/slave/unittests/results_dashboard_test.py:52: cache_file.write("\n".join(cached_json))
On 2013/03/01 21:21:49, James Simonsen wrote:
> So instead of seeding this file from the test, why don't you call SendResults() twice in a single test, but have the first one fail, so it's forced to reuse what it's written.
Done.

scripts/slave/unittests/results_dashboard_test.py:69: self.assertEqual(expected_cache, actual_cache)
On 2013/03/01 21:21:49, James Simonsen wrote:
> And then you wouldn't need to test this. Just make sure the second call to urlopen() has what you want.
Done.
https://codereview.chromium.org/12317053/diff/21001/scripts/slave/unittests/r...
File scripts/slave/unittests/results_dashboard_test.py (right):

scripts/slave/unittests/results_dashboard_test.py:290: actual_cache = cache_file.read()
I was hoping you'd call SendResults again here, but this time let it succeed and expect that you get the same JSON again. And/or a test that adds a new data point and sends the combined list. That test would be more representative of what we want to see.
https://codereview.chromium.org/12317053/diff/21001/scripts/slave/unittests/r...
File scripts/slave/unittests/results_dashboard_test.py (right):

scripts/slave/unittests/results_dashboard_test.py:290: actual_cache = cache_file.read()
On 2013/03/04 19:03:24, James Simonsen wrote:
> I was hoping you'd call SendResults again here, but this time let it succeed and expect that you get the same JSON again.

I do call SendResults() twice and check that it sends the combined result in test_FailureRetried() above.

I wanted to check the cache here because it is important that we actually do store the cache in an expected location in case there is some problem that requires human intervention. For example, test results from dozens of runs pile up because a new bot came online and we forgot to add its IP to the whitelist, and now we have a ton of data and want to manually make sure it all gets sent.
lgtm

On 2013/03/04 20:00:57, sullivan wrote:
> I wanted to check the cache here because it is important that we actually do store the cache in an expected location in case there is some problem that requires human intervention.

Okay, I wouldn't bother checking the content then. You can just verify the file is there. For instance, if we decide to change the file format, you shouldn't need to update the tests.
lgtm
Probably should change the title and description before submitting.
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sullivan@chromium.org/12317053/21001
On 2013/03/04 20:36:54, Mike Stipicevic wrote:
> Probably should change the title and description before submitting.

Done.
Presubmit check for 12317053-21001 failed and returned exit status 1.

INFO:root:Found 4 file(s).
INFO:PRESUBMIT:Running pylint on 306 files
48 masters succeeded, 0 failed, 12 skipped in 186.4s.
Trying master.chromium
Trying master.chromium.chrome
Trying master.chromium.chromebot
Trying master.chromium.chromiumos
Trying master.chromium.endure
Trying master.chromium.flaky
Trying master.chromium.fyi
Trying master.chromium.git
Trying master.chromium.gpu
Trying master.chromium.gpu.fyi
Trying master.chromium.linux
Trying master.chromium.lkgr
Trying master.chromium.mac
Trying master.chromium.memory
Trying master.chromium.memory.fyi
Trying master.chromium.perf
Trying master.chromium.perf_av
Trying master.chromium.pyauto
Trying master.chromium.swarm
Trying master.chromium.webkit
Trying master.chromium.webrtc
Trying master.chromium.webrtc.fyi
Trying master.chromium.win
Trying master.chromiumos
Trying master.client.dart
Trying master.client.dart.fyi
Trying master.client.drmemory
Trying master.client.dynamorio
Trying master.client.libjingle
Trying master.client.libyuv
Trying master.client.nacl
Trying master.client.nacl.chrome
Trying master.client.nacl.llvm
Trying master.client.nacl.ports
Trying master.client.nacl.ragel
Trying master.client.nacl.sdk
Trying master.client.nacl.sdk.addin
Trying master.client.nacl.sdk.mono
Trying master.client.nacl.toolchain
Trying master.client.omaha
Trying master.client.pagespeed
Trying master.client.v8
Trying master.client.webrtc
Trying master.devtools
Trying master.tryserver.chromium
Trying master.tryserver.chromium.linux
Trying master.tryserver.nacl
Trying master.tryserver.webrtc
/b/commit-queue/workdir/tools/build/third_party/zope/__init__.py:19: UserWarning: Module mock was already imported from /b/commit-queue/workdir/tools/build/third_party/mock-1.0.1/mock.py, but /usr/local/lib/python2.6/dist-packages/mock-1.0b1-py2.6.egg is being added to sys.path
test_android (__main__.TestMailNotifier) ... ok
test_cq_succeed (__main__.TestMailNotifier) ... ok
test_simple (__main__.TestMailNotifier) ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.252s

OK
Parsing client.v8
Parsing chromium.endure
Parsing chromium.memory
Skipping chromium.swarm, fix and enable in masters_cfg_test.py!
Skipping client.nacl.chrome, fix and enable in masters_cfg_test.py!
Parsing experimental
Parsing chromium.win
Parsing client.skia
Parsing chromium
Parsing tryserver.webrtc
WARNING: Passed "src/build" as --build-dir option on linux. This is almost certainly incorrect. Assuming you meant "src/sconsbuild"
Parsing client.sfntly
Parsing chromium.webrtc
Parsing client.dart.fyi
Parsing chromium.webrtc.fyi
Parsing client.pagespeed
Parsing chromium.pyauto
Parsing client.libjingle
Parsing tryserver.chromium
… (message too large)
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sullivan@chromium.org/12317053/18009
Message was sent while issue was closed.
Change committed as 185969