Created: 7 years, 1 month ago by bajones
Modified: 7 years, 1 month ago
CC: chromium-reviews, chrome-speed-team+watch_google.com, telemetry+watch_chromium.org
Base URL: svn://svn.chromium.org/chrome/trunk/src
Visibility: Public
Description: Changed logging of expected failures from warning to info
R=dtu@chromium.org, kbr@chromium.org
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=233124
Patch Set 1
Patch Set 2: Updated to only affect tests with expectations

Messages
Total messages: 14 (0 generated)
Having these as warnings adds noise to the waterfall output: skipped tests and expected failures turn the step orange and show the test names. This obscures which tests are actually failing when unexpected failures do occur.
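The distinction matters because the buildbot log scanner keys on warning-level lines when deciding whether to color a step orange. A minimal sketch of the intended behavior (the function and parameter names here are hypothetical illustrations, not the actual Telemetry API):

```python
import logging

def report_result(test_name, passed, is_expected_failure):
    """Log a test outcome at a severity matching how surprising it is.

    Expected failures and skips are routine, so they go to INFO; only
    unexpected failures are logged at WARNING, which is the level the
    waterfall highlights.
    """
    if passed:
        logging.info('PASS: %s', test_name)
    elif is_expected_failure:
        # Routine: the expectations file predicted this failure.
        logging.info('Expected failure: %s', test_name)
    else:
        # Surprising: surface it so the waterfall calls it out.
        logging.warning('Unexpected failure: %s', test_name)
```

With a handler filtering at WARNING (as the log scanner effectively does), only the unexpected failure is surfaced.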
lgtm
Why does logging output affect the results of the run? It seems brittle that just changing the logging level makes a difference.
On 2013/11/05 19:24:39, dtu wrote:
> Why does logging output affect the results of the run? It seems brittle that
> just changing the logging level makes a difference.

It's because of how runtest.py parses the gtest-style output. Changing that behavior is not up for consideration.
On 2013/11/05 19:24:39, dtu wrote:
> Why does logging output affect the results of the run? It seems brittle that
> just changing the logging level makes a difference.

It doesn't change the success/fail state of the run. It can turn the bot step orange to indicate that a warning was output, but that's not breaking behavior. The issue is that when both warnings and failures are output, the waterfall does a very poor job of differentiating which is which. Really this is a cosmetic change, but it's one that will help bot failures be debugged more efficiently.
On 2013/11/05 19:26:04, Ken Russell wrote:
> On 2013/11/05 19:24:39, dtu wrote:
> > Why does logging output affect the results of the run? It seems brittle that
> > just changing the logging level makes a difference.
>
> It's because of how runtest.py parses the gtest-style output. Changing that
> behavior is not up for consideration.

Is that different from the parser for telemetry_unittests? Those are running on the waterfall with gtest-style output and verbose logging.
Okay, my main objection is just that I want failures to show a stack trace with default options, so that people running benchmarks at their desk can diagnose without having to re-run. Will it work for you to move that logging message into the AddFailure branch?
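What Ken is asking for could look something like the following sketch, assuming a results collector with an `AddFailure`-style method (the class and method names here are illustrative, not the real Telemetry handler):

```python
import logging
import traceback

class PageTestResults(object):
    """Toy results collector illustrating the requested split: the stack
    trace stays on the unexpected-failure path so a local run can be
    diagnosed without re-running, while expected failures are logged
    quietly at INFO."""

    def __init__(self):
        self.failures = []

    def AddFailure(self, test_name, exc_info):
        # Unexpected failure: keep the stack trace visible by default.
        self.failures.append(test_name)
        logging.warning('Failure in %s:\n%s', test_name,
                        ''.join(traceback.format_exception(*exc_info)))

    def AddExpectedFailure(self, test_name):
        # Expected failure: record nothing alarming; log at INFO only.
        logging.info('Expected failure: %s', test_name)
```

The point of the split is that the default (non-verbose) log level still shows the traceback for genuinely unexpected failures.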
On 2013/11/05 19:30:09, dtu wrote:
> On 2013/11/05 19:26:04, Ken Russell wrote:
> > On 2013/11/05 19:24:39, dtu wrote:
> > > Why does logging output affect the results of the run? It seems brittle that
> > > just changing the logging level makes a difference.
> >
> > It's because of how runtest.py parses the gtest-style output. Changing that
> > behavior is not up for consideration.
>
> Is that different from the parser for telemetry_unittests? Those are running on
> the waterfall with gtest-style output and verbose logging.

The gpu tests are running the same way (gtest output, -v).

Looking at the code, I could make a small change so that this only affects tests using expectations. Would that be better?
On 2013/11/05 19:36:39, bajones wrote: > On 2013/11/05 19:30:09, dtu wrote: > > On 2013/11/05 19:26:04, Ken Russell wrote: > > > On 2013/11/05 19:24:39, dtu wrote: > > > > Why does logging output affect the results of the run? It seems brittle > that > > > > just changing the logging level makes a difference. > > > > > > It's because of how runtest.py parses the gtest-style output. Changing that > > > behavior is not up for consideration. > > > > Is that different from the parser for telemetry_unittests? Those are running > on > > the waterfall with gtest-style output and verbose logging. > > The gpu tests are running the same way (gtest output, -v) > > Looking at the code I could make a small change that would make it so this > change only affects tests using expectations. Would that be better? Yes, please :)
On 2013/11/05 19:42:55, bajones wrote:
> Yes, please :)

Done.
lgtm
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/bajones@chromium.org/60473002/120001
Step "update" is always a major failure. Look at the try server FAQ for more details. http://build.chromium.org/p/tryserver.chromium/buildstatus?builder=linux_rel&...
Message was sent while issue was closed.
Committed patchset #2 manually as r233124 (presubmit successful).