Description: Outputting chart json for disabled tests.
BUG=chromium:647714
Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapult/+/28c78c375477cdf8bf196b95193977a014a4e23c
Patch Set 1 #
Total comments: 7
Patch Set 2 : Responding to review comments #
Total comments: 6
Patch Set 3 : Responding to review comments #
Patch Set 4 : Changing the right json_output_formatter #
Total comments: 2
Patch Set 5 : Changing default behavior #
Patch Set 6 : Reverting file #
Total comments: 4
Patch Set 7 : Review comments #
Total comments: 2
Patch Set 8 : Setting arg explicitly #
Messages
Total messages: 26 (11 generated)
The CQ bit was checked by eyaich@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
eyaich@chromium.org changed reviewers: + nednguyen@google.com, perezju@chromium.org
Description was changed from ========== Outputting chart json for disabled tests. BUG=chromium:633253 ========== to ========== Outputting chart json for disabled tests. BUG=chromium:633253 ==========
nednguyen@google.com changed reviewers: + eakuefner@chromium.org
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
File telemetry/telemetry/internal/results/chart_json_output_formatter.py (right):

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
telemetry/telemetry/internal/results/chart_json_output_formatter.py:88: result_dict = {
Should this also include format_version? Not sure what "next_version" is, but maybe that one too?

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
telemetry/telemetry/internal/results/chart_json_output_formatter.py:112: json.dump(results, self.output_stream, indent=2)
nit: now that we're doing this, could you also add the option separators=(',', ': ')? Otherwise we end up with trailing spaces in the resulting json.

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
telemetry/telemetry/internal/story_runner.py:296: benchmark.GetMetadata(), finder_options)
I think this is only one of the three places where we check for and skip running disabled tests. See:
https://cs.chromium.org/search/?q=options%5C.run_disabled_tests+file:%5C.py$&...
It would be great if that logic could be unified and put in a single place, but I'm not quite sure how easy/hard that would be.
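The trailing-whitespace nit above can be seen directly: when `json.dump` is given `indent` but default separators, Python 2 (which telemetry targeted at the time) emitted a space after every comma, including at end-of-line. Passing `separators=(',', ': ')` pins the item separator to a bare comma. A minimal sketch, using a hypothetical stand-in for the chartjson payload:

```python
import io
import json

# Hypothetical stand-in for the chartjson payload being serialized.
results = {'format_version': '0.1',
           'charts': {'metric': {'summary': {'values': [1, 2]}}}}

stream = io.StringIO()
# separators=(',', ': ') makes the item separator a bare comma, so no
# line ends with a trailing space regardless of Python version.
json.dump(results, stream, indent=2, separators=(',', ': '))
output = stream.getvalue()
```

Round-tripping through `json.loads` confirms the payload is unchanged; only the whitespace is affected.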
https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
File telemetry/telemetry/internal/results/chart_json_output_formatter.py (right):

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
telemetry/telemetry/internal/results/chart_json_output_formatter.py:88: result_dict = {
On 2016/09/16 15:50:01, perezju wrote:
> should this also include format_version? Not sure what "next_version" is, but
> maybe that one too?

I don't know why any additional information would be necessary for a disabled test. Now we completely ignore them, so this is more informational than anything else. Maybe Ethan or Ned has other thoughts.

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
telemetry/telemetry/internal/results/chart_json_output_formatter.py:112: json.dump(results, self.output_stream, indent=2)
On 2016/09/16 15:50:01, perezju wrote:
> nit: now that we're doing this, could you also add the option: separators=(',',
> ': ')
>
> otherwise we end up with trailing spaces in the resulting json.

Done.

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
telemetry/telemetry/internal/story_runner.py:296: benchmark.GetMetadata(), finder_options)
On 2016/09/16 15:50:01, perezju wrote:
> I think this is only one of the three places where we check for and skip running
> disabled tests.
>
> See:
> https://cs.chromium.org/search/?q=options%5C.run_disabled_tests+file:%5C.py$&...
>
> It would be great if that logic could be unified and put in a single place, but
> I'm not quite sure how easy/hard that would be.

Thanks for pointing that out! I wasn't aware of them. These places are slightly different in that they should just pass through; then this method in RunBenchmark will get executed to generate the chartjson and return here.

Therefore, this is similar to crrev.com/2342023005 in that I want to do it in the following order:
1) Update this to generate the new chartjson
2) Update all recipes to read this new chartjson
3) Update all telemetry logic to continue executing (in dual_browser_story and shared_page_state) when a test is disabled
4) Have the StoryRunner return 0 instead of 1 (original CL crrev.com/2342023005)

I will create a separate bug off the swarming perf waterfall bug, since this is a separate issue, and have the master one blocked on it. After this is completed, is there a reason to even keep the run_disabled_tests flag, since we will always be running them?
Description was changed from ========== Outputting chart json for disabled tests. BUG=chromium:633253 ========== to ========== Outputting chart json for disabled tests. BUG=chromium:647714 ==========
non-owner lgtm

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/1/telemetry/telemetry/interna...
telemetry/telemetry/internal/story_runner.py:296: benchmark.GetMetadata(), finder_options)
On 2016/09/16 17:03:40, eyaich1 wrote:
> 1) Update this to generate new chartjson
> 2) Update all recipes to read this new chart json
> 3) Update all telemetry logic to continue executing (in dual_browser_story and
> shared_page_state) when a test is disabled
> 4) Have the StoryRunner to return 0 instead of 1 (original CL
> crrev.com/2342023005)

sgtm

> After this is completed, is there a reason to even keep the run_disabled_tests
> flag since we will be always running them?

I think the flag is still needed, as it would mean "really run them" and not just "output the empty chartjson".
https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
File telemetry/telemetry/internal/results/chart_json_output_formatter.py (right):

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
telemetry/telemetry/internal/results/chart_json_output_formatter.py:72: 'disabled': False
On second thought, should we change this to 'enabled': True/False, so that boolean tests become more straightforward to read? (e.g. avoid having to write the double negative "not disabled").
https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
File telemetry/telemetry/internal/results/results_options.py (right):

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
telemetry/telemetry/internal/results/results_options.py:120: if 'chartjson' in options.output_formats:
We should use polymorphism to handle this instead.

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
telemetry/telemetry/internal/story_runner.py:295: results_options.FormatDisabledResults(
Instead of using results_options, I think it would fit better to move the logic to the results object, i.e.:

with results_options.CreateResults(benchmark_metadata, finder_options,
    benchmark.ValueCanBeAddedPredicate) as results:
  results.FormatDisabledResults()
  return 1

This would allow different output formatters to define how they want to format the disabled benchmark results. You can leave everything empty for now and just implement it in chart_json_output_formatter.
https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
File telemetry/telemetry/internal/results/chart_json_output_formatter.py (right):

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
telemetry/telemetry/internal/results/chart_json_output_formatter.py:72: 'disabled': False
On 2016/09/22 08:42:07, perezju wrote:
> On a second thought, should we change this to 'enabled': True/False; so that
> boolean tests become more straightforward to read? (e.g. avoid having to write
> the double-negative "not disabled").

Done.

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
File telemetry/telemetry/internal/results/results_options.py (right):

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
telemetry/telemetry/internal/results/results_options.py:120: if 'chartjson' in options.output_formats:
On 2016/09/22 13:24:33, nednguyen wrote:
> We should use polymorphism to handle this instead

Done.

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/20001/telemetry/telemetry/int...
telemetry/telemetry/internal/story_runner.py:295: results_options.FormatDisabledResults(
On 2016/09/22 13:24:34, nednguyen wrote:
> Instead of using results_options, I think it would make fit better to move the
> logic to the results object, i.e.:
>
> with results_options.CreateResults(benchmark_metadata, finder_options,
>     benchmark.ValueCanBeAddedPredicate) as results:
>   results.FormatDisabledResults()
>   return 1
>
> This would allow different output formatters to define how they want to format
> the disabled benchmark results. You can leave everything empty for now, and just
> implement it in chart_json_output_formatter

Done.
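The polymorphism suggestion amounts to giving the output-formatter base class a no-op hook that only the chartjson formatter overrides, so callers stop branching on "if 'chartjson' in output_formats". A simplified sketch (the class and method names follow the review discussion, but the bodies are illustrative, not telemetry's actual implementation):

```python
import io
import json


class OutputFormatter(object):
    """Simplified stand-in for telemetry's output formatter base class."""

    def __init__(self, output_stream):
        self._output_stream = output_stream

    def FormatDisabledResults(self, benchmark_name):
        # Default: most formatters emit nothing for a disabled benchmark,
        # so a no-op ("pass") base implementation suits them.
        pass


class ChartJsonOutputFormatter(OutputFormatter):
    def FormatDisabledResults(self, benchmark_name):
        # Emit a minimal chartjson payload marking the benchmark disabled.
        # The 'enabled' key follows the reviewer's suggestion to avoid
        # double-negative checks like "not disabled".
        result_dict = {
            'benchmark_name': benchmark_name,
            'enabled': False,
        }
        json.dump(result_dict, self._output_stream,
                  indent=2, separators=(',', ': '))


# Callers no longer inspect the output format; each formatter decides
# for itself what a disabled run looks like.
stream = io.StringIO()
for formatter in [OutputFormatter(io.StringIO()),
                  ChartJsonOutputFormatter(stream)]:
    formatter.FormatDisabledResults('page_cycler')
```

The benchmark name `'page_cycler'` is just a placeholder; any benchmark passed through would be serialized the same way.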
https://codereview.chromium.org/2342023005/diff/60001/telemetry/telemetry/int...
File telemetry/telemetry/internal/results/output_formatter.py (right):

https://codereview.chromium.org/2342023005/diff/60001/telemetry/telemetry/int...
telemetry/telemetry/internal/results/output_formatter.py:41: raise NotImplementedError()
nit: I would let this one be "pass" as the default implementation, since that's probably what most output formatters will want to do anyway, and there won't be a need to override it everywhere else.
https://codereview.chromium.org/2342023005/diff/60001/telemetry/telemetry/int...
File telemetry/telemetry/internal/results/output_formatter.py (right):

https://codereview.chromium.org/2342023005/diff/60001/telemetry/telemetry/int...
telemetry/telemetry/internal/results/output_formatter.py:41: raise NotImplementedError()
On 2016/09/26 14:42:35, perezju wrote:
> nit: I would let this one be "pass" as the default implementation, since that's
> probably what most output formatters will want to do anyway, and there won't be
> a need to override everywhere else.

Done.
https://codereview.chromium.org/2342023005/diff/100001/telemetry/telemetry/in...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/100001/telemetry/telemetry/in...
telemetry/telemetry/internal/story_runner.py:298: results.PrintDisabledSummary()
"Disabled" is the state of the results, whereas "PrintSummary" is the way to serialize the state of the results in some output format. Hence, instead of PrintDisabledSummary, I suggest adding a method to change the results' state to indicate that the benchmark is disabled, then calling results.PrintSummary(). The details of what to serialize when the benchmark is disabled are then an implementation detail of PageTestResults.

Actually changing the state of the results in this case helps make sure it has a consistent state reflecting what happened, regardless of the context in which it's used; e.g. results.AddValue(...) should throw an exception if we know that the benchmark is already disabled.

https://codereview.chromium.org/2342023005/diff/100001/telemetry/telemetry/in...
telemetry/telemetry/internal/story_runner.py:299: return 1
return 0 here?
https://codereview.chromium.org/2342023005/diff/100001/telemetry/telemetry/in...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/100001/telemetry/telemetry/in...
telemetry/telemetry/internal/story_runner.py:298: results.PrintDisabledSummary()
On 2016/09/26 16:58:04, nednguyen wrote:
> "Disabled" is the state of the results, whereas "PrintSummary" is the way to
> serialize the state of results in some output format. Hence instead of
> PrintDisabledSummary, I suggest adding a method to change the results' state to
> indicate that the benchmark is disabled, then call results.PrintSummary(). The
> details of what to serialize when the benchmark is disabled then is an
> implementation detail of PageTestResults
>
> Actually changing the state of the results in this case helps make sure that it
> has a consistent state reflecting what happened regardless of the contexts how
> it's used, e.g: results.AddValue(..) should throw exception if we know that the
> benchmark is already disabled.

Done.

https://codereview.chromium.org/2342023005/diff/100001/telemetry/telemetry/in...
telemetry/telemetry/internal/story_runner.py:299: return 1
On 2016/09/26 16:58:04, nednguyen wrote:
> return 0 here?

Not yet. Other CLs that read this new chartjson have to land before we can return 0.
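The state-tracking suggestion above can be sketched as: the results object records whether the benchmark is enabled, AddValue fails loudly on a disabled run, and serialization stays in PrintSummary. The names mirror the review discussion (PageTestResults, benchmark_enabled, AddValue, PrintSummary), but the implementation below is a hypothetical minimal version, not telemetry's actual class:

```python
class PageTestResults(object):
    """Minimal sketch of disabled-state tracking; not telemetry's real class."""

    def __init__(self, output_formatters=None):
        self._output_formatters = output_formatters or []
        self._benchmark_enabled = True  # callers set this explicitly (patch set 8)
        self._values = []

    @property
    def benchmark_enabled(self):
        return self._benchmark_enabled

    @benchmark_enabled.setter
    def benchmark_enabled(self, enabled):
        self._benchmark_enabled = enabled

    def AddValue(self, value):
        # Keep the state consistent: a disabled benchmark must not
        # accumulate values, so fail loudly instead of silently accepting.
        if not self._benchmark_enabled:
            raise AssertionError('Cannot add values to a disabled benchmark.')
        self._values.append(value)

    def PrintSummary(self):
        # What a disabled run serializes to is each formatter's concern.
        for formatter in self._output_formatters:
            formatter.Format(self)


# A disabled run rejects new values, reflecting the consistency argument.
results = PageTestResults()
results.benchmark_enabled = False
try:
    results.AddValue(42)
    raised = False
except AssertionError:
    raised = True
```

This makes "benchmark was disabled" a property of the results object itself, so any code path that touches the results sees the same state.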
lgtm

https://codereview.chromium.org/2342023005/diff/120001/telemetry/telemetry/in...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/120001/telemetry/telemetry/in...
telemetry/telemetry/internal/story_runner.py:332: benchmark.ValueCanBeAddedPredicate) as results:
Set benchmark_enabled here to True as well; it's better not to rely on the default value of parameters.

It would be nice if we could merge this with the part above that creates the results option, but it's up to you whether to merge them.
https://codereview.chromium.org/2342023005/diff/120001/telemetry/telemetry/in...
File telemetry/telemetry/internal/story_runner.py (right):

https://codereview.chromium.org/2342023005/diff/120001/telemetry/telemetry/in...
telemetry/telemetry/internal/story_runner.py:332: benchmark.ValueCanBeAddedPredicate) as results:
On 2016/09/26 17:23:11, nednguyen wrote:
> Set benchmark_enabled here to True as well. It's better not to rely on default
> value of parameters
>
> Would be nice if we can merge this with the part of creating results option
> above, but it's up to you whether to merge them.

First part done. I looked at that earlier, but given that PrintSummary is part of the finally block of the upload results, it didn't make sense to wrap it all.
The CQ bit was checked by eyaich@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from perezju@chromium.org, nednguyen@google.com Link to the patchset: https://codereview.chromium.org/2342023005/#ps140001 (title: "Setting arg explicitly")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Message was sent while issue was closed.
Description was changed from ========== Outputting chart json for disabled tests. BUG=chromium:647714 ========== to ========== Outputting chart json for disabled tests. BUG=chromium:647714 Committed: https://chromium.googlesource.com/external/github.com/catapult-project/catapu... ==========
Message was sent while issue was closed.
Committed patchset #8 (id:140001) as https://chromium.googlesource.com/external/github.com/catapult-project/catapu...