Created: 3 years, 9 months ago by ashleymarie1
Modified: 3 years, 9 months ago
Reviewers: nednguyen
CC: chromium-reviews, telemetry-reviews_chromium.org, martiniss
Target Ref: refs/heads/master
Project: chromium
Visibility: Public
Description:
Generating benchmark.csv file
This change enables perf_json_generator to also generate a csv
file with benchmark metadata (name, owners, and component). This
csv will be pulled into go/chrome-benchmarks so this information
will remain accurate and up-to-date.
BUG=chromium:575318
Review-Url: https://codereview.chromium.org/2754883002
Cr-Commit-Position: refs/heads/master@{#459087}
Committed: https://chromium.googlesource.com/chromium/src/+/ff2e6f8d2977066e39d75bf8ffaf0460b9c1a875
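
To make the description concrete, here is a minimal sketch of the kind of CSV generation being added, assuming a BenchmarkMetadata namedtuple (per the review below) and Python 2 file handling to match the CL's era; the column headers and helper signature are illustrative assumptions, not the CL's actual code.

    # Illustrative sketch only; the real logic lives in
    # tools/perf/core/perf_data_generator.py and may differ.
    import csv
    from collections import namedtuple

    BenchmarkMetadata = namedtuple('BenchmarkMetadata', 'emails component')

    def update_benchmark_csv(benchmark_metadata, csv_path):
      """Writes one row per benchmark: name, owner emails, bug component."""
      rows = [['AUTOGENERATED FILE DO NOT EDIT'],
              ['Benchmark name', 'Individual owners', 'Component']]
      for name in sorted(benchmark_metadata):
        meta = benchmark_metadata[name]
        rows.append([name, meta.emails, meta.component])
      with open(csv_path, 'wb') as f:  # Python 2 csv idiom
        csv.writer(f).writerows(rows)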
Patch Set 1 #
Total comments: 3
Patch Set 2 : Removing renaming changes to a different change #
Patch Set 3 : Fixing silly Python mistake #
Total comments: 8
Patch Set 4 : Generating benchmark.csv file #
Total comments: 6
Patch Set 5 : Verifying all tests are in benchmark csv #
Patch Set 6 : Generating benchmark.csv file #
Total comments: 11
Patch Set 7 : Responding to Ned's comments #
Patch Set 8 : Adding unittests #
Patch Set 9 : Generating benchmark.csv file #
Total comments: 5
Patch Set 10 : verifying tests during validate only as well #
Total comments: 2
Patch Set 11 : Generating benchmark.csv file #
Messages
Total messages: 40 (17 generated)
The CQ bit was checked by ashleymarie@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
ashleymarie@chromium.org changed reviewers: + nednguyen@google.com
I've got a few questions in benchmark.csv and perf_data_generator.py but it's generally ready for review :)

https://codereview.chromium.org/2754883002/diff/1/tools/perf/benchmark.csv
File tools/perf/benchmark.csv (right):

https://codereview.chromium.org/2754883002/diff/1/tools/perf/benchmark.csv#ne...
tools/perf/benchmark.csv:1: angle_perftests,jmadill@chromium.org,
Is it worth including the headers for the csv in here? Or just permanently leaving them in go/chrome-benchmarks?

https://codereview.chromium.org/2754883002/diff/1/tools/perf/core/perf_data_g...
File tools/perf/core/perf_data_generator.py (right):

https://codereview.chromium.org/2754883002/diff/1/tools/perf/core/perf_data_g...
tools/perf/core/perf_data_generator.py:7: the src/testing/buildbot directory and benchmark.csv in the src/tools/perf
Is there a better directory for benchmark.csv to end up in?

https://codereview.chromium.org/2754883002/diff/1/tools/perf/core/perf_data_g...
tools/perf/core/perf_data_generator.py:874: update_benchmark_csv()
Should I add an argument to not update the benchmark csv? Or should it just always update when this script is run?
On 2017/03/16 17:09:49, ashleymarie1 wrote:
> I've got a few questions in benchmark.csv and perf_data_generator.py but it's
> generally ready for review :)
> [...]

Can you make a separate CL about renaming generate_perf_data/perf_data_generator? That would make it easier for me to review the CL.
On 2017/03/16 17:10:48, nednguyen wrote:
> Can you make a separate CL about renaming
> generate_perf_data/perf_data_generator? That would make it easier for me to
> review the CL.

Great idea! I'll do that now :)
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: chromium_presubmit on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...)
On 2017/03/16 17:11:46, ashleymarie1 wrote:
> Great idea! I'll do that now :)

Working on a separate CL for renaming, but the important logic is split out into this change. Please take another look.
The CQ bit was checked by ashleymarie@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: chromium_presubmit on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...)
The CQ bit was checked by ashleymarie@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
https://codereview.chromium.org/2754883002/diff/40001/tools/perf/benchmark.csv
File tools/perf/benchmark.csv (right):

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/benchmark.cs...
tools/perf/benchmark.csv:1: angle_perftests,jmadill@chromium.org,
Can you make the first row the columns' names?

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:820: MANUAL_BENCHMARKS = [
Let's name this NON_TELEMETRY_BENCHMARKS. Also, can you make it a dictionary whose keys are benchmark names and whose values are namedtuples with two fields: emails & component?

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:865: 'anytime you add/remove any existing benchmarks in '
The description needs an update.

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:874: update_benchmark_csv()
If the --validate-only flag is enabled, we should not modify files.
https://codereview.chromium.org/2754883002/diff/40001/tools/perf/benchmark.csv
File tools/perf/benchmark.csv (right):

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/benchmark.cs...
tools/perf/benchmark.csv:1: angle_perftests,jmadill@chromium.org,
On 2017/03/16 19:11:18, nednguyen wrote:
> Can you make the first row the columns' names?

Done.

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:820: MANUAL_BENCHMARKS = [
On 2017/03/16 19:11:18, nednguyen wrote:
> Let's name this NON_TELEMETRY_BENCHMARKS. Also, can you make it a dictionary
> whose keys are benchmark names and whose values are namedtuples with two
> fields: emails & component?

Done. I had never heard of namedtuple before today! Thanks for teaching me something new :)

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:865: 'anytime you add/remove any existing benchmarks in '
On 2017/03/16 19:11:18, nednguyen wrote:
> The description needs an update.

Done.

https://codereview.chromium.org/2754883002/diff/40001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:874: update_benchmark_csv()
On 2017/03/16 19:11:18, nednguyen wrote:
> If the --validate-only flag is enabled, we should not modify files.

Done.
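
For illustration, the dictionary-of-namedtuples shape requested above might look like the sketch below; angle_perftests and its owner appear elsewhere in this review, while the second entry's owner is a hypothetical placeholder.

    from collections import namedtuple

    # Two fields, as requested in the review: owner emails and bug component.
    BenchmarkMetadata = namedtuple('BenchmarkMetadata', 'emails component')

    NON_TELEMETRY_BENCHMARKS = {
        'angle_perftests': BenchmarkMetadata('jmadill@chromium.org', None),
        # The owner below is a hypothetical placeholder, not from the CL.
        'tracing_perftests': BenchmarkMetadata('someone@chromium.org', None),
    }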
https://codereview.chromium.org/2754883002/diff/60001/tools/perf/benchmark.csv
File tools/perf/benchmark.csv (right):

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/benchmark.cs...
tools/perf/benchmark.csv:1: AUTOGENERATED FILE DO NOT EDIT
Not sure whether this will mess up the spreadsheet import. I guess we have to try it out.

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:823: NON_TELEMETRY_BENCHMARKS = {
My concern here is that there is no way to make sure that this list stays in sync with the manual list people put in the json config file. For example, if people remove tracing_perftests at line 85 in this file, how do we make sure that they don't forget to also remove tracing_perftests here?
https://codereview.chromium.org/2754883002/diff/60001/tools/perf/benchmark.csv
File tools/perf/benchmark.csv (right):

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/benchmark.cs...
tools/perf/benchmark.csv:1: AUTOGENERATED FILE DO NOT EDIT
On 2017/03/16 21:08:17, nednguyen wrote:
> Not sure whether this will mess up the spreadsheet import. I guess we have to
> try it out.

I don't think it will mess up the spreadsheet import. I tried copy-pasting this into Sheets using Paste Special -> Paste CSV as columns and it worked fine, so I think this should work with the import function.

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:823: NON_TELEMETRY_BENCHMARKS = {
On 2017/03/16 21:08:17, nednguyen wrote:
> My concern here is that there is no way to make sure that this list stays in
> sync with the manual list people put in the json config file. For example, if
> people remove tracing_perftests at line 85 in this file, how do we make sure
> that they don't forget to also remove tracing_perftests here?

I, too, am concerned about that. I'm having trouble finding any other way to compile this information, though, since not all the non-Telemetry benchmarks are in SCRIPT_TESTS or any other list as far as I've seen. Any ideas for how to make this more robust?
https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:823: NON_TELEMETRY_BENCHMARKS = {
On 2017/03/16 21:18:02, ashleymarie1 wrote:
> I, too, am concerned about that. I'm having trouble finding any other way to
> compile this information, though, since not all the non-Telemetry benchmarks
> are in SCRIPT_TESTS or any other list as far as I've seen. Any ideas for how
> to make this more robust?

One idea I have is to make a method that returns a map of all benchmark names to their metadata (Telemetry + hard-coded ones). Then somewhere you can make an assert statement that every benchmark to be generated in the perf waterfall json config must have a name matching one of benchmark_names_by_metadata's keys.
https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/60001/tools/perf/core/perf_js...
tools/perf/core/perf_json_generator.py:823: NON_TELEMETRY_BENCHMARKS = {
On 2017/03/16 21:23:43, nednguyen wrote:
> One idea I have is to make a method that returns a map of all benchmark names
> to their metadata (Telemetry + hard-coded ones). Then somewhere you can make
> an assert statement that every benchmark to be generated in the perf
> waterfall json config must have a name matching one of
> benchmark_names_by_metadata's keys.

So this solves the problem of a user adding a test above but failing to add it here. It doesn't solve the reverse problem of removing a test above and failing to remove it here. So we also need to assert that all of benchmark_names_by_metadata's keys are in the benchmark names generated in the perf waterfall json config.

I feel like there should be a cleaner solution here, I just can't think of it :( Maybe add all the benchmark names by metadata to a set, then add all the test names from the json config file to a set, and assert the sets are the same?
On 2017/03/17 15:30:19, ashleymarie1 wrote:
> [...]
> Maybe add all the benchmark names by metadata to a set, then add all the test
> names from the json config file to a set, and assert the sets are the same?

Yes, asserting that the sets of names are the same should fix it.
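
A minimal sketch of the set comparison agreed on here, assuming `tests` can be reduced to a set of benchmark names from the generated waterfall config; the CL's actual verify_all_tests_in_benchmark_csv() handles more cases (e.g. the 'Android Compile' entry discussed below).

    def verify_all_tests_in_benchmark_csv(tests, benchmark_metadata):
      # A single set-equality assert covers both directions: a benchmark
      # added to the config but missing from the metadata map, and one
      # removed from the config but left behind in the metadata map.
      benchmark_names = set(tests)
      metadata_names = set(benchmark_metadata)
      assert benchmark_names == metadata_names, (
          'Please update NON_TELEMETRY_BENCHMARKS; mismatched names: %s'
          % sorted(benchmark_names ^ metadata_names))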
On 2017/03/17 15:32:02, nednguyen wrote:
> Yes, asserting that the sets of names are the same should fix it.

Alright, I believe this solution should work to verify that no changes need to be made to NON_TELEMETRY_BENCHMARKS. Please take another look, thanks.
Can you add a unittest for the verify_all_tests_in_benchmark_csv method?

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:840: def get_benchmark_metadata():
nits: get_all_benchmarks_metadata()
Also, can you document the data structure it returns?

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:853: def verify_all_tests_in_benchmark_csv(tests):
Make this "def verify_all_tests_in_benchmark_csv(tests, benchmark_metadata)" so it's easily unittestable.

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:864: assert('Android Compile' == t
+1, I like this assertion to make sure that people don't sneak in extra tests in forms we don't know about.
Though can you add a message like: "assert (...), 'Unknown test data %s' % t"

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:873: print 'remove ' + test + ' from NON_TELEMETRY_BENCHMARKS'
nits: "Remove"

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:873: print 'remove ' + test + ' from NON_TELEMETRY_BENCHMARKS'
Instead of printing the message here, you can do:
error_messages += 'Remove ' + test + ' from NON_TELEMETRY_BENCHMARKS'
Then in your assert below, you add:
'Please update NON_TELEMETRY_BENCHMARKS as below:\n' + '\n'.join(error_messages)

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:875: print 'add ' + test + ' to NON_TELEMETRY_BENCHMARKS'
nits: "Add"
I'll work on the unit test now.

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
File tools/perf/core/perf_json_generator.py (right):

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:840: def get_benchmark_metadata():
On 2017/03/21 19:02:04, nednguyen wrote:
> nits: get_all_benchmarks_metadata()
> Also, can you document the data structure it returns?

Done.

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:853: def verify_all_tests_in_benchmark_csv(tests):
On 2017/03/21 19:02:05, nednguyen wrote:
> Make this "def verify_all_tests_in_benchmark_csv(tests, benchmark_metadata)"
> so it's easily unittestable.

Done.

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:864: assert('Android Compile' == t
On 2017/03/21 19:02:05, nednguyen wrote:
> +1, I like this assertion to make sure that people don't sneak in extra tests
> in forms we don't know about.
> Though can you add a message like: "assert (...), 'Unknown test data %s' % t"

Done.

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:873: print 'remove ' + test + ' from NON_TELEMETRY_BENCHMARKS'
On 2017/03/21 19:02:05, nednguyen wrote:
> Instead of printing the message here, you can do:
> error_messages += 'Remove ' + test + ' from NON_TELEMETRY_BENCHMARKS'
> Then in your assert below, you add:
> 'Please update NON_TELEMETRY_BENCHMARKS as below:\n' + '\n'.join(error_messages)

Done.

https://codereview.chromium.org/2754883002/diff/100001/tools/perf/core/perf_j...
tools/perf/core/perf_json_generator.py:875: print 'add ' + test + ' to NON_TELEMETRY_BENCHMARKS'
On 2017/03/21 19:02:05, nednguyen wrote:
> nits: "Add"

Done.
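
The renamed helper and its documented return value might look roughly like this sketch; `current_benchmarks()` is a hypothetical stand-in for however the script enumerates Telemetry benchmark classes, and the `decorators` calls are assumed Telemetry helpers.

    def get_all_benchmarks_metadata(metadata):
      """Returns {benchmark name: BenchmarkMetadata(emails, component)}.

      `metadata` comes in pre-populated with the hard-coded
      NON_TELEMETRY_BENCHMARKS entries; one entry is then added per
      discovered Telemetry benchmark class.
      """
      for benchmark in current_benchmarks():  # hypothetical discovery helper
        metadata[benchmark.Name()] = BenchmarkMetadata(
            decorators.GetEmails(benchmark),     # assumed Telemetry helpers
            decorators.GetComponent(benchmark))
      return metadata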
Unit tests added, PTAL.
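
A unittest along the requested lines might look like this sketch, reusing the simplified set-equality verify function from earlier in the thread; names and data shapes are illustrative, not the CL's actual tests.

    import unittest
    from collections import namedtuple

    BenchmarkMetadata = namedtuple('BenchmarkMetadata', 'emails component')

    def verify_all_tests_in_benchmark_csv(tests, benchmark_metadata):
      # Simplified set-equality version from the earlier sketch.
      assert set(tests) == set(benchmark_metadata), 'Name sets diverged'

    class PerfDataGeneratorTest(unittest.TestCase):

      def testVerifyCatchesBenchmarkMissingFromMetadata(self):
        tests = {'speedometer': {}, 'new_perftests': {}}
        metadata = {'speedometer': BenchmarkMetadata('a@chromium.org', None)}
        with self.assertRaises(AssertionError):
          verify_all_tests_in_benchmark_csv(tests, metadata)

      def testVerifyPassesWhenNamesAreInSync(self):
        tests = {'speedometer': {}}
        metadata = {'speedometer': BenchmarkMetadata('a@chromium.org', None)}
        verify_all_tests_in_benchmark_csv(tests, metadata)  # must not raise

    if __name__ == '__main__':
      unittest.main()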
lgtm

https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
File tools/perf/core/perf_data_generator.py (right):

https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
tools/perf/core/perf_data_generator.py:828: "angle_perftests": BenchmarkMetadata("jmadill@chromium.org", None),
nits: strings should always be single-quoted
https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
File tools/perf/core/perf_data_generator.py (right):

https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
tools/perf/core/perf_data_generator.py:929: if options.validate_only:
Actually, I think this should also check verify_all_tests_in_benchmark_csv().
The reason is that someone can make a CL that only modifies NON_TELEMETRY_BENCHMARKS, and PRESUBMIT would not validate whether NON_TELEMETRY_BENCHMARKS is in sync, because that doesn't change the generated benchmarks.
https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
File tools/perf/core/perf_data_generator.py (right):

https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
tools/perf/core/perf_data_generator.py:828: "angle_perftests": BenchmarkMetadata("jmadill@chromium.org", None),
On 2017/03/21 21:56:03, nednguyen wrote:
> nits: strings should always be single-quoted

Thanks! Updated all the double quotes I added to single quotes. Is there a reason for this, or is it just standard in Chromium to use single-quoted strings?

https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
tools/perf/core/perf_data_generator.py:929: if options.validate_only:
On 2017/03/21 21:59:58, nednguyen wrote:
> Actually, I think this should also check verify_all_tests_in_benchmark_csv().
> The reason is that someone can make a CL that only modifies
> NON_TELEMETRY_BENCHMARKS, and PRESUBMIT would not validate whether
> NON_TELEMETRY_BENCHMARKS is in sync, because that doesn't change the
> generated benchmarks.

Good point! I hadn't thought about that.
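
The control flow agreed on here, sketched with hypothetical helper names (`load_tests`, `tests_are_up_to_date`, `update_waterfall_json`): the benchmark-name verification runs in both modes, while file writes are skipped under --validate-only.

    def main(options):
      tests = load_tests()  # hypothetical: builds the waterfall test config
      metadata = get_all_benchmarks_metadata(dict(NON_TELEMETRY_BENCHMARKS))
      # Run verification unconditionally, so a CL that only edits
      # NON_TELEMETRY_BENCHMARKS is still caught by the PRESUBMIT
      # --validate-only invocation.
      verify_all_tests_in_benchmark_csv(tests, metadata)
      if options.validate_only:
        return 0 if tests_are_up_to_date(tests) else 1
      update_waterfall_json(tests)
      update_benchmark_csv(metadata, 'tools/perf/benchmark.csv')
      return 0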
lgtm

https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
File tools/perf/core/perf_data_generator.py (right):

https://codereview.chromium.org/2754883002/diff/160001/tools/perf/core/perf_d...
tools/perf/core/perf_data_generator.py:828: "angle_perftests": BenchmarkMetadata("jmadill@chromium.org", None),
On 2017/03/22 19:17:11, ashleymarie1 wrote:
> Thanks! Updated all the double quotes I added to single quotes. Is there a
> reason for this, or is it just standard in Chromium to use single-quoted
> strings?

It's just our convention to use single-quoted strings everywhere: https://google.github.io/styleguide/pyguide.html#Strings

https://codereview.chromium.org/2754883002/diff/170001/tools/perf/core/perf_d...
File tools/perf/core/perf_data_generator.py (right):

https://codereview.chromium.org/2754883002/diff/170001/tools/perf/core/perf_d...
tools/perf/core/perf_data_generator.py:808: up_to_date = up_to_date and tests_data == config_data
nits: you can do:
up_to_date &= tests_data == config_data
https://codereview.chromium.org/2754883002/diff/170001/tools/perf/core/perf_d...
File tools/perf/core/perf_data_generator.py (right):

https://codereview.chromium.org/2754883002/diff/170001/tools/perf/core/perf_d...
tools/perf/core/perf_data_generator.py:808: up_to_date = up_to_date and tests_data == config_data
On 2017/03/22 19:22:57, nednguyen wrote:
> nits: you can do:
> up_to_date &= tests_data == config_data

Done.
The CQ bit was checked by ashleymarie@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from nednguyen@google.com
Link to the patchset: https://codereview.chromium.org/2754883002/#ps190001 (title: "Generating benchmark.csv file")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 190001, "attempt_start_ts": 1490275478554780, "parent_rev": "bded4ed2c3d7b914bf8c63e2ce094c473817ddf7", "commit_rev": "ff2e6f8d2977066e39d75bf8ffaf0460b9c1a875"}
Message was sent while issue was closed.
Description was changed from
==========
Generating benchmark.csv file

This change enables perf_json_generator to also generate a csv
file with benchmark metadata (name, owners, and component). This
csv will be pulled into go/chrome-benchmarks so this information
will remain accurate and up-to-date.

BUG=chromium:575318
==========
to
==========
Generating benchmark.csv file

This change enables perf_json_generator to also generate a csv
file with benchmark metadata (name, owners, and component). This
csv will be pulled into go/chrome-benchmarks so this information
will remain accurate and up-to-date.

BUG=chromium:575318
Review-Url: https://codereview.chromium.org/2754883002
Cr-Commit-Position: refs/heads/master@{#459087}
Committed: https://chromium.googlesource.com/chromium/src/+/ff2e6f8d2977066e39d75bf8ffaf...
==========
Message was sent while issue was closed.
Committed patchset #11 (id:190001) as https://chromium.googlesource.com/chromium/src/+/ff2e6f8d2977066e39d75bf8ffaf... |