Chromium Code Reviews

Side by Side Diff: tools/perf/core/perf_data_generator.py

Issue 2878483002: Fix the error message when a new benchmark is added without sharding info (Closed)
Patch Set: Created 3 years, 7 months ago
```python
#!/usr/bin/env python
# Copyright 2016 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

# pylint: disable=too-many-lines

"""Script to generate chromium.perf.json and chromium.perf.fyi.json in
the src/testing/buildbot directory and benchmark.csv in the src/tools/perf
directory. Maintaining these files by hand is too unwieldy.
```
(...skipping 656 matching lines...)
```python
  for benchmark in benchmarks:
    if not ShouldBenchmarkBeScheduled(benchmark, tester_config['platform']):
      continue

    # First figure out swarming dimensions this test needs to be triggered on.
    # For each set of dimensions it is only triggered on one of the devices
    swarming_dimensions = []
    for dimension in tester_config['swarming_dimensions']:
      device = None
```
```diff
       sharding_map = benchmark_sharding_map.get(name, None)
-      if not sharding_map:
-        raise ValueError('No sharding map for benchmark %r found. Please'
-                         ' disable the benchmark with @Disabled(\'all\'), and'
-                         ' file a bug with Speed>Benchmarks>Waterfall'
-                         ' component and cc martiniss@ and nednguyen@ to'
-                         ' execute the benchmark on the waterfall.' % (
-                             name))
-
-      device = sharding_map.get(benchmark.Name(), None)
-
-      if device is None:
-        raise Exception('Device affinity for benchmark %s not found'
-                        % benchmark.Name())
+      device = sharding_map.get(benchmark.Name(), None)
+      if device is None:
+        raise ValueError('No sharding map for benchmark %r found. Please'
+                         ' disable the benchmark with @Disabled(\'all\'), and'
+                         ' file a bug with Speed>Benchmarks>Waterfall'
+                         ' component and cc martiniss@ and nednguyen@ to'
+                         ' execute the benchmark on the waterfall.' % (
+                             benchmark.Name()))
```
```python
      swarming_dimensions.append(get_swarming_dimension(
          dimension, device))

    test = generate_telemetry_test(
        swarming_dimensions, benchmark.Name(), browser_name)
    isolated_scripts.append(test)
    # Now create another executable for this benchmark on the reference browser
    # if it is not blacklisted from running on the reference browser.
    if benchmark.Name() not in benchmark_ref_build_blacklist:
      reference_test = generate_telemetry_test(
```
(...skipping 303 matching lines...) Expand 10 before | Expand all | Expand 10 after
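The hunk above is the substance of this CL: it collapses two failure paths (a missing sharding map and a missing device affinity, the latter previously raising a bare `Exception`) into a single lookup followed by one actionable `ValueError` keyed on `benchmark.Name()`. A minimal standalone sketch of that pattern, with a hypothetical sharding map and helper name (not from the real script):

```python
# Hypothetical example data; the real map is built elsewhere in the script.
benchmark_sharding_map = {
    'builder-1': {'smoothness.top_25': 'build1-a1'},
}

def get_device_affinity(builder_name, benchmark_name):
    """Sketch of the consolidated error path: one lookup, one ValueError
    that names the benchmark and tells the author what to do next."""
    sharding_map = benchmark_sharding_map.get(builder_name, None)
    device = sharding_map.get(benchmark_name, None) if sharding_map else None
    if device is None:
        raise ValueError(
            'No sharding map for benchmark %r found. Please disable the '
            'benchmark and file a bug so it can be added to the waterfall.'
            % benchmark_name)
    return device
```

Folding both checks into `device is None` means a benchmark missing from an existing map and a builder with no map at all produce the same self-explanatory message, instead of the old uninformative `Exception`.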
```python
      return 0
    else:
      print ('The perf JSON config files are not up-to-date. Please run %s '
             'without --validate-only flag to update the perf JSON '
             'configs and benchmark.csv.') % sys.argv[0]
      return 1
  else:
    update_all_tests([fyi_waterfall, waterfall])
    update_benchmark_csv()
    return 0
```
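The tail of the file shows the generator's two modes: with `--validate-only` it exits non-zero when the checked-in JSON configs are stale, otherwise it regenerates them. A hedged sketch of that control flow; the function signature and injected callables are illustrative, not the real script's structure:

```python
def main(argv, configs_up_to_date, update_all_tests, update_benchmark_csv):
    """Sketch of the generate-vs-validate entry point. The three callables
    are injected here for illustration only."""
    if '--validate-only' in argv:
        if configs_up_to_date():
            return 0
        # Stale configs: tell the user how to fix it, fail the check.
        print('The perf JSON config files are not up-to-date. Please run %s '
              'without --validate-only flag to update the perf JSON '
              'configs and benchmark.csv.' % argv[0])
        return 1
    # Default mode: regenerate everything.
    update_all_tests()
    update_benchmark_csv()
    return 0
```

Keeping validation and generation in one script guarantees the presubmit check and the generator can never disagree about what "up-to-date" means.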
