Chromium Code Reviews

Unified Diff: tools/perf/core/perf_data_generator.py

Issue 2854413002: Remove usage of bot_utils.GetDeviceAffinity in perf_data_generator (Closed)
Patch Set: Fix testGenerateTelemetryTestsBlacklistedReferenceBuildTest (created 3 years, 7 months ago)
 #!/usr/bin/env python
 # Copyright 2016 The Chromium Authors. All rights reserved.
 # Use of this source code is governed by a BSD-style license that can be
 # found in the LICENSE file.

 # pylint: disable=too-many-lines

 """Script to generate chromium.perf.json and chromium.perf.fyi.json in
 the src/testing/buildbot directory and benchmark.csv in the src/tools/perf
 directory. Maintaining these files by hand is too unwieldy.
(...skipping 623 matching lines...)
   num_shards = len(tester_config['swarming_dimensions'][0]['device_ids'])
   current_shard = 0
   for benchmark in benchmarks:
     if not ShouldBenchmarkBeScheduled(benchmark, tester_config['platform']):
       continue

     # First figure out swarming dimensions this test needs to be triggered on.
     # For each set of dimensions it is only triggered on one of the devices
     swarming_dimensions = []
     for dimension in tester_config['swarming_dimensions']:
-      device_affinity = None
-      if benchmark_sharding_map:
-        sharding_map = benchmark_sharding_map.get(str(num_shards), None)
-        if not sharding_map:
-          raise Exception('Invalid number of shards, generate new sharding map')
-        device_affinity = sharding_map.get(benchmark.Name(), None)
-      else:
-        # No sharding map was provided, default to legacy device
-        # affinity algorithm
-        device_affinity = bot_utils.GetDeviceAffinity(
-            num_shards, benchmark.Name())
+      sharding_map = benchmark_sharding_map.get(str(num_shards), None)
+      if not sharding_map:
+        raise Exception('Invalid number of shards, generate new sharding map')
+      device_affinity = sharding_map.get(benchmark.Name(), None)
       if device_affinity is None:
         raise Exception('Device affinity for benchmark %s not found'
                         % benchmark.Name())
       swarming_dimensions.append(
           get_swarming_dimension(dimension, device_affinity))

     test = generate_telemetry_test(
         swarming_dimensions, benchmark.Name(), browser_name)
     isolated_scripts.append(test)
     # Now create another executable for this benchmark on the reference browser
(...skipping 337 matching lines...)
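
Note on the chunk above: the new code path requires a pre-generated sharding map. A minimal sketch of the lookup, assuming benchmark_sharding_map is keyed by shard count (as a string) and maps each benchmark name to the shard it is pinned to; the map entries and benchmark names below are invented for illustration, only the lookup logic mirrors the hunk above:

# Illustrative only: the shape is inferred from the .get() calls in the diff;
# the keys and values are made up, not taken from the real sharding map.
benchmark_sharding_map = {
    '21': {  # outer key: shard count, as a string
        'smoothness.top_25_smooth': 3,  # benchmark name -> pinned shard index
        'speedometer': 7,
    },
}

num_shards = 21
benchmark_name = 'speedometer'  # stand-in for benchmark.Name()

sharding_map = benchmark_sharding_map.get(str(num_shards), None)
if not sharding_map:
  raise Exception('Invalid number of shards, generate new sharding map')
device_affinity = sharding_map.get(benchmark_name, None)
if device_affinity is None:
  raise Exception('Device affinity for benchmark %s not found'
                  % benchmark_name)
print(device_affinity)  # -> 7
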
       return 0
     else:
       print ('The perf JSON config files are not up-to-date. Please run %s '
              'without --validate-only flag to update the perf JSON '
              'configs and benchmark.csv.') % sys.argv[0]
       return 1
   else:
     update_all_tests([fyi_waterfall, waterfall])
     update_benchmark_csv()
     return 0
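
For contrast, the fallback this CL deletes computed affinity on the fly via bot_utils.GetDeviceAffinity instead of consulting a map. Its implementation is not shown in this diff; the sketch below assumes a stable hash of the benchmark name taken modulo the shard count, which is one common way to build such a helper, not necessarily what bot_utils actually did:

import hashlib

def legacy_device_affinity(num_shards, benchmark_name):
  # Hypothetical stand-in for the removed bot_utils.GetDeviceAffinity:
  # derive a deterministic shard index from a hash of the benchmark name.
  # The real bot_utils algorithm may differ; this only illustrates why such
  # a fallback needs no pre-generated map.
  digest = hashlib.sha1(benchmark_name.encode('utf-8')).hexdigest()
  return int(digest, 16) % num_shards

The tradeoff the change locks in: an explicit map makes every benchmark's shard assignment auditable and rebalanceable, at the cost of raising 'Invalid number of shards, generate new sharding map' whenever the shard count changes without regenerating the map, whereas a hash-based fallback adapts to any shard count but spreads benchmarks blindly.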
