OLD | NEW |
1 # Copyright (c) 2013 The Chromium Authors. All rights reserved. | 1 # Copyright (c) 2013 The Chromium Authors. All rights reserved. |
2 # Use of this source code is governed by a BSD-style license that can be | 2 # Use of this source code is governed by a BSD-style license that can be |
3 # found in the LICENSE file. | 3 # found in the LICENSE file. |
4 | 4 |
5 """Config file for Run Performance Test Bisect Tool | 5 """Config file for Run Performance Test Bot |
6 | 6 |
7 This script is intended for use by anyone who wants to run a remote bisection | 7 This script is intended for use by anyone who wants to run a remote performance |
8 on a range of revisions to look for a performance regression. Modify the config | 8 test. Modify the config below and add the command to run the performance test, |
9 below and add the revision range, performance command, and metric. You can then | 9 the metric you're interested in, and repeat/discard parameters. You can then |
10 run git try <bot>. | 10 run git try <bot>. |
11 | 11 |
12 Changes to this file should never be submitted. | 12 Changes to this file should never be submitted. |
13 | 13 |
14 Args: | 14 Args: |
15 'command': This is the full command line to pass to the | 15 'command': This is the full command line to pass to the |
16 bisect-perf-regression.py script in order to execute the test. | 16 bisect-perf-regression.py script in order to execute the test. |
17 'good_revision': An svn or git revision where the metric hadn't regressed yet. | |
18 'bad_revision': An svn or git revision sometime after the metric had | |
19 regressed. | |
20 'metric': The name of the metric to parse out from the results of the | 17 'metric': The name of the metric to parse out from the results of the |
21 performance test. You can retrieve the metric by looking at the stdio of | 18 performance test. You can retrieve the metric by looking at the stdio of |
22 the performance test. Look for lines of the format: | 19 the performance test. Look for lines of the format: |
23 | 20 |
24 RESULT <graph>: <trace>= <value> <units> | 21 RESULT <graph>: <trace>= <value> <units> |
25 | 22 |
26 The metric name is "<graph>/<trace>". | 23 The metric name is "<graph>/<trace>". |
27 'repeat_count': The number of times to repeat the performance test. | 24 'repeat_count': The number of times to repeat the performance test. |
28 'max_time_minutes': The script will attempt to run the performance test | 25 'max_time_minutes': The script will attempt to run the performance test |
29 "repeat_count" times, unless it exceeds "max_time_minutes". | 26 "repeat_count" times, unless it exceeds "max_time_minutes". |
30 'truncate_percent': Discard the highest/lowest % of values from the performance test results. | 27 'truncate_percent': Discard the highest/lowest % of values from the performance test results. |
31 | 28 |
32 Sample config: | 29 Sample config: |
33 | 30 |
34 config = { | 31 config = { |
35 'command': './out/Release/performance_ui_tests' + | 32 'command': './out/Release/performance_ui_tests' + |
36 ' --gtest_filter=PageCyclerTest.Intl1File', | 33 ' --gtest_filter=PageCyclerTest.Intl1File', |
37 'good_revision': '179755', | |
38 'bad_revision': '179782', | |
39 'metric': 'times/t', | 34 'metric': 'times/t', |
40 'repeat_count': '20', | 35 'repeat_count': '20', |
41 'max_time_minutes': '20', | 36 'max_time_minutes': '20', |
42 'truncate_percent': '25', | 37 'truncate_percent': '25', |
43 } | 38 } |
44 | 39 |
45 On Windows: | 40 On Windows: |
46 - If you're calling a python script, you will need to add "python" to | 41 - If you're calling a python script, you will need to add "python" to |
47 the command: | 42 the command: |
48 | 43 |
49 config = { | 44 config = { |
50 'command': 'python tools/perf/run_measurement -v --browser=release kraken', | 45 'command': 'python tools/perf/run_measurement -v --browser=release kraken', |
51 'good_revision': '185319', | |
52 'bad_revision': '185364', | |
53 'metric': 'Total/Total', | 46 'metric': 'Total/Total', |
54 'repeat_count': '20', | 47 'repeat_count': '20', |
55 'max_time_minutes': '20', | 48 'max_time_minutes': '20', |
56 'truncate_percent': '25', | 49 'truncate_percent': '25', |
57 } | 50 } |
58 | 51 |
59 | 52 |
60 On ChromeOS: | 53 On ChromeOS: |
61 - The script accepts either ChromeOS versions or unix timestamps as revisions. | 54 - The script accepts either ChromeOS versions or unix timestamps as revisions. |
62 - You don't need to specify --identity and --remote; they will be added to | 55 - You don't need to specify --identity and --remote; they will be added to |
63 the command using the bot's BISECT_CROS_IP and BISECT_CROS_BOARD values. | 56 the command using the bot's BISECT_CROS_IP and BISECT_CROS_BOARD values. |
64 | 57 |
65 config = { | 58 config = { |
66 'command': './tools/perf/run_measurement -v '\ | 59 'command': './tools/perf/run_measurement -v '\ |
67 '--browser=cros-chrome-guest '\ | 60 '--browser=cros-chrome-guest '\ |
68 'dromaeo tools/perf/page_sets/dromaeo/jslibstylejquery.json', | 61 'dromaeo tools/perf/page_sets/dromaeo/jslibstylejquery.json', |
69 'good_revision': '4086.0.0', | |
70 'bad_revision': '4087.0.0', | |
71 'metric': 'jslib/jslib', | 62 'metric': 'jslib/jslib', |
72 'repeat_count': '20', | 63 'repeat_count': '20', |
73 'max_time_minutes': '20', | 64 'max_time_minutes': '20', |
74 'truncate_percent': '25', | 65 'truncate_percent': '25', |
75 } | 66 } |
76 | 67 |
77 """ | 68 """ |
78 | 69 |
79 config = { | 70 config = { |
80 'command': '', | 71 'command': '', |
81 'good_revision': '', | |
82 'bad_revision': '', | |
83 'metric': '', | 72 'metric': '', |
84 'repeat_count':'', | 73 'repeat_count': '', |
85 'max_time_minutes': '', | 74 'max_time_minutes': '', |
86 'truncate_percent':'', | 75 'truncate_percent': '', |
87 } | 76 } |
88 | 77 |
89 # Workaround git try issue, see crbug.com/257689 | 78 # Workaround git try issue, see crbug.com/257689 |
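The docstring above says the metric name comes from RESULT lines in the test's stdio, of the form `RESULT <graph>: <trace>= <value> <units>`. As a rough illustration of that convention only (not the parser that bisect-perf-regression.py actually uses), a minimal sketch for pulling out `<graph>/<trace>` and the value could look like this; the regex and the sample line are assumptions made for the example.

```python
import re

# Illustrative pattern for the documented convention:
#   RESULT <graph>: <trace>= <value> <units>
# This is an assumption for the sketch, not the tool's real parser.
RESULT_RE = re.compile(
    r'^RESULT (?P<graph>[^:]+): (?P<trace>[^=]+)= (?P<value>\S+) (?P<units>\S+)')

def parse_result_line(line):
  """Returns (metric, value, units) for a RESULT line, or None."""
  match = RESULT_RE.match(line)
  if not match:
    return None
  metric = '%s/%s' % (match.group('graph'), match.group('trace').strip())
  return metric, match.group('value'), match.group('units')

# Hypothetical stdio line; the metric to put in the config would be 'times/t'.
print(parse_result_line('RESULT times: t= 123.4 ms'))
```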
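Similarly, 'repeat_count' and 'truncate_percent' describe how the repeated measurements are aggregated: the highest and lowest values are discarded before the rest are averaged. The sketch below only illustrates that idea, under the assumption that the percentage is applied to each tail; the actual aggregation is done inside bisect-perf-regression.py.

```python
def truncated_mean(values, truncate_percent):
  """Averages values after dropping the top/bottom truncate_percent of them.

  Assumes the percentage applies to each tail (e.g. 25 drops the lowest 25%
  and the highest 25%); an illustration of the config knob, not the tool's
  exact implementation.
  """
  ordered = sorted(values)
  k = int(len(ordered) * truncate_percent / 100.0)
  trimmed = ordered[k:len(ordered) - k] or ordered
  return sum(trimmed) / float(len(trimmed))

# With repeat_count=20 and truncate_percent=25, the 5 lowest and 5 highest
# samples would be dropped and the middle 10 averaged.
samples = [102, 98, 97, 350, 101, 99, 100, 96, 103, 95,
           99, 101, 98, 100, 97, 250, 102, 96, 104, 100]
print(truncated_mean(samples, 25))
```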