Chromium Code Reviews

Side by Side Diff: build/android/run_tests.py

Issue 9494007: Upstream test sharder. (Closed) Base URL: svn://svn.chromium.org/chrome/trunk/src
Patch Set: "N emulators launched asynchronously" (created 8 years, 9 months ago)
1 #!/usr/bin/env python 1 #!/usr/bin/env python
2 # Copyright (c) 2011 The Chromium Authors. All rights reserved. 2 # Copyright (c) 2011 The Chromium Authors. All rights reserved.
3 # Use of this source code is governed by a BSD-style license that can be 3 # Use of this source code is governed by a BSD-style license that can be
4 # found in the LICENSE file. 4 # found in the LICENSE file.
5 5
6 """Runs all the native unit tests. 6 """Runs all the native unit tests.
7 7
8 1. Copy over test binary to /data/local on device. 8 1. Copy over test binary to /data/local on device.
9 2. Resources: chrome/unit_tests requires resources (chrome.pak and en-US.pak) 9 2. Resources: chrome/unit_tests requires resources (chrome.pak and en-US.pak)
10 to be deployed to the device (in /data/local/tmp). 10 to be deployed to the device (in /data/local/tmp).
(...skipping 29 matching lines...)
40 $ cat gtest_filter/base_unittests_disabled 40 $ cat gtest_filter/base_unittests_disabled
41 DataPackTest.Load 41 DataPackTest.Load
42 ReadOnlyFileUtilTest.ContentsEqual 42 ReadOnlyFileUtilTest.ContentsEqual
43 43
44 This file is generated by the tests running on devices. If running on an 44 This file is generated by the tests running on devices. If running on an
45 emulator, an additional filter file listing tests that fail only on the 45 emulator, an additional filter file listing tests that fail only on the
46 emulator will also be loaded. We don't care about the rare test cases that 46 emulator will also be loaded. We don't care about the rare test cases that
47 succeed on the emulator but fail on the device. 47 succeed on the emulator but fail on the device.
48 """ 48 """
49 49
50 import fnmatch
50 import logging 51 import logging
52 import multiprocessing
51 import os 53 import os
52 import re 54 import re
53 import subprocess 55 import subprocess
54 import sys 56 import sys
55 import time 57 import time
56 58
57 import android_commands 59 import android_commands
60 from base_test_sharder import BaseTestSharder
58 import cmd_helper 61 import cmd_helper
59 import debug_info 62 import debug_info
60 import emulator 63 import emulator
61 import run_tests_helper 64 import run_tests_helper
62 from single_test_runner import SingleTestRunner 65 from single_test_runner import SingleTestRunner
63 from test_package_executable import TestPackageExecutable 66 from test_package_executable import TestPackageExecutable
64 from test_result import BaseTestResult, TestResults 67 from test_result import BaseTestResult, TestResults
65 68
66 _TEST_SUITES = ['base_unittests', 'sql_unittests', 'ipc_tests', 'net_unittests'] 69 _TEST_SUITES = ['base_unittests', 'sql_unittests', 'ipc_tests', 'net_unittests']
67 70
(...skipping 65 matching lines...)
133 try: 136 try:
134 os.kill(self._pid, signal.SIGKILL) 137 os.kill(self._pid, signal.SIGKILL)
135 except: 138 except:
136 pass 139 pass
137 del os.environ['DISPLAY'] 140 del os.environ['DISPLAY']
138 self._pid = 0 141 self._pid = 0
139 142
140 143
141 def RunTests(device, test_suite, gtest_filter, test_arguments, rebaseline, 144 def RunTests(device, test_suite, gtest_filter, test_arguments, rebaseline,
142 timeout, performance_test, cleanup_test_files, tool, 145 timeout, performance_test, cleanup_test_files, tool,
143 log_dump_name, fast_and_loose=False, annotate=False): 146 log_dump_name, annotate=False):
144 """Runs the tests. 147 """Runs the tests.
145 148
146 Args: 149 Args:
147 device: Device to run the tests. 150 device: Device to run the tests.
148 test_suite: A specific test suite to run, empty to run all. 151 test_suite: A specific test suite to run, empty to run all.
149 gtest_filter: A gtest_filter flag. 152 gtest_filter: A gtest_filter flag.
150 test_arguments: Additional arguments to pass to the test binary. 153 test_arguments: Additional arguments to pass to the test binary.
151 rebaseline: Whether or not to run tests in isolation and update the filter. 154 rebaseline: Whether or not to run tests in isolation and update the filter.
152 timeout: Timeout for each test. 155 timeout: Timeout for each test.
153 performance_test: Whether or not to run performance test(s). 156 performance_test: Whether or not to run performance test(s).
154 cleanup_test_files: Whether or not to clean up test files on device. 157 cleanup_test_files: Whether or not to clean up test files on device.
155 tool: Name of the Valgrind tool. 158 tool: Name of the Valgrind tool.
156 log_dump_name: Name of log dump file. 159 log_dump_name: Name of log dump file.
157 fast_and_loose: should we go extra-fast but sacrifice stability
158 and/or correctness? Intended for quick cycle testing; not for bots!
159 annotate: should we print buildbot-style annotations? 160 annotate: should we print buildbot-style annotations?
160 161
161 Returns: 162 Returns:
162 A TestResults object. 163 A TestResults object.
163 """ 164 """
164 results = [] 165 results = []
165 166
166 if test_suite: 167 if test_suite:
167 global _TEST_SUITES 168 global _TEST_SUITES
168 if not os.path.exists(test_suite): 169 if not os.path.exists(test_suite):
169 logging.critical('Unrecognized test suite %s, supported: %s' % 170 logging.critical('Unrecognized test suite %s, supported: %s' %
170 (test_suite, _TEST_SUITES)) 171 (test_suite, _TEST_SUITES))
171 if test_suite in _TEST_SUITES: 172 if test_suite in _TEST_SUITES:
172 logging.critical('(Remember to include the path: out/Release/%s)', 173 logging.critical('(Remember to include the path: out/Release/%s)',
173 test_suite) 174 test_suite)
174 return TestResults.FromOkAndFailed([], [BaseTestResult(test_suite, '')]) 175 return TestResults.FromOkAndFailed([], [BaseTestResult(test_suite, '')])
175 fully_qualified_test_suites = [test_suite] 176 fully_qualified_test_suites = [test_suite]
176 else: 177 else:
177 fully_qualified_test_suites = FullyQualifiedTestSuites() 178 fully_qualified_test_suites = FullyQualifiedTestSuites()
178 debug_info_list = [] 179 debug_info_list = []
179 print 'Known suites: ' + str(_TEST_SUITES) 180 print 'Known suites: ' + str(_TEST_SUITES)
180 print 'Running these: ' + str(fully_qualified_test_suites) 181 print 'Running these: ' + str(fully_qualified_test_suites)
181 for t in fully_qualified_test_suites: 182 for t in fully_qualified_test_suites:
182 if annotate: 183 if annotate:
183 print '@@@BUILD_STEP Test suite %s@@@' % os.path.basename(t) 184 print '@@@BUILD_STEP Test suite %s@@@' % os.path.basename(t)
184 test = SingleTestRunner(device, t, gtest_filter, test_arguments, 185 test = SingleTestRunner(device, t, gtest_filter, test_arguments,
185 timeout, rebaseline, performance_test, 186 timeout, rebaseline, performance_test,
186 cleanup_test_files, tool, not not log_dump_name, 187 cleanup_test_files, tool, 0, not not log_dump_name)
187 fast_and_loose=fast_and_loose) 188 test.Run()
188 test.RunTests()
189 189
190 results += [test.test_results] 190 results += [test.test_results]
191 # Collect debug info. 191 # Collect debug info.
192 debug_info_list += [test.dump_debug_info] 192 debug_info_list += [test.dump_debug_info]
193 if rebaseline: 193 if rebaseline:
194 test.UpdateFilter(test.test_results.failed) 194 test.UpdateFilter(test.test_results.failed)
195 elif test.test_results.failed: 195 elif test.test_results.failed:
196 test.test_results.LogFull() 196 test.test_results.LogFull()
197 # Zip all debug info outputs into a file named by log_dump_name. 197 # Zip all debug info outputs into a file named by log_dump_name.
198 debug_info.GTestDebugInfo.ZipAndCleanResults( 198 debug_info.GTestDebugInfo.ZipAndCleanResults(
199 os.path.join(run_tests_helper.CHROME_DIR, 'out', 'Release', 199 os.path.join(run_tests_helper.CHROME_DIR, 'out', 'Release',
200 'debug_info_dumps'), 200 'debug_info_dumps'),
201 log_dump_name, [d for d in debug_info_list if d]) 201 log_dump_name, [d for d in debug_info_list if d])
202 202
203 if annotate: 203 if annotate:
204 if test.test_results.timed_out: 204 if test.test_results.timed_out:
205 print '@@@STEP_WARNINGS@@@' 205 print '@@@STEP_WARNINGS@@@'
206 elif test.test_results.failed: 206 elif test.test_results.failed:
207 print '@@@STEP_FAILURE@@@' 207 print '@@@STEP_FAILURE@@@'
208 else: 208 else:
209 print 'Step success!' # No annotation needed 209 print 'Step success!' # No annotation needed
210 210
211 return TestResults.FromTestResults(results) 211 return TestResults.FromTestResults(results)
212 212
213 213
214 class TestSharder(BaseTestSharder):
215 """Responsible for sharding the tests on the connected devices."""
216
217 def __init__(self, attached_devices, test_suite, gtest_filter,
218 test_arguments, timeout, rebaseline, performance_test,
219 cleanup_test_files, tool):
220 BaseTestSharder.__init__(self, attached_devices)
221 self.test_suite = test_suite
222 self.test_suite_basename = os.path.basename(test_suite)
223 self.gtest_filter = gtest_filter
224 self.test_arguments = test_arguments
225 self.timeout = timeout
226 self.rebaseline = rebaseline
227 self.performance_test = performance_test
228 self.cleanup_test_files = cleanup_test_files
229 self.tool = tool
230 test = SingleTestRunner(self.attached_devices[0], test_suite, gtest_filter,
231 test_arguments, timeout, rebaseline,
232 performance_test, cleanup_test_files, tool, 0)
233 all_tests = test.test_package.GetAllTests()
234 if not rebaseline:
235 disabled_list = test.GetDisabledTests()
236 # Only includes tests that do not have any match in the disabled list.
237 all_tests = filter(lambda t:
238 not any([fnmatch.fnmatch(t, disabled_pattern)
239 for disabled_pattern in disabled_list]),
240 all_tests)
241 self.tests = all_tests
242
243 def CreateShardedTestRunner(self, device, index):
244 """Creates a suite-specific test runner.
245
246 Args:
247 device: Device serial where this shard will run.
248 index: Index of this device in the pool.
249
250 Returns:
251 A SingleTestRunner object.
252 """
253 shard_size = len(self.tests) / len(self.attached_devices)
254 shard_test_list = self.tests[index * shard_size : (index + 1) * shard_size]
255 test_filter = ':'.join(shard_test_list)
256 return SingleTestRunner(device, self.test_suite,
257 test_filter, self.test_arguments, self.timeout,
258 self.rebaseline, self.performance_test,
259 self.cleanup_test_files, self.tool, index)
260
261 def OnTestsCompleted(self, test_runners, test_results):
262 """Notifies that we completed the tests."""
263 test_results.LogFull()
264 if test_results.failed and self.rebaseline:
265 test_runners[0].UpdateFilter(test_results.failed)
266
267
268
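
A note on CreateShardedTestRunner's arithmetic: shard_size is computed with
integer division (this is Python 2 code), so each device gets
len(tests) / len(devices) tests and any remainder at the tail falls outside
every slice. A minimal sketch of the slicing, under that assumption:

  def ShardTests(tests, num_devices):
    # Mirrors the slicing above; remainder tests end up in no shard.
    shard_size = len(tests) / num_devices  # integer division on Python 2
    return [tests[i * shard_size:(i + 1) * shard_size]
            for i in range(num_devices)]

  # ShardTests(['t%d' % i for i in range(7)], 3)
  # -> [['t0', 't1'], ['t2', 't3'], ['t4', 't5']]  ('t6' is in no shard)
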
214 def _RunATestSuite(options): 269 def _RunATestSuite(options):
215 """Run a single test suite. 270 """Run a single test suite.
216 271
217 Helper for Dispatch() to allow stop/restart of the emulator across 272 Helper for Dispatch() to allow stop/restart of the emulator across
218 test bundles. If using the emulator, we start it on entry and stop 273 test bundles. If using the emulator, we start it on entry and stop
219 it on exit. 274 it on exit.
220 275
221 Args: 276 Args:
222 options: options for running the tests. 277 options: options for running the tests.
223 278
224 Returns: 279 Returns:
225 0 if successful, number of failing tests otherwise. 280 0 if successful, number of failing tests otherwise.
226 """ 281 """
227 attached_devices = [] 282 attached_devices = []
228 buildbot_emulator = None 283 buildbot_emulators = []
229 284
230 if options.use_emulator: 285 if options.use_emulator:
231 t = TimeProfile('Emulator launch') 286 for n in range(options.use_emulator):
232 buildbot_emulator = emulator.Emulator(options.fast_and_loose) 287 t = TimeProfile('Emulator launch %d' % n)
233 buildbot_emulator.Launch() 288 buildbot_emulator = emulator.Emulator(options.fast_and_loose)
234 t.Stop() 289 buildbot_emulator.Launch(kill_all_emulators=n == 0)
235 attached_devices.append(buildbot_emulator.device) 290 t.Stop()
291 buildbot_emulators.append(buildbot_emulator)
292 attached_devices.append(buildbot_emulator.device)
293 # Wait for all emulators to become available.
294 map(lambda buildbot_emulator:buildbot_emulator.ConfirmLaunch(),
295 buildbot_emulators)
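
The Launch/ConfirmLaunch split above is what makes the startup asynchronous:
every emulator begins booting before the script waits on any of them, so the
boot times overlap. The map(...) call is eager in Python 2 and is used purely
for its side effects; an equivalent, more explicit form:

  # Block until each already-launched emulator is ready.
  for buildbot_emulator in buildbot_emulators:
    buildbot_emulator.ConfirmLaunch()
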
236 else: 296 else:
237 attached_devices = android_commands.GetAttachedDevices() 297 attached_devices = android_commands.GetAttachedDevices()
238 298
239 if not attached_devices: 299 if not attached_devices:
240 logging.critical('A device must be attached and online.') 300 logging.critical('A device must be attached and online.')
241 return 1 301 return 1
242 302
243 test_results = RunTests(attached_devices[0], options.test_suite, 303 if (len(attached_devices) > 1 and options.test_suite and
304 not options.gtest_filter and not options.performance_test):
305 sharder = TestSharder(attached_devices, options.test_suite,
244 options.gtest_filter, options.test_arguments, 306 options.gtest_filter, options.test_arguments,
245 options.rebaseline, options.timeout, 307 options.timeout, options.rebaseline,
246 options.performance_test, 308 options.performance_test,
247 options.cleanup_test_files, options.tool, 309 options.cleanup_test_files, options.tool)
248 options.log_dump, 310 test_results = sharder.RunShardedTests()
249 fast_and_loose=options.fast_and_loose, 311 else:
250 annotate=options.annotate) 312 test_results = RunTests(attached_devices[0], options.test_suite,
313 options.gtest_filter, options.test_arguments,
314 options.rebaseline, options.timeout,
315 options.performance_test,
316 options.cleanup_test_files, options.tool,
317 options.log_dump,
318 annotate=options.annotate)
251 319
252 if buildbot_emulator: 320 for buildbot_emulator in buildbot_emulators:
253 buildbot_emulator.Shutdown() 321 buildbot_emulator.Shutdown()
254 322
255 # Another chance if we timed out? At this point it is safe(r) to 323 # Another chance if we timed out? At this point it is safe(r) to
256 # run fast and loose since we just uploaded all the test data and 324 # run fast and loose since we just uploaded all the test data and
257 # binary. 325 # binary.
258 if test_results.timed_out and options.repeat: 326 if test_results.timed_out and options.repeat:
259 logging.critical('Timed out; repeating in fast_and_loose mode.') 327 logging.critical('Timed out; repeating in fast_and_loose mode.')
260 options.fast_and_loose = True 328 options.fast_and_loose = True
261 options.repeat = options.repeat - 1 329 options.repeat = options.repeat - 1
262 logging.critical('Repeats left: ' + str(options.repeat)) 330 logging.critical('Repeats left: ' + str(options.repeat))
(...skipping 60 matching lines...)
323 option_parser.add_option('-p', dest='performance_test', 391 option_parser.add_option('-p', dest='performance_test',
324 help='Indicator of performance test', 392 help='Indicator of performance test',
325 action='store_true', 393 action='store_true',
326 default=False) 394 default=False)
327 option_parser.add_option('-L', dest='log_dump', 395 option_parser.add_option('-L', dest='log_dump',
328 help='file name of log dump, which will be put in ' 396 help='file name of log dump, which will be put in '
329 'subfolder debug_info_dumps under the same ' 397 'subfolder debug_info_dumps under the same '
330 'directory where the test_suite exists.') 398 'directory where the test_suite exists.')
331 option_parser.add_option('-e', '--emulator', dest='use_emulator', 399 option_parser.add_option('-e', '--emulator', dest='use_emulator',
332 help='Run tests in a new instance of emulator', 400 help='Run tests in a new instance of emulator',
333 action='store_true', 401 type='int',
334 default=False) 402 default=0)
335 option_parser.add_option('-x', '--xvfb', dest='use_xvfb', 403 option_parser.add_option('-x', '--xvfb', dest='use_xvfb',
336 action='store_true', default=False, 404 action='store_true', default=False,
337 help='Use Xvfb around tests (ignored if not Linux)') 405 help='Use Xvfb around tests (ignored if not Linux)')
338 option_parser.add_option('--fast', '--fast_and_loose', dest='fast_and_loose', 406 option_parser.add_option('--fast', '--fast_and_loose', dest='fast_and_loose',
339 action='store_true', default=False, 407 action='store_true', default=False,
340 help='Go faster (but be less stable), ' 408 help='Go faster (but be less stable), '
341 'for quick testing. Example: when tracking down ' 409 'for quick testing. Example: when tracking down '
342 'tests that hang to add to the disabled list, ' 410 'tests that hang to add to the disabled list, '
343 'there is no need to redeploy the test binary ' 411 'there is no need to redeploy the test binary '
344 'or data to the device again. ' 412 'or data to the device again. '
345 'Don\'t use on bots by default!') 413 'Don\'t use on bots by default!')
346 option_parser.add_option('--repeat', dest='repeat', type='int', 414 option_parser.add_option('--repeat', dest='repeat', type='int',
347 default=2, 415 default=2,
348 help='Repeat count on test timeout') 416 help='Repeat count on test timeout')
349 option_parser.add_option('--annotate', default=True, 417 option_parser.add_option('--annotate', default=True,
350 help='Print buildbot-style annotate messages ' 418 help='Print buildbot-style annotate messages '
351 'for each test suite. Default=True') 419 'for each test suite. Default=True')
352 options, args = option_parser.parse_args(argv) 420 options, args = option_parser.parse_args(argv)
353 if len(args) > 1: 421 if len(args) > 1:
354 print 'Unknown argument:', args[1:] 422 print 'Unknown argument:', args[1:]
355 option_parser.print_usage() 423 option_parser.print_usage()
356 sys.exit(1) 424 sys.exit(1)
357 run_tests_helper.SetLogLevel(options.verbose_count) 425 run_tests_helper.SetLogLevel(options.verbose_count)
358 return Dispatch(options) 426 return Dispatch(options)
359 427
360 428
361 if __name__ == '__main__': 429 if __name__ == '__main__':
362 sys.exit(main(sys.argv)) 430 sys.exit(main(sys.argv))
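
Example invocations, using only the flags defined above (-e now takes an
emulator count instead of acting as a boolean; the suite-selection option is
in the elided block, and sharding only engages when a single suite is named
with no gtest filter and no performance test):

  # Launch 4 emulators asynchronously and run the tests on them:
  build/android/run_tests.py -e 4

  # Run on already-attached devices; on timeout, retry up to 3 times:
  build/android/run_tests.py --repeat 3
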