Chromium Code Reviews

Unified Diff: tools/nocompile_driver.py

Issue 1698763002: Fix base_nocompile_tests dependency (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Created 4 years, 10 months ago
 #!/usr/bin/env python
 # Copyright (c) 2011 The Chromium Authors. All rights reserved.
 # Use of this source code is governed by a BSD-style license that can be
 # found in the LICENSE file.

 """Implements a simple "negative compile" test for C++ on linux.

 Sometimes a C++ API needs to ensure that various usages cannot compile. To
 enable unittesting of these assertions, we use this python script to
 invoke gcc on a source file and assert that compilation fails.

 For more info, see:
   http://dev.chromium.org/developers/testing/no-compile-tests
 """

+import StringIO
 import ast
 import locale
 import os
 import re
 import select
 import shlex
 import subprocess
 import sys
 import time

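The new StringIO import supports the buffer-then-commit pattern adopted later in main(): results accumulate in memory and are written to resultfile_path only when the run succeeds, so a failed run cannot leave behind a fresh result file that the build system would treat as up to date. A minimal sketch of that pattern (Python 2, matching this script; the names and the flag are illustrative only, not the script's actual code):

    import StringIO

    buf = StringIO.StringIO()        # in-memory file-like buffer
    buf.write('test results...\n')   # write results as tests complete
    run_succeeded = True             # hypothetical flag for illustration
    if run_succeeded:
      # Commit the buffered output to disk only on success.
      with open('results.txt', 'w') as fd:
        fd.write(buf.getvalue())
    buf.close()
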
(...skipping 86 matching lines...)
   raw_expectation = ast.literal_eval(match.group(1))
   assert type(raw_expectation) is list

   expectation = []
   for regex_str in raw_expectation:
     assert type(regex_str) is str
     expectation.append(re.compile(regex_str))
   return expectation


-def ExtractTestConfigs(sourcefile_path):
+def ExtractTestConfigs(sourcefile_path, suite_name):
   """Parses the source file for test configurations.

   Each no-compile test in the file is separated by an ifdef macro. We scan
   the source file with the NCTEST_CONFIG_RE to find all ifdefs that look like
   they demarcate one no-compile test and try to extract the test configuration
   from that.

   Args:
     sourcefile_path: The path to the source file.
+    suite_name: The name of the test suite.

   Returns:
     A list of test configurations. Each test configuration is a dictionary of
     the form:

       { name: 'NCTEST_NAME'
         suite_name: 'SOURCE_FILE_NAME'
         expectations: [re.Pattern, re.Pattern] }

     The |suite_name| is used to generate a pretty gtest output on successful
     completion of the no compile test.

     The compiled regexps in |expectations| define the valid outputs of the
     compiler. If any one of the listed patterns matches either the stderr or
     stdout from the compilation, and the compilation failed, then the test is
     considered to have succeeded. If the list is empty, then we ignore the
     compiler output and just check for failed compilation. If |expectations|
     is actually None, then this specifies a compiler sanity check test, which
     should expect a SUCCESSFUL compilation.
   """
   sourcefile = open(sourcefile_path, 'r')

-  # Convert filename from underscores to CamelCase.
-  words = os.path.splitext(os.path.basename(sourcefile_path))[0].split('_')
-  words = [w.capitalize() for w in words]
-  suite_name = 'NoCompile' + ''.join(words)
-
   # Start with at least the compiler sanity test. You need to always have one
   # sanity test to show that compiler flags and configuration are not just
   # wrong. Otherwise, having a misconfigured compiler, or an error in the
   # shared portions of the .nc file would cause all tests to erroneously pass.
-  test_configs = [{'name': 'NCTEST_SANITY',
-                   'suite_name': suite_name,
-                   'expectations': None}]
+  test_configs = []

   for line in sourcefile:
     match_result = NCTEST_CONFIG_RE.match(line)
     if not match_result:
       continue

     groups = match_result.groups()

     # Grab the name and remove the defined() predicate if there is one.
     name = groups[0]
(...skipping 16 matching lines...)
     sourcefile_path: The path to the source file.
     cflags: A string with all the CFLAGS to give to gcc. This string will be
         split by shlex so be careful with escaping.
     config: A dictionary describing the test. See ExtractTestConfigs
         for a description of the config format.

   Returns:
     A dictionary containing all the information about the started test. The
     fields in the dictionary are as follows:
       { 'proc': A subprocess object representing the compiler run.
-        'cmdline': The exectued command line.
+        'cmdline': The executed command line.
         'name': The name of the test.
         'suite_name': The suite name to use when generating the gunit test
                       result.
         'terminate_timeout': The timestamp in seconds since the epoch after
                              which the test should be terminated.
         'kill_timeout': The timestamp in seconds since the epoch after which
                         the test should be given a hard kill signal.
         'started_at': A timestamp in seconds since the epoch for when this test
                       was started.
         'aborted_at': A timestamp in seconds since the epoch for when this test
(...skipping 108 matching lines...)
   # cause we can only call this once on the Popen object, and lots of stuff
   # below will want access to it.
   proc = test['proc']
   (stdout, stderr) = proc.communicate()

   if test['aborted_at'] != 0:
     FailTest(resultfile, test, "Compile timed out. Started %f ended %f." %
              (test['started_at'], test['aborted_at']))
     return

-  if test['expectations'] is None:
-    # This signals a compiler sanity check test. Fail iff compilation failed.
-    if proc.poll() == 0:
-      PassTest(resultfile, test)
-      return
-    else:
-      FailTest(resultfile, test, 'Sanity compile failed. Is compiler borked?',
-               stdout, stderr)
-      return
-  elif proc.poll() == 0:
+  if proc.poll() == 0:
     # Handle failure due to successful compile.
     FailTest(resultfile, test,
              'Unexpected successful compilation.',
              stdout, stderr)
     return
   else:
     # Check the output has the right expectations. If there are no
     # expectations, then we just consider the output "matched" by default.
     if len(test['expectations']) == 0:
       PassTest(resultfile, test)
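The elided lines presumably perform the regex matching described in the ExtractTestConfigs docstring. A minimal sketch of that rule, not the script's actual code:

    def OutputMatchesExpectations(test, stdout, stderr):
      # No expectations: any failed compilation counts as a match.
      if not test['expectations']:
        return True
      # Otherwise at least one regex must match stdout or stderr.
      return any(r.search(stdout) or r.search(stderr)
                 for r in test['expectations'])
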
(...skipping 72 matching lines...)

   parallelism = int(sys.argv[1])
   sourcefile_path = sys.argv[2]
   cflags = sys.argv[3]
   resultfile_path = sys.argv[4]

   timings = {'started': time.time()}

   ValidateInput(parallelism, sourcefile_path, cflags, resultfile_path)

-  test_configs = ExtractTestConfigs(sourcefile_path)
+  # Convert filename from underscores to CamelCase.
+  words = os.path.splitext(os.path.basename(sourcefile_path))[0].split('_')
+  words = [w.capitalize() for w in words]
+  suite_name = 'NoCompile' + ''.join(words)
+
+  test_configs = ExtractTestConfigs(sourcefile_path, suite_name)
   timings['extract_done'] = time.time()

-  resultfile = open(resultfile_path, 'w')
+  resultfile = StringIO.StringIO()
   resultfile.write(RESULT_FILE_HEADER % sourcefile_path)

   # Run the no-compile tests, but ensure we do not run more than |parallelism|
   # tests at once.
   timings['header_written'] = time.time()
   executing_tests = {}
   finished_tests = []
+
+  test = StartTest(
+      sourcefile_path,
+      cflags + ' -MMD -MF %s.d -MT %s' % (resultfile_path, resultfile_path),
+      { 'name': 'NCTEST_SANITY',
+        'suite_name': suite_name,
+        'expectations': None,
+      })
+  executing_tests[test['name']] = test
+
   for config in test_configs:
     # CompleteAtLeastOneTest blocks until at least one test finishes. Thus, this
     # acts as a semaphore. We cannot use threads + a real semaphore because
     # subprocess forks, which can cause all sorts of hilarity with threads.
     if len(executing_tests) >= parallelism:
       finished_tests.extend(CompleteAtLeastOneTest(resultfile, executing_tests))

     if config['name'].startswith('DISABLED_'):
       PassTest(resultfile, config)
     else:
       test = StartTest(sourcefile_path, cflags, config)
       assert test['name'] not in executing_tests
       executing_tests[test['name']] = test

   # If there are no more tests to start, we still need to drain the running
   # ones.
   while len(executing_tests) > 0:
     finished_tests.extend(CompleteAtLeastOneTest(resultfile, executing_tests))
   timings['compile_done'] = time.time()

   for test in finished_tests:
+    if test['name'] == 'NCTEST_SANITY':
+      _, stderr = test['proc'].communicate()
+      return_code = test['proc'].poll()
+      if return_code != 0:
+        sys.stderr.write(stderr)
+      continue
     ProcessTestResult(resultfile, test)
   timings['results_processed'] = time.time()

-  # We always know at least a sanity test was run.
-  WriteStats(resultfile, finished_tests[0]['suite_name'], timings)
+  WriteStats(resultfile, suite_name, timings)
+
+  if return_code == 0:
+    with open(resultfile_path, 'w') as fd:
+      fd.write(resultfile.getvalue())

   resultfile.close()
+  sys.exit(return_code)


 if __name__ == '__main__':
   main()
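The extra flags appended to the sanity compile are standard gcc preprocessor options: -MMD writes a make-style dependency file listing the non-system headers the compile touched, as a side effect of compilation; -MF names that file; and -MT sets the target name used in the generated rule. A sketch of the string the patch builds, using a hypothetical resultfile_path:

    resultfile_path = 'out/base_nocompile_tests.txt'  # illustrative value only
    dep_flags = ' -MMD -MF %s.d -MT %s' % (resultfile_path, resultfile_path)
    # dep_flags == ' -MMD -MF out/base_nocompile_tests.txt.d'
    #              ' -MT out/base_nocompile_tests.txt'

Combined with committing the result file only on success and exiting with the sanity test's return code, this gives the build a dependency file keyed to the result file, so the test reruns when any included header changes, which appears to be the fix named in the issue title.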