Chromium Code Reviews

Unified Diff: tools/run_perf.py

Issue 526953005: Add test driver with the notion of perf tests. (Closed) Base URL: https://v8.googlecode.com/svn/branches/bleeding_edge
Patch Set: Change demo config. Created 6 years, 3 months ago
Other files in this change: benchmarks/v8.json, tools/unittests/run_perf_test.py
 #!/usr/bin/env python
 # Copyright 2014 the V8 project authors. All rights reserved.
 # Use of this source code is governed by a BSD-style license that can be
 # found in the LICENSE file.

 """
 Performance runner for d8.

-Call e.g. with tools/run-benchmarks.py --arch ia32 some_suite.json
+Call e.g. with tools/run_perf.py --arch ia32 some_suite.json

 The suite json format is expected to be:
 {
-  "path": <relative path chunks to benchmark resources and main file>,
+  "path": <relative path chunks to perf resources and main file>,
   "name": <optional suite name, file name is default>,
   "archs": [<architecture name for which this suite is run>, ...],
   "binary": <name of binary to run, default "d8">,
   "flags": [<flag to d8>, ...],
   "run_count": <how often will this suite run (optional)>,
   "run_count_XXX": <how often will this suite run for arch XXX (optional)>,
   "resources": [<js file to be loaded before main>, ...]
-  "main": <main js benchmark runner file>,
+  "main": <main js perf runner file>,
   "results_regexp": <optional regexp>,
   "results_processor": <optional python results processor script>,
   "units": <the unit specification for the performance dashboard>,
-  "benchmarks": [
+  "tests": [
     {
-      "name": <name of the benchmark>,
+      "name": <name of the trace>,
       "results_regexp": <optional more specific regexp>,
       "results_processor": <optional python results processor script>,
       "units": <the unit specification for the performance dashboard>,
     }, ...
   ]
 }

-The benchmarks field can also nest other suites in arbitrary depth. A suite
+The tests field can also nest other suites in arbitrary depth. A suite
 with a "main" file is a leaf suite that can contain one more level of
-benchmarks.
+tests.

 A suite's results_regexp is expected to have one string placeholder
-"%s" for the benchmark name. A benchmark's results_regexp overwrites suite
+"%s" for the trace name. A trace's results_regexp overwrites suite
 defaults.

 A suite's results_processor may point to an optional python script. If
-specified, it is called after running the benchmarks like this (with a path
+specified, it is called after running the tests like this (with a path
 relative to the suite level's path):
 <results_processor file> <same flags as for d8> <suite level name> <output>

 The <output> is a temporary file containing d8 output. The results_regexp will
 be applied to the output of this script.

-A suite without "benchmarks" is considered a benchmark itself.
+A suite without "tests" is considered a performance test itself.

 Full example (suite with one runner):
 {
   "path": ["."],
   "flags": ["--expose-gc"],
   "archs": ["ia32", "x64"],
   "run_count": 5,
   "run_count_ia32": 3,
   "main": "run.js",
   "results_regexp": "^%s: (.+)$",
   "units": "score",
-  "benchmarks": [
+  "tests": [
     {"name": "Richards"},
     {"name": "DeltaBlue"},
     {"name": "NavierStokes",
      "results_regexp": "^NavierStokes: (.+)$"}
   ]
 }

 Full example (suite with several runners):
 {
   "path": ["."],
   "flags": ["--expose-gc"],
   "archs": ["ia32", "x64"],
   "run_count": 5,
   "units": "score",
-  "benchmarks": [
+  "tests": [
     {"name": "Richards",
      "path": ["richards"],
      "main": "run.js",
      "run_count": 3,
      "results_regexp": "^Richards: (.+)$"},
     {"name": "NavierStokes",
      "path": ["navier_stokes"],
      "main": "run.js",
      "results_regexp": "^NavierStokes: (.+)$"}
   ]
(...skipping 54 matching lines...)
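
A note on the placeholder mechanics described in the docstring: a trace's effective results_regexp is simply the suite-level pattern with the (escaped) trace name substituted for "%s", and it is then matched against d8's stdout. A minimal illustration in plain Python, independent of this script; the d8 output shown is made up:

    import re

    # Suite-level default from the first full example above.
    suite_regexp = "^%s: (.+)$"

    # The trace name replaces "%s" (escaped, as the stddev handling below
    # also does for its pattern).
    trace_regexp = suite_regexp % re.escape("Richards")   # "^Richards: (.+)$"

    # Hypothetical d8 output; group(1) is what gets recorded as the result.
    stdout = "Richards: 26727\nDeltaBlue: 30011\n"
    print re.search(trace_regexp, stdout, re.M).group(1)  # 26727
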
   def __add__(self, other):
     self.traces += other.traces
     self.errors += other.errors
     return self

   def __str__(self):  # pragma: no cover
     return str(self.ToDict())

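Results objects are merged with "+" throughout this file (see the reduce() call in RunnableGeneric.Run below). A small sketch of that behavior, assuming the elided Results constructor defaults both lists to empty, as the bare Results() call below suggests:

    # Hypothetical partial results from two traces.
    r1 = Results([{"graphs": ["v8", "Richards"], "units": "score",
                   "results": ["1200"], "stddev": ""}], [])
    r2 = Results([], ["Regexp did not match for test DeltaBlue."])

    # __add__ concatenates traces and errors in place and returns self.
    merged = r1 + r2
    print len(merged.traces), len(merged.errors)   # 1 1
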
 class Node(object):
-  """Represents a node in the benchmark suite tree structure."""
+  """Represents a node in the suite tree structure."""
   def __init__(self, *args):
     self._children = []

   def AppendChild(self, child):
     self._children.append(child)


 class DefaultSentinel(Node):
   """Fake parent node with all default values."""
   def __init__(self):
     super(DefaultSentinel, self).__init__()
     self.binary = "d8"
     self.run_count = 10
     self.path = []
     self.graphs = []
     self.flags = []
     self.resources = []
     self.units = "score"
     self.results_regexp = None
     self.stddev_regexp = None
     self.total = False

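In practice the sentinel means a minimal leaf configuration inherits sensible defaults. For example, this hypothetical suite file would be run with the "d8" binary, ten times, with no extra flags, and its captured value reported under units "score" (the regexp is literal here, since a leaf has no parent pattern to expand):

    {
      "name": "demo",
      "path": ["."],
      "main": "run.js",
      "results_regexp": "^Score: (.+)$"
    }
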
 class Graph(Node):
-  """Represents a benchmark suite definition.
+  """Represents a suite definition.

   Can either be a leaf or an inner node that provides default values.
   """
   def __init__(self, suite, parent, arch):
     super(Graph, self).__init__()
     self._suite = suite

     assert isinstance(suite.get("path", []), list)
     assert isinstance(suite["name"], basestring)
     assert isinstance(suite.get("flags", []), list)
(...skipping 25 matching lines...)

     # A similar regular expression for the standard deviation (optional).
     if parent.stddev_regexp:
       stddev_default = parent.stddev_regexp % re.escape(suite["name"])
     else:
       stddev_default = None
     self.stddev_regexp = suite.get("stddev_regexp", stddev_default)


 class Trace(Graph):
-  """Represents a leaf in the benchmark suite tree structure.
+  """Represents a leaf in the suite tree structure.

   Handles collection of measurements.
   """
   def __init__(self, suite, parent, arch):
     super(Trace, self).__init__(suite, parent, arch)
     assert self.results_regexp
     self.results = []
     self.errors = []
     self.stddev = ""

   def ConsumeOutput(self, stdout):
     try:
       self.results.append(
           re.search(self.results_regexp, stdout, re.M).group(1))
     except:
-      self.errors.append("Regexp \"%s\" didn't match for benchmark %s."
+      self.errors.append("Regexp \"%s\" didn't match for test %s."
                          % (self.results_regexp, self.graphs[-1]))

     try:
       if self.stddev_regexp and self.stddev:
-        self.errors.append("Benchmark %s should only run once since a stddev "
-                           "is provided by the benchmark." % self.graphs[-1])
+        self.errors.append("Test %s should only run once since a stddev "
+                           "is provided by the test." % self.graphs[-1])
       if self.stddev_regexp:
         self.stddev = re.search(self.stddev_regexp, stdout, re.M).group(1)
     except:
-      self.errors.append("Regexp \"%s\" didn't match for benchmark %s."
+      self.errors.append("Regexp \"%s\" didn't match for test %s."
                          % (self.stddev_regexp, self.graphs[-1]))

   def GetResults(self):
     return Results([{
       "graphs": self.graphs,
       "units": self.units,
       "results": self.results,
       "stddev": self.stddev,
     }], self.errors)

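One detail worth noting from ConsumeOutput: besides results_regexp, a suite may supply a "stddev_regexp" key (it is read in Graph.__init__ a few lines above, though the docstring does not list it). If it does, the suite is expected to run only once and report its own spread; otherwise the "should only run once" error above is appended. A hypothetical leaf using it might look like this:

    {
      "name": "demo",
      "path": ["."],
      "main": "run.js",
      "run_count": 1,
      "results_regexp": "^Score: (.+)$",
      "stddev_regexp": "^ScoreStdDev: (.+)$"
    }
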
 class Runnable(Graph):
-  """Represents a runnable benchmark suite definition (i.e. has a main file).
+  """Represents a runnable suite definition (i.e. has a main file).
   """
   @property
   def main(self):
     return self._suite.get("main", "")

   def ChangeCWD(self, suite_path):
270 """Changes the cwd to to path defined in the current graph. 270 """Changes the cwd to to path defined in the current graph.

-    The benchmarks are supposed to be relative to the suite configuration.
+    The tests are supposed to be relative to the suite configuration.
     """
     suite_dir = os.path.abspath(os.path.dirname(suite_path))
     bench_dir = os.path.normpath(os.path.join(*self.path))
     os.chdir(os.path.join(suite_dir, bench_dir))

   def GetCommand(self, shell_dir):
     # TODO(machenbach): This requires +.exe if run on windows.
     return (
         [os.path.join(shell_dir, self.binary)] +
         self.flags +
(...skipping 24 matching lines...)
                      for i in range(0, n_results)]
     res.traces.append({
       "graphs": self.graphs + ["Total"],
       "units": res.traces[0]["units"],
       "results": total_results,
       "stddev": "",
     })
     return res

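To make GetCommand concrete: for the "Richards" runner in the second docstring example, run on x64 outside the buildbot, shell_dir (computed in Main below) resolves to out/x64.release, so the executed command is roughly the following, assuming, as the docstring's call format suggests, that the elided part of GetCommand appends the resources and the main file:

    out/x64.release/d8 --expose-gc run.js
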
 class RunnableTrace(Trace, Runnable):
-  """Represents a runnable benchmark suite definition that is a leaf."""
+  """Represents a runnable suite definition that is a leaf."""
   def __init__(self, suite, parent, arch):
     super(RunnableTrace, self).__init__(suite, parent, arch)

   def Run(self, runner):
     """Iterates over several runs and handles the output."""
     for stdout in runner():
       self.ConsumeOutput(stdout)
     return self.GetResults()


 class RunnableGeneric(Runnable):
-  """Represents a runnable benchmark suite definition with generic traces."""
+  """Represents a runnable suite definition with generic traces."""
   def __init__(self, suite, parent, arch):
     super(RunnableGeneric, self).__init__(suite, parent, arch)

   def Run(self, runner):
     """Iterates over several runs and handles the output."""
     traces = {}
     for stdout in runner():
       for line in stdout.strip().splitlines():
         match = GENERIC_RESULTS_RE.match(line)
         if match:
(...skipping 12 matching lines...)
     return reduce(lambda r, t: r + t, traces.itervalues(), Results())


 def MakeGraph(suite, arch, parent):
   """Factory method for making graph objects."""
   if isinstance(parent, Runnable):
     # Below a runnable can only be traces.
     return Trace(suite, parent, arch)
   elif suite.get("main"):
     # A main file makes this graph runnable.
-    if suite.get("benchmarks"):
-      # This graph has subbenchmarks (traces).
+    if suite.get("tests"):
+      # This graph has subgraphs (traces).
       return Runnable(suite, parent, arch)
     else:
-      # This graph has no subbenchmarks, it's a leaf.
+      # This graph has no subgraphs, it's a leaf.
       return RunnableTrace(suite, parent, arch)
   elif suite.get("generic"):
     # This is a generic suite definition. It is either a runnable executable
     # or has a main js file.
     return RunnableGeneric(suite, parent, arch)
-  elif suite.get("benchmarks"):
+  elif suite.get("tests"):
     # This is neither a leaf nor a runnable.
     return Graph(suite, parent, arch)
   else:  # pragma: no cover
-    raise Exception("Invalid benchmark suite configuration.")
+    raise Exception("Invalid suite configuration.")


 def BuildGraphs(suite, arch, parent=None):
   """Builds a tree structure of graph objects that corresponds to the suite
   configuration.
   """
   parent = parent or DefaultSentinel()

   # TODO(machenbach): Implement notion of cpu type?
   if arch not in suite.get("archs", ["ia32", "x64"]):
     return None

   graph = MakeGraph(suite, arch, parent)
-  for subsuite in suite.get("benchmarks", []):
+  for subsuite in suite.get("tests", []):
     BuildGraphs(subsuite, arch, graph)
   parent.AppendChild(graph)
   return graph


 def FlattenRunnables(node):
   """Generator that traverses the tree structure and iterates over all
   runnables.
   """
   if isinstance(node, Runnable):
     yield node
   elif isinstance(node, Node):
     for child in node._children:
       for result in FlattenRunnables(child):
         yield result
   else:  # pragma: no cover
-    raise Exception("Invalid benchmark suite configuration.")
+    raise Exception("Invalid suite configuration.")

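A sketch of how these two functions cooperate, mirroring the loop in Main() below. The suite dict is hypothetical, BuildGraphs prunes suites whose "archs" list excludes the requested architecture, and the graphs attribute printed here is populated in the elided part of Graph.__init__:

    suite = {
        "name": "demo",
        "path": ["."],
        "main": "run.js",
        "results_regexp": "^%s: (.+)$",
        "tests": [{"name": "Richards"}, {"name": "DeltaBlue"}],
    }

    # Builds a Runnable with two Trace children, then yields just the Runnable.
    for runnable in FlattenRunnables(BuildGraphs(suite, "x64")):
      print "/".join(runnable.graphs)   # e.g. "demo"
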
 # TODO: Implement results_processor.
 def Main(args):
   parser = optparse.OptionParser()
   parser.add_option("--arch",
                     help=("The architecture to run tests for, "
                           "'auto' or 'native' for auto-detect"),
                     default="x64")
   parser.add_option("--buildbot",
(...skipping 22 matching lines...)
     shell_dir = os.path.join(workspace, options.outdir, "Release")
   else:
     shell_dir = os.path.join(workspace, options.outdir,
                              "%s.release" % options.arch)

   results = Results()
   for path in args:
     path = os.path.abspath(path)

     if not os.path.exists(path):  # pragma: no cover
-      results.errors.append("Benchmark file %s does not exist." % path)
+      results.errors.append("Configuration file %s does not exist." % path)
       continue

     with open(path) as f:
       suite = json.loads(f.read())

     # If no name is given, default to the file name without .json.
     suite.setdefault("name", os.path.splitext(os.path.basename(path))[0])

     for runnable in FlattenRunnables(BuildGraphs(suite, options.arch)):
       print ">>> Running suite: %s" % "/".join(runnable.graphs)
(...skipping 18 matching lines...)

   if options.json_test_results:
     results.WriteToFile(options.json_test_results)
   else:  # pragma: no cover
     print results

   return min(1, len(results.errors))


 if __name__ == "__main__":  # pragma: no cover
   sys.exit(Main(sys.argv[1:]))
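
Putting it together, a typical invocation against the demo config touched by this CL might look like the call below. The --json-test-results spelling is inferred from options.json_test_results above (optparse's usual dash-to-underscore mapping), so treat it as an assumption. The process exits with status 1 if any errors were collected, 0 otherwise.

    tools/run_perf.py --arch ia32 --json-test-results results.json benchmarks/v8.json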
