Chromium Code Reviews

Unified Diff: tracing/bin/run_metric

Issue 2110683010: Allow benchmarks to specify running multiple metrics. (Closed) Base URL: git@github.com:catapult-project/catapult@master
Patch Set: fix style some more Created 4 years, 5 months ago
Index: tracing/bin/run_metric
diff --git a/tracing/bin/run_metric b/tracing/bin/run_metric
index 91a0eb35fb8cc0bdf1c94b0b24e8072a611e3504..6e29c18d190099217a1f301e480ece1cdf32481a 100755
--- a/tracing/bin/run_metric
+++ b/tracing/bin/run_metric
@@ -17,25 +17,26 @@ def Main(argv):
       ['/tracing/metrics/all_metrics.html'])
   parser = argparse.ArgumentParser(
       description='Runs metrics on local traces')
-  parser.add_argument('metric_name',
-                      help=('The function name of a registered metric, NOT '
-                            'filename. Available metrics are: %s' %
-                            ', '.join(all_metrics)),
-                      choices=all_metrics, metavar='metricName')
   parser.add_argument('trace_file_or_dir',
                       help='A trace file, or a dir containing trace files')
+  parser.add_argument('metrics', nargs=argparse.REMAINDER,
+                      help=('Function names of registered metrics '
+                            '(not filenames.) '
+                            'Available metrics are: %s' %
+                            ', '.join(all_metrics)),
+                      choices=all_metrics, metavar='metricName')
   args = parser.parse_args(argv[1:])
-  metric = args.metric_name
+  trace_file_or_dir = os.path.abspath(args.trace_file_or_dir)
-  if os.path.isdir(args.trace_file_or_dir):
-    trace_dir = args.trace_file_or_dir
+  if os.path.isdir(trace_file_or_dir):
+    trace_dir = trace_file_or_dir
     traces = [os.path.join(trace_dir, trace) for trace in os.listdir(trace_dir)]
   else:
-    traces = [args.trace_file_or_dir]
+    traces = [trace_file_or_dir]
-  results = {k: v.AsDict()
-             for k, v in metric_runner.RunMetricOnTraces(traces, metric).iteritems()}
+  results = {k: v.AsDict() for k, v in
+             metric_runner.RunMetricOnTraces(traces, args.metrics).iteritems()}
   failures = []
   for trace in traces:
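
The patch above replaces the single `metric_name` positional with a trailing `metrics` list collected via `nargs=argparse.REMAINDER`. A minimal, self-contained sketch of the same argparse pattern (the metric names below are hypothetical placeholders, not the real registered metrics, and `ALL_METRICS` stands in for the `all_metrics` list that run_metric discovers at runtime):

```python
import argparse

# Hypothetical metric names standing in for the discovered all_metrics list.
ALL_METRICS = ['sampleMetric', 'powerMetric']

parser = argparse.ArgumentParser(description='Runs metrics on local traces')
parser.add_argument('trace_file_or_dir',
                    help='A trace file, or a dir containing trace files')
# argparse.REMAINDER collects every token that follows the trace path,
# so the caller may name any number of metrics.
parser.add_argument('metrics', nargs=argparse.REMAINDER,
                    help='Function names of registered metrics (not filenames). '
                         'Available metrics are: %s' % ', '.join(ALL_METRICS),
                    metavar='metricName')

args = parser.parse_args(['trace.html', 'sampleMetric', 'powerMetric'])
print(args.trace_file_or_dir)  # trace.html
print(args.metrics)            # ['sampleMetric', 'powerMetric']
```

One subtlety worth noting: CPython's argparse converts `REMAINDER` values without checking them against `choices` ("REMAINDER arguments convert all values, checking none" in the implementation), so the `choices=all_metrics` keyword retained in the patch is not actually enforced for the new multi-metric argument; only the help text benefits from it.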
