Chromium Code Reviews

Unified Diff: tools/perf/metrics/smoothness.py

Issue 23506030: telemetry: Add new metrics to smoothness benchmark. (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Replaced 'score' with inverse RMS frame time. Created 7 years, 3 months ago
Index: tools/perf/metrics/smoothness.py
diff --git a/tools/perf/metrics/smoothness.py b/tools/perf/metrics/smoothness.py
index cfb19b093c372530fd0dee77365f7a69e65922b2..e228cd7fb693777fadafe3ff9dcdb8823afb3aca 100644
--- a/tools/perf/metrics/smoothness.py
+++ b/tools/perf/metrics/smoothness.py
@@ -4,6 +4,7 @@
 import os
 from telemetry.core import util
+from metrics import discrepancy
 TIMELINE_MARKER = 'smoothness_scroll'
@@ -88,6 +89,21 @@ def Average(numerator, denominator, scale = None, precision = None):
     avg = round(avg, precision)
   return avg
+def DivideIfPossibleOrZero(numerator, denominator):
+  """Returns numerator / denominator, or 0.0 if the denominator is zero."""
+  if not denominator:
+    return 0.0
+  else:
+    return numerator / denominator
+
+def GeneralizedMean(values, exponent):
+  """Returns the generalized (power) mean of values.
+  See http://en.wikipedia.org/wiki/Generalized_mean.
+  An exponent of 2.0 yields the root mean square (RMS).
+  """
+  if not values:
+    return 0.0
+  sum_of_powers = 0.0
+  for v in values:
+    sum_of_powers += v ** exponent
+  return (sum_of_powers / len(values)) ** (1.0 / exponent)
+
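Note: with exponent 2.0, GeneralizedMean is the root mean square (RMS), which weights long frame times more heavily than the arithmetic mean does. A minimal standalone sketch of that special case (the rms helper below is illustrative only, not part of this patch):

  import math

  def rms(values):
    # Illustrative equivalent of GeneralizedMean(values, 2.0) above.
    if not values:
      return 0.0
    return math.sqrt(sum(v * v for v in values) / len(values))

  # A single 48 ms hitch among 16 ms frames pulls the RMS well above the
  # arithmetic mean: mean([16, 16, 48]) ~= 26.7, rms([16, 16, 48]) ~= 30.6.
  print rms([16.0, 16.0, 48.0])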
 def CalcFirstPaintTimeResults(results, tab):
   if tab.browser.is_content_shell:
     results.Add('first_paint', 'ms', 'unsupported')
@@ -110,9 +126,24 @@ def CalcFirstPaintTimeResults(results, tab):
 def CalcResults(benchmark_stats, results):
nduca 2013/09/05 17:57:41 you're saying screen frame throughout here but as
ernstm 2013/09/05 18:38:07 The name originates from the numFramesSentToScreen
   s = benchmark_stats
+  frame_times = []
nduca 2013/09/05 17:57:41 screen_frame_xxx here and elsewhere
+  for i in xrange(1, len(s.screen_frame_timestamps)):
+    frame_times.append(
+        s.screen_frame_timestamps[i] - s.screen_frame_timestamps[i-1])
+
   # Scroll Results
   results.Add('mean_frame_time', 'ms',
               Average(s.total_time, s.screen_frame_count, 1000, 3))
+  results.Add('absolute_frame_discrepancy', '',
+              round(discrepancy.FrameDiscrepancy(s.screen_frame_timestamps,
+                                                 True), 4))
+  results.Add('relative_frame_discrepancy', '',
nduca 2013/09/05 17:57:41 people are going to be very confused about the dif
ernstm 2013/09/05 18:38:07 I've put in both, so that we can evaluate which on
+              round(discrepancy.FrameDiscrepancy(s.screen_frame_timestamps,
+                                                 False), 4))
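Note: FrameDiscrepancy itself lives in metrics/discrepancy.py and is not part of this diff; judging from the call sites, the boolean argument selects between the absolute and relative variants. As a rough sketch of the general idea only, here is the standard 1-D star discrepancy, which measures how far a set of timestamps strays from perfectly even spacing (an assumption about the underlying concept, not the actual Chromium implementation):

  def StarDiscrepancy(timestamps):
    # 0.0 means perfectly uniform timestamps; larger values mean the
    # frames are bunched together, i.e. the rendering was janky.
    if len(timestamps) < 2:
      return 0.0
    ts = sorted(timestamps)
    t0, span = ts[0], ts[-1] - ts[0]
    if span <= 0:
      return 0.0
    xs = [(t - t0) / span for t in ts]  # normalize to [0, 1]
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
      # Largest gap between the empirical CDF and the uniform CDF.
      d = max(d, (i + 1.0) / n - x, x - float(i) / n)
    return d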
+  results.Add('inverse_rms_frame_time', '',
+              round(DivideIfPossibleOrZero(1000.0,
nduca 2013/09/05 17:57:41 this is the key metric, no? can we make this less
ernstm 2013/09/05 18:38:07 The thing about this metric is that the exponent i
+                                           GeneralizedMean(frame_times, 2.0)),
+                    2))
   results.Add('dropped_percent', '%',
               Average(s.dropped_frame_count, s.screen_frame_count,
                       100, 1),
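For reference, the new inverse_rms_frame_time metric reduces to 1000 / RMS(frame time deltas), i.e. an "effective frames per second" figure that penalizes jank, because long frames dominate the RMS. A small worked example reusing the helpers from this patch (the timestamp values are made up):

  # Three frames at ~60 Hz followed by one 66 ms hitch (hypothetical data).
  timestamps = [0.0, 16.7, 33.4, 50.1, 116.1]  # ms
  frame_times = [b - a for a, b in zip(timestamps, timestamps[1:])]
  rms_ms = GeneralizedMean(frame_times, 2.0)              # ~36.0 ms
  effective_fps = DivideIfPossibleOrZero(1000.0, rms_ms)  # ~27.8

  # The plain mean frame time is 116.1 / 4 ~= 29.0 ms (~34.5 fps); the
  # RMS-based figure is lower because the 66 ms frame is weighted
  # quadratically, which is exactly the jank penalty this metric is after.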