Created: 5 years, 5 months ago by cpaulin (no longer in chrome)
Modified: 5 years, 2 months ago
CC: chromium-reviews, telemetry-reviews_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description: Telemetry Test for WebRTC Rendering.
Design doc: https://docs.google.com/document/d/1TX0NnT_xxvnRNDkw6P2kxunXyK_I8OChkLb4jehXR9I/edit
BUG=
CQ_EXTRA_TRYBOTS=tryserver.chromium.perf:linux_perf_bisect;tryserver.chromium.perf:mac_10_10_perf_bisect;tryserver.chromium.perf:win_perf_bisect;tryserver.chromium.perf:android_nexus5_perf_bisect
Committed: https://crrev.com/4db46d73b08de41f0d838408b9b79b8844769489
Cr-Commit-Position: refs/heads/master@{#353615}
Committed: https://crrev.com/48cd49f762c805387937845ca64528f1dbc975ba
Cr-Commit-Position: refs/heads/master@{#354111}
Patch Set 1
Patch Set 2: Added rendering_length_error and normalized drift_time and smoothness_score
Total comments: 20
Patch Set 3: Move to TBM design (not ready for review)
Total comments: 2
Patch Set 4: Not for review yet
Total comments: 1
Patch Set 5: Timeline based measurement version of WebRtc rendering test
Patch Set 6: Modified as per review comments
Total comments: 30
Patch Set 7: Clean up code for readability after review comments
Patch Set 8: Minor changes: remove one print statement and add dots
Total comments: 78
Patch Set 9: Added unit test to CL
Total comments: 12
Patch Set 10: Accommodate 'Serial' field definition change and minor fixes
Total comments: 24
Patch Set 11: Implemented changes from reviewer
Patch Set 12: Changes to stats output
Total comments: 2
Patch Set 13: Removed magic from TimeStats...
Patch Set 14: Use improvement direction for smoothness and freezing scores
Patch Set 15: Added improvement direction to all result metrics and removed frame_distribution from result.html
Patch Set 16: Fixed merge conflicts in tools/telemetry/telemetry/web_perf/timeline_based_measurement.py
Patch Set 17: Use telemetry stats functions
Total comments: 1
Patch Set 18: Remove spaces in cc_filter as workaround for telemetry bug
Patch Set 19: Final cleanup: no need to add stddev and mean to result file
Patch Set 20: Added copyright header to webrtc_rendering_stats_unittest.py
Patch Set 21: Use SetExtraBrowserOptions instead of CustomizeBrowserOptions in benchmark
Patch Set 22: Moved loopback_peerconnection.html to page_set/webrtc_rendering to avoid unnecessary deps in test/bu…
Patch Set 23: Remove the flag enable-rtc-smoothness-algorithm now as it crashes the browser
Patch Set 24: Remove mocks to fix border effects on subsequent tests

Messages
Total messages: 143 (53 generated)
cpaulin@google.com changed reviewers: + nednguyen@google.com, phoglund@chromium.org
cpaulin@google.com changed reviewers: + qiangchen@google.com
cpaulin@google.com changed reviewers: + nednguyen@chromium.org, qiangchen@chromium.org - qiangchen@google.com
I'm not familiar with the browser test framework, so I just focused on the math part of this CL. One thing I just came up with: if WebMediaPlayerMS dropped a frame, then this code will not work properly, as it cannot detect a frame with cadence 0. https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... File tools/perf/measurements/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:67: weight = sum([bucket[ticks] for ticks in bucket]) Possibly name this variable num_frames. https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:68: population= sum([ticks * bucket[ticks] for ticks in bucket]) Possibly name this variable num_vsyncs. https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:84: {'frozen_frames' : ticks -1 , Question: why do we take ticks - 1 here? https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:143: [x for x in norm_drift_time if abs(x) > 2e6 / DISPLAY_HERTZ]) How about defining a variable, say vsync_duration = 1e6 / DISPLAY_HERTZ, instead of doing this computation here and there? https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:155: population = sum([n * bucket[n] for n in bucket]) ditto
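A minimal sketch of the constant extraction suggested here; DISPLAY_HERTZ and the list comprehension come from the quoted diff, while the sample drift values are invented:

  DISPLAY_HERTZ = 60
  VSYNC_DURATION_US = 1e6 / DISPLAY_HERTZ  # One vsync interval, in microseconds.

  norm_drift_time = [-5000.0, 12000.0, 40000.0]  # Illustrative values only.

  # A frame counts as badly out of sync when it drifts by more than two vsyncs.
  frames_badly_out_of_sync = len(
      [x for x in norm_drift_time if abs(x) > 2 * VSYNC_DURATION_US])
  assert frames_badly_out_of_sync == 1  # Only 40000.0 exceeds ~33333 us.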
Hmm, yeah, what's the status of this patch?... Should we review it?
On 2015/08/31 08:22:49, phoglund_chrome wrote: > Hmm, yeah, what's the status of this patch?... Should we review it? Ready for review. Thanks
On 2015/09/01 21:08:16, cpaulin wrote: > On 2015/08/31 08:22:49, phoglund_chrome wrote: > > Hmm, yeah, what's the status of this patch?... Should we review it? > > Ready for review. Thanks Ok, I don't think you mailed this CL out, which is why I didn't see it until qiangchenc sent his mail.
On 2015/09/02 09:17:30, phoglund_chrome wrote: > On 2015/09/01 21:08:16, cpaulin wrote: > > On 2015/08/31 08:22:49, phoglund_chrome wrote: > > > Hmm, yeah, what's the status of this patch?... Should we review it? > > > > Ready for review. Thanks > > Ok, I don't think you mailed this CL out, which is why I didn't see it until > qiangchenc sent his mail. Ok, anyway, to add a new measurement you should get someone from telemetry-team on early to see if the measurement is even viable. They're really careful to add new ones since they're quite hard to write. Do you have some kind of design doc as well?
phoglund@chromium.org changed reviewers: - nednguyen@chromium.org
I started reviewing, but there's a fundamental structural problem you need to solve first. Ned can elaborate. I quote from https://www.chromium.org/developers/telemetry/add_a_measurement: "A measurement is a python class that obtains a set of Numbers about a Page. Measurements must work for all pages, not just one specific page." Your current measurement only works with the page you wrote. It should be possible to make it work for any WebRTC-backed video tag, or maybe even any video tag. The good news is that, unless I misread, you don't really need to instrument any video tags or peer connections in javascript - you just read traces, right? In that case all you need to do is remove your demo page. You can probably remove your page set too, and just re-use the webrtc_cases page set. All of those represent a nice set of WebRTC demo pages with both peerconnection calls and local camera playback through gUM. You should really implement your new WebMediaPlayer stats as a _metric_ (you've currently made it a measurement, which is wrong) and add it to https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure.... For an example of how to properly implement a metric, see https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/metrics.... Let me know if you have any questions. https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... File chrome/test/data/webrtc_rendering/loopback_peerconnection.html (right): https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:9: durationMS = 30000; Style guide: Use NAMES_LIKE_THIS for constant values. https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:10: testProgress = 0; This variable is badly named - a better name would be testDone, looking at how it gets read. Also, I think it's better to make a function isTestDone() that you call in the condition, to better encapsulate the variable. Finally, document where this is read. A casual glance may suggest the variable isn't ever read. https://codereview.chromium.org/1254023003/diff/20001/tools/perf/page_sets/we... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/20001/tools/perf/page_sets/we... tools/perf/page_sets/webrtc_rendering_measurement.py:33: 'loopback_peerconnection.html') You're not supposed to build page sets like this, you're supposed to "record" them using web page replay and use Telemetry's page set support. Ned, do you still allow having demo pages like this? See https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/page_se... for an example. In WebRTC we generally host the demo page as an example page on http://webrtc.github.io/samples/ and record it with wpr. In general you should construct your measurement so it works on any page with a WebRTC-backed video element on it. Or maybe it could even work on any video element regardless of source?
Please demonstrate how these metrics catch a regression according to our policy: https://docs.google.com/document/d/1bBKyYCW3VlUUPDpQE4xvrMFdA6tovQMZoqO9KCcmq... https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... File tools/perf/measurements/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:182: class WebRTCRendering(page_test.PageTest): Please don't add another page_test & use TimelineBasedMeasurement's structure instead. It's less code to write for you & more existing features that you can reuse (for example: TBM upload the trace to cloud storage at the end of test run so you can view the trace on the dashboard) For example of how to add a timeline_based_metric, see: https://codereview.chromium.org/1104053006/ https://codereview.chromium.org/1254023003/diff/20001/tools/perf/page_sets/we... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/20001/tools/perf/page_sets/we... tools/perf/page_sets/webrtc_rendering_measurement.py:10: class WebrtcRenderingMeasurementPage(page_module.Page): s/WebrtcRenderingMeasurement/WebrtcRendering https://codereview.chromium.org/1254023003/diff/20001/tools/perf/page_sets/we... tools/perf/page_sets/webrtc_rendering_measurement.py:33: 'loopback_peerconnection.html') On 2015/09/02 11:27:45, phoglund_chrome wrote: > You're not supposed to build page sets like this, you're supposed to "record" > them using web page replay and use Telemetry's page set support. Ned, do you > still allow having demo pages like this? See > https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/page_se... > for an example. In WebRTC we generally host the demo page as an example page on > http://webrtc.github.io/samples/ and record it with wpr. > > In general you should construct your measurement so it works on any page with a > WebRTC-backed video element on it. Or maybe it could even work on any video > element regardless of source? Telemetry page can have file url (and I indeed prefer this over recorded WPR due to the fact that we can't record & replay page deterministically all the time). For example: https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/page_se...
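For concreteness, a sketch of what a file URL page set could look like, in the style of the example Ned links to. This is a hypothetical illustration: the class names are invented, and the Telemetry page/page_set constructor signatures of the time are assumed.

  from telemetry.page import page as page_module
  from telemetry.page import page_set as page_set_module


  class WebrtcRenderingPage(page_module.Page):
    """Hypothetical page wrapping the local loopback demo file."""

    def __init__(self, page_set):
      super(WebrtcRenderingPage, self).__init__(
          url='file://webrtc_rendering/loopback_peerconnection.html',
          page_set=page_set)


  class WebrtcRenderingPageSet(page_set_module.PageSet):
    """Hypothetical page set holding just the one demo page."""

    def __init__(self):
      super(WebrtcRenderingPageSet, self).__init__()
      self.AddUserStory(WebrtcRenderingPage(self))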
> Telemetry page can have file url (and I indeed prefer this over recorded WPR due > to the fact that we can't record & replay page deterministically all the time). > > For example: > https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/page_se... Ok, then I shall bow to your wisdom :) Nonetheless, I think it's a good idea to rewrite the measurement as a metric and add it to the WebRTC measurement. It'll be one measurement and benchmark less, and I think these metrics make a lot of sense for all WebRTC pages.
On 2015/09/02 09:18:34, phoglund_chrome wrote: > On 2015/09/02 09:17:30, phoglund_chrome wrote: > > On 2015/09/01 21:08:16, cpaulin wrote: > > > On 2015/08/31 08:22:49, phoglund_chrome wrote: > > > > Hmm, yeah, what's the status of this patch?... Should we review it? > > > > > > Ready for review. Thanks > > > > Ok, I don't think you mailed this CL out, which is why I didn't see it until > > qiangchenc sent his mail. > > Ok, anyway, to add a new measurement you should get someone from telemetry-team > on early to see if the measurement is even viable. They're really careful to add > new ones since they're quite hard to write. Do you have some kind of design doc > as well? Design doc https://docs.google.com/document/d/1TX0NnT_xxvnRNDkw6P2kxunXyK_I8OChkLb4jehXR...
> > Ok, then I shall bow to your wisdom :) Nontheless, I think it's a good idea to > rewrite the measurement as a metric and add to the WebRTC measurement. It'll be > one measurement and benchmark less and I think these metrics make a lot of sense > for all WebRTC pages. Any thoughts on this, Ned? The WebRTC measurement currently looks like this: https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure... Or is there any reason why we should make a new measurement, rather than just add the new metric to the WebRTC measurement?
https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... File tools/perf/measurements/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:182: class WebRTCRendering(page_test.PageTest): On 2015/09/02 16:26:46, nednguyen wrote: > Please don't add another page_test & use TimelineBasedMeasurement's structure > instead. It's less code to write for you & more existing features that you can > reuse (for example: TBM upload the trace to cloud storage at the end of test run > so you can view the trace on the dashboard) > > > For example of how to add a timeline_based_metric, see: > https://codereview.chromium.org/1104053006/ Right, and so most of this code should be a Metric which gets imported by the Measurement?
On 2015/09/04 09:16:13, phoglund_chrome wrote: > > > > Ok, then I shall bow to your wisdom :) Nontheless, I think it's a good idea to > > rewrite the measurement as a metric and add to the WebRTC measurement. It'll > be > > one measurement and benchmark less and I think these metrics make a lot of > sense > > for all WebRTC pages. > > Any thoughts on this, Ned? The WebRTC measurement currently looks like this: > https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure... > > Or is there any reason why we should make a new measurement, rather than just > add the new metric to the WebRTC measurement? Adding new metrics to the existing WebRTC measurement is also fine. I prefer adding the new metric to TimelineBasedMeasurement because that's the model that telemetry tests gravitate toward. We will eventually put all of our important metrics there, including: power (https://docs.google.com/document/d/1wzCn52iB1gJUPyEnXT0W3LaCdgGa-n0Eh4z9jJKB1hs), memory (Primiano's team), cpu time, etc. Also see this explainer for why TimelineBasedMeasurement is the preferred model: https://docs.google.com/document/d/10G0PbePQOwJao57Mu6Xr7Fx0o2ng8fwZ33wUqpnsy...
https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... File chrome/test/data/webrtc_rendering/loopback_peerconnection.html (right): https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:8: Add: 'use strict';
> https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure... > > > > Or is there any reason why we should make a new measurement, rather than just > > add the new metric to the WebRTC measurement? > > Adding new metrics to existing WebRTC measurement is also fine. I prefer adding > the new metric to TimelineBasedMeasurement because that's model that telemetry > tests are gravitate to. We eventually will put all of our important metrics > there, including: > power(https://docs.google.com/document/d/1wzCn52iB1gJUPyEnXT0W3LaCdgGa-n0Eh4z9jJKB1hs), > memory (Primiano's team), cpu time, etc... > > Also see this explainer for why TimelineBasedMeasurement is the preferred model: > https://docs.google.com/document/d/10G0PbePQOwJao57Mu6Xr7Fx0o2ng8fwZ33wUqpnsy... Right, so then the options are to either write a new measurement, or add to the WebRTC one and refactor WebRTC to be timeline-based. Perhaps we should move ahead with writing a new measurement for now then and merge them later when WebRTC is refactored?
On 2015/09/07 06:48:29, phoglund_chrome wrote: > > > https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure... > > > > > > Or is there any reason why we should make a new measurement, rather than > just > > > add the new metric to the WebRTC measurement? > > > > Adding new metrics to existing WebRTC measurement is also fine. I prefer > adding > > the new metric to TimelineBasedMeasurement because that's model that telemetry > > tests are gravitate to. We eventually will put all of our important metrics > > there, including: > > > power(https://docs.google.com/document/d/1wzCn52iB1gJUPyEnXT0W3LaCdgGa-n0Eh4z9jJKB1hs), > > memory (Primiano's team), cpu time, etc... > > > > Also see this explainer for why TimelineBasedMeasurement is the preferred > model: > > > https://docs.google.com/document/d/10G0PbePQOwJao57Mu6Xr7Fx0o2ng8fwZ33wUqpnsy... > > Right, so then the options are to either write a new measurement, or add to the > WebRTC one and refactor WebRTC to be timeline-based. Perhaps we should move > ahead with writing a new measurement for now then and merge them later when > WebRTC is refactored? I am thinking of doing the new measurement approach at this point and am learning how to rewrite it using TBM.
> > > > Right, so then the options are to either write a new measurement, or add to > the > > WebRTC one and refactor WebRTC to be timeline-based. Perhaps we should move > > ahead with writing a new measurement for now then and merge them later when > > WebRTC is refactored? > > I am thinking of doing the new measurement approach at this point and am > learning how to rewrite it using TBM. SGTM. Perhaps you can look at refactoring the WebRTC telemetry tests to TBM later? :)
On 2015/09/09 07:30:06, phoglund_chrome wrote: > > > > > > Right, so then the options are to either write a new measurement, or add to > > the > > > WebRTC one and refactor WebRTC to be timeline-based. Perhaps we should move > > > ahead with writing a new measurement for now then and merge them later when > > > WebRTC is refactored? > > > > I am thinking of doing the new measurement approach at this point and am > > learning how to rewrite it using TBM. > > SGTM. Perhaps you can look at refactoring the WebRTC telemetry tests to TBM > later? :) Patrik, what do we mean exactly by a new measurement? My CL already introduces a new measurement; in what way would another new measurement be different?
On 2015/09/08 20:44:07, cpaulin wrote: > On 2015/09/07 06:48:29, phoglund_chrome wrote: > > > > > > https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure... > > > > > > > > Or is there any reason why we should make a new measurement, rather than > > just > > > > add the new metric to the WebRTC measurement? > > > > > > Adding new metrics to existing WebRTC measurement is also fine. I prefer > > adding > > > the new metric to TimelineBasedMeasurement because that's model that > telemetry > > > tests are gravitate to. We eventually will put all of our important metrics > > > there, including: > > > > > > power(https://docs.google.com/document/d/1wzCn52iB1gJUPyEnXT0W3LaCdgGa-n0Eh4z9jJKB1hs), > > > memory (Primiano's team), cpu time, etc... > > > > > > Also see this explainer for why TimelineBasedMeasurement is the preferred > > model: > > > > > > https://docs.google.com/document/d/10G0PbePQOwJao57Mu6Xr7Fx0o2ng8fwZ33wUqpnsy... > > > > Right, so then the options are to either write a new measurement, or add to > the > > WebRTC one and refactor WebRTC to be timeline-based. Perhaps we should move > > ahead with writing a new measurement for now then and merge them later when > > WebRTC is refactored? > > I am thinking of doing the new measurement approach at this point and am > learning how to rewrite it using TBM. I am having trouble finding how TBM sets the chrome browser options if it does not call a perf/measurement wrapper class. Am I missing something?
On 2015/09/10 01:15:08, cpaulin wrote: > On 2015/09/08 20:44:07, cpaulin wrote: > > On 2015/09/07 06:48:29, phoglund_chrome wrote: > > > > > > > > > > https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure... > > > > > > > > > > Or is there any reason why we should make a new measurement, rather than > > > just > > > > > add the new metric to the WebRTC measurement? > > > > > > > > Adding new metrics to existing WebRTC measurement is also fine. I prefer > > > adding > > > > the new metric to TimelineBasedMeasurement because that's model that > > telemetry > > > > tests are gravitate to. We eventually will put all of our important > metrics > > > > there, including: > > > > > > > > > > power(https://docs.google.com/document/d/1wzCn52iB1gJUPyEnXT0W3LaCdgGa-n0Eh4z9jJKB1hs), > > > > memory (Primiano's team), cpu time, etc... > > > > > > > > Also see this explainer for why TimelineBasedMeasurement is the preferred > > > model: > > > > > > > > > > https://docs.google.com/document/d/10G0PbePQOwJao57Mu6Xr7Fx0o2ng8fwZ33wUqpnsy... > > > > > > Right, so then the options are to either write a new measurement, or add to > > the > > > WebRTC one and refactor WebRTC to be timeline-based. Perhaps we should move > > > ahead with writing a new measurement for now then and merge them later when > > > WebRTC is refactored? > > > > I am thinking of doing the new measurement approach at this point and am > > learning how to rewrite it using TBM. > > I am having trouble finding how TBM sets the chrome browser options if it does > not call a perf/measurement wrapper class. Am I missing something? browser options can be set in the benchmark https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te...
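As a sketch, this is roughly what that looks like; SetExtraBrowserOptions is the hook the later patch sets settle on, while the fake-device flags shown here are illustrative rather than taken from this CL.

  from core import perf_benchmark


  class WebrtcRendering(perf_benchmark.PerfBenchmark):
    """Sketch of a TBM benchmark that sets its own browser options."""

    def SetExtraBrowserOptions(self, options):
      # Fake capture devices let the loopback call run without a real
      # camera, and the fake UI suppresses the getUserMedia prompt.
      options.AppendExtraBrowserArgs('--use-fake-device-for-media-stream')
      options.AppendExtraBrowserArgs('--use-fake-ui-for-media-stream')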
> > Patrik, what do we mean exactly by new measurement, since my CL introduces > already a new measurement, in what way would another new measurement be > different? I am referring to the new measurement you're introducing. We should probably not make two new measurements! :)
On 2015/09/10 07:38:28, phoglund_chrome wrote: > > > > Patrik, what do we mean exactly by new measurement, since my CL introduces > > already a new measurement, in what way would another new measurement be > > different? > > I am referring to the new measurement you're introducing. We should probably not > make two new measurements! :) Thanks. I moved to TBM and am debugging it right now. Patch set 3 is not ready for review yet; I have an issue with no metrics showing up in the results.
https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... tools/perf/page_sets/webrtc_rendering_measurement.py:24: repeatable=False): Looking at the trace, I can't find the Action_Create_PeerConnection event being created. Does any navigation happen in the middle of this? That would blow up the current renderer process and make the creation of the interaction record fail.
https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... tools/perf/page_sets/webrtc_rendering_measurement.py:24: repeatable=False): On 2015/09/11 00:56:32, nednguyen wrote: > Looking at the trace, I can't find Action_Create_PeerConnection event created. > Do we have any navigation that happens in the middle of this? That would blow up > the current renderer process and make the creation of interaction record fails. Oh, it looks like it's because we have non-stop JavaScript?
On 2015/09/11 00:58:44, nednguyen wrote: > https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... > File tools/perf/page_sets/webrtc_rendering_measurement.py (right): > > https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... > tools/perf/page_sets/webrtc_rendering_measurement.py:24: repeatable=False): > On 2015/09/11 00:56:32, nednguyen wrote: > > Looking at the trace, I can't find Action_Create_PeerConnection event created. > > Do we have any navigation that happens in the middle of this? That would blow > up > > the current renderer process and make the creation of interaction record > fails. > > Oh, it looks like because we have a non-stop javascript? Ned, I am not sure I understand your comment, can you please elaborate?
On 2015/09/14 18:09:30, cpaulin wrote: > On 2015/09/11 00:58:44, nednguyen wrote: > > > https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... > > File tools/perf/page_sets/webrtc_rendering_measurement.py (right): > > > > > https://codereview.chromium.org/1254023003/diff/40001/tools/perf/page_sets/we... > > tools/perf/page_sets/webrtc_rendering_measurement.py:24: repeatable=False): > > On 2015/09/11 00:56:32, nednguyen wrote: > > > Looking at the trace, I can't find Action_Create_PeerConnection event > created. > > > Do we have any navigation that happens in the middle of this? That would > blow > up > > the current renderer process and make the creation of interaction record > > fails. > > > > Oh, it looks like because we have a non-stop javascript? > > Ned, I am not sure I understand your comment, can you please elaborate? For some reason, the following is observed in the code: with action_runner.CreateInteraction('Action_Create_PeerConnection', repeatable=False): no interaction event named Action_Create_PeerConnection is found. I do see my WebMediaPlayerMS::UpdateCurrentFrame events, but the TBM module is not able to find them. I think there is a bug in the TBM module, since the CreateInteraction call does not guarantee that the corresponding time events will be created. This makes the TBM method unreliable in my case, and probably prone to flakiness or, worse, not workable for me. Still digging and trying to characterize the problem.
https://codereview.chromium.org/1254023003/diff/60001/tools/perf/benchmarks/w... File tools/perf/benchmarks/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/60001/tools/perf/benchmarks/w... tools/perf/benchmarks/webrtc_rendering.py:23: filter_string='cc') The bug here is that the filter string is too narrow, so the trace events that represent the interaction record are suppressed. To fix it, change this to filter_string='cc,webkit.console,blink.console'
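A sketch of the corrected benchmark hook, assuming the Telemetry TBM API of the time (TracingCategoryFilter plus timeline_based_measurement.Options; the module paths are approximate):

  from core import perf_benchmark
  from telemetry.timeline import tracing_category_filter
  from telemetry.web_perf import timeline_based_measurement


  class WebRTCRendering(perf_benchmark.PerfBenchmark):

    def CreateTimelineBasedMeasurementOptions(self):
      # webkit.console/blink.console carry the interaction-record events
      # emitted by action_runner.CreateInteraction; filtering them out
      # means the interaction record can never be found in the trace.
      cat_filter = tracing_category_filter.TracingCategoryFilter(
          filter_string='cc,webkit.console,blink.console')
      return timeline_based_measurement.Options(overhead_level=cat_filter)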
On 2015/08/28 21:26:02, qiangchenC wrote: > I'm not familiar with browser test framework, so I just focused on the math part > of this CL. > > One thing I just come up with is that > If WebMediaPlayerMS dropped a frame, then this code will not work properly as it > cannot detect a frame with cadence 0. > > https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... > File tools/perf/measurements/webrtc_rendering.py (right): > > https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... > tools/perf/measurements/webrtc_rendering.py:67: weight = sum([bucket[ticks] for > ticks in bucket]) > Possibly name this variable as num_frames. > > https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... > tools/perf/measurements/webrtc_rendering.py:68: population= sum([ticks * > bucket[ticks] for ticks in bucket]) > Possibly name this variable as num_vsyncs. > > https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... > tools/perf/measurements/webrtc_rendering.py:84: {'frozen_frames' : ticks -1 , > Question: why we take ticks-1 here? > > https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... > tools/perf/measurements/webrtc_rendering.py:143: [x for x in norm_drift_time if > abs(x) > 2e6 / DISPLAY_HERTZ]) > How about defining a variable say vsync_duration = 1e6 / DISPLAY_HERTZ instead > of do this computation here and there. > > https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... > tools/perf/measurements/webrtc_rendering.py:155: population = sum([n * bucket[n] > for n in bucket]) > ditto Qiang, how can we tell that we dropped a frame looking at the traces?
Patch Set 6 is out, please review at your convenience https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... File chrome/test/data/webrtc_rendering/loopback_peerconnection.html (right): https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:8: On 2015/09/04 15:46:40, nednguyen wrote: > Add: 'use strict'; Acknowledged. https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:9: durationMS = 30000; On 2015/09/02 11:27:45, phoglund_chrome wrote: > Style guide: Use NAMES_LIKE_THIS for constant values. Acknowledged. https://codereview.chromium.org/1254023003/diff/20001/chrome/test/data/webrtc... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:10: testProgress = 0; On 2015/09/02 11:27:45, phoglund_chrome wrote: > This variable is badly named - a better name would be testDone, looking at how > it gets read. Also, I think it's better to make a function isTestDone() that you > call in the condition, to better encapsulate the variable. Finally, document > where this is read. A casual glance may suggest the variable isn't ever read. Acknowledged. https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... File tools/perf/measurements/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:67: weight = sum([bucket[ticks] for ticks in bucket]) On 2015/08/28 21:26:01, qiangchenC wrote: > Possibly name this variable as num_frames. Done. https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:68: population= sum([ticks * bucket[ticks] for ticks in bucket]) On 2015/08/28 21:26:01, qiangchenC wrote: > Possibly name this variable as num_vsyncs. Done. https://codereview.chromium.org/1254023003/diff/20001/tools/perf/measurements... tools/perf/measurements/webrtc_rendering.py:84: {'frozen_frames' : ticks -1 , On 2015/08/28 21:26:01, qiangchenC wrote: > Question: why we take ticks-1 here? Because if you have 1 source frame and it displayed 4 times, I think the number of frozen frames is 3. Does it make sense? https://codereview.chromium.org/1254023003/diff/20001/tools/perf/page_sets/we... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/20001/tools/perf/page_sets/we... tools/perf/page_sets/webrtc_rendering_measurement.py:10: class WebrtcRenderingMeasurementPage(page_module.Page): On 2015/09/02 16:26:46, nednguyen wrote: > s/WebrtcRenderingMeasurement/WebrtcRendering Done.
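To make the ticks - 1 convention discussed above concrete, a tiny worked example (the distribution values are invented):

  # Map of vsyncs-on-screen -> number of source frames with that cadence.
  frame_distribution = {1: 110, 2: 41, 4: 2}

  # A frame shown for `ticks` vsyncs was frozen for ticks - 1 of them:
  # 1 source frame displayed 4 times yields 3 frozen (repeated) frames.
  frozen_frames = sum(
      (ticks - 1) * count for ticks, count in frame_distribution.items())
  assert frozen_frames == 41 * 1 + 2 * 3  # 47 frozen frames in total.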
I think the general structure looks good, but the python code needs a lot of work (and a unit test). https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... File chrome/test/data/webrtc_rendering/loopback_peerconnection.html (right): https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:22: // Actual test object. Well, actually this is a constructor for a prototype class; the object is instantiated on line 17. I'd just remove this comment. https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:38: setTimeout(function() {testDone = 1}, DURATION_MS); Ned can weigh in here, but I feel it's healthier to sleep in Telemetry instead of in the javascript. That is, instead of waiting for testDone to be set in the python code, simply do command = 'testCamera([%s, %s]);' % (WIDTH, HEIGHT) action_runner.ExecuteJavaScript(command) action_runner.Wait(30) ... This makes it a lot easier for maintainers to tweak the test length in the future. Also less interaction between the javascript and test page -> more flexibility to adapt to other page sets later. https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:125: window.onload = testCamera([1280, 720]); Umm, this onload can't be right. testCamera is invoked from the python code, right? https://codereview.chromium.org/1254023003/diff/100001/tools/perf/benchmarks/... File tools/perf/benchmarks/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/100001/tools/perf/benchmarks/... tools/perf/benchmarks/webrtc_rendering.py:16: class WebRTCRendering(perf_benchmark.PerfBenchmark): Nit: always two blank lines before a top-level class or function (apply throughout patch). https://codereview.chromium.org/1254023003/diff/100001/tools/perf/benchmarks/... tools/perf/benchmarks/webrtc_rendering.py:23: filter_string='webrtc, webkit.console, blink.console') Nit: indent 4 on line break https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:9: WIDTH = 1280 Since you're just using these constants once I think you might as well inline them for simplicity. https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:19: self.webrtc_rendering = True What is this used for? https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:37: self.webrtc_rendering = True And this? https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:24: class WebMediaPlayerMsRenderingStats(object): Move this to its own file? Also I would love a unit test for this class; there's a lot of logic in it. See, for instance, how I wrote a unit test for the WebRTC stats parser: https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/metrics....
https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:31: for event in events: This algorithm is a bit hard to read. So it's: for each event that has args defined and contains the arg 'Serial', sort the event into buckets on serial. To make the algorithm clearer you can separate the notion of what events we throw away, and how we sort those that are left. for event in events: if not event.args or 'Serial' not in event.args: # This event isn't interesting because xyz... continue event_serial_number = event.args['Serial'] if event_serial_number in self.serial_to_events: self.serial_to_events[event_serial_number].append(event) else: self.serial_to_events[event_serial_number] = [event] I find it a LOT easier to read what's going on in the above, and it should be equivalent. Note, you can even use setdefault to get rid of the second if. setdefault returns the value pointed to by the key, except if that key isn't mapped yet by the dict (in which case it inserts and returns the default). In that case: ... event_serial_number = event.args['Serial'] bucket = self.serial_to_events.setdefault(event_serial_number, []) bucket.append(event) Nice, right? To further improve readability I'd move the whole algorithm into a helper function so you just have self.serial_to_events = _MapEventsToSerial(events) For that matter, what the hell is the serial anyway? Try to explain this in the code or comments as well. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:45: if (event.args and 'Ideal Render Instant' in event.args consider ideal_render_instant = 'Ideal Render Instant' to avoid having to duplicate the string so much. That should make it a bit more resistant to bugs. Note, don't make it a global constant; a variable is fine (since it's only used by this function). https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:56: def Bucketize(self, cadence): "Bucketize" isn't a great name; what about ComputeSourceToOutputDistribution? https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:118: def InferTimeStats(self): You're going to have to rewrite this one quite a bit before I'm happy. I suggest you write a unit test and try to cover the most important parts first; it's going to help a lot when you do the refactoring! https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:122: self.relevant_events = self.events[stream] Ugh, don't do this - get rid of self.relevant_events. It has multiple problems: * It's used in this method, and in InferCadence. This puts a very implicit assumption on InferCadence - it looks like a nice public method, but it won't work unless it's called from this method or some other method that helpfully sets up the relevant events for it. This is bad design. Firstly, InferCadence should be _InferCadence to signify it's an internal helper method. Second, it should take the relevant events as an argument instead of reading from self. Then it's suddenly very clear that it works on a set of relevant events provided by someone else.
Also, your code will not quietly stop inferring cadence if someone moves where the relevant events are set. Instead, you will get a clean error saying the relevant events are not defined. * The constructor states it "sorts the relevant events", but relevant events are never populated there, just self.events. This is very confusing. Just make relevant_events a local variable here, and pass it as an argument to InferCadence. In general, something should only be put in self if it's part of the object's state - something that mutates over time and intrinsically belongs to the class. The fewer events you keep in self, the better. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:130: # Drift time between Ideal Render Instant and Actual Render Begin. The method really gets off to a good start, with clean helper methods to compute cadence, fps and so on. The rest of it really needs to be broken up. A good guideline is that a method should never be longer than 30 lines; otherwise it's doing too much. This method in particular is a mess of interlocking local variables, continue statements, etc. that make it impossible to reason about what it's doing. Can you promise me to never again write a method that's longer than 30 lines? :) You should be able to move lines 131 to 151 or so into a helper method. It should take relevant events as input and return the drift time list and the rendering length error. If your local variables trip you up, try moving computations that don't depend on each other so you can extract a clean helper method. Fortunately, you can return multiple values in Python. For the continue on line 150: try to check this earlier, if possible on line 123. In that case you can pass the *ideal_render variables as arguments to your new helper method. Or is there some other, easier way of finding out if we only have one event? https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:189: Here is how I would ideally want this method to look: drift_time = ComputeDriftTime(relevant_events) mean_drift_time = numpy.mean(drift_time) norm_drift_time = [...] percent_badly_oos = ComputePercentBadlyOutOfSync(norm_drift_time) That is, try to keep the computations out of the main method as much as possible. All that should remain is the management of how data depend on each other. In this case, for instance, percent_badly_oos depends on drift_time through a couple of intermediaries. That's legit to express in this function, but there's too much computation obstructing the interesting parts.
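Two small sketches tying together the suggestions in this review message. First, the sleep-in-Telemetry pattern, combining the action_runner calls quoted above with the CreateInteraction call seen earlier in the thread (the placement as a page method is assumed):

  def RunPageInteractions(self, action_runner):
    with action_runner.CreateInteraction('Action_Create_PeerConnection',
                                         repeatable=False):
      # The test length now lives in Telemetry, not in the page's javascript.
      action_runner.ExecuteJavaScript('testCamera([1280, 720]);')
      action_runner.Wait(30)

Second, the event-grouping helper proposed for the stats class, assuming events are Telemetry timeline events carrying an args dict:

  def _MapEventsToSerial(events):
    """Groups events by their 'Serial' argument, which identifies the stream."""
    serial_to_events = {}
    for event in events:
      if not event.args or 'Serial' not in event.args:
        continue  # Not a stream event, so not interesting here.
      serial_to_events.setdefault(event.args['Serial'], []).append(event)
    return serial_to_events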
Hi all, I have implemented the very helpful suggestions from phoglund@ and the new patch set 7 is ready for review ;-) https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... File chrome/test/data/webrtc_rendering/loopback_peerconnection.html (right): https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:22: // Actual test object. On 2015/09/17 11:23:58, phoglund_chrome wrote: > Well, actually this is a constructor for a prototype class; the object is > instantiated on line 17. I'd just remove this comment. Acknowledged. https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:38: setTimeout(function() {testDone = 1}, DURATION_MS); On 2015/09/17 11:23:58, phoglund_chrome wrote: > Ned can weigh in here, but I feel it's healthier to sleep in Telemetry instead > of in the javascript. That is, instead of waiting for testDone to be set in the > python code, simply do > > command = 'testCamera([%s, %s]);' % (WIDTH, HEIGHT) > action_runner.ExecuteJavaScript(command) > action_runner.Wait(30) > ... > > This makes a lot easier for maintainers to tweak the test length in the future. > Also less interaction between the javascript and test page -> more flexibility > to adapt to other page sets later. Acknowledged. https://codereview.chromium.org/1254023003/diff/100001/chrome/test/data/webrt... chrome/test/data/webrtc_rendering/loopback_peerconnection.html:125: window.onload = testCamera([1280, 720]); On 2015/09/17 11:23:58, phoglund_chrome wrote: > Umm, this onload can't be right. testCamera is invoked from the python code, > right? Acknowledged. https://codereview.chromium.org/1254023003/diff/100001/tools/perf/benchmarks/... File tools/perf/benchmarks/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/100001/tools/perf/benchmarks/... tools/perf/benchmarks/webrtc_rendering.py:16: class WebRTCRendering(perf_benchmark.PerfBenchmark): On 2015/09/17 11:23:58, phoglund_chrome wrote: > Nit: always two blank line before top-level class or function (apply throughout > patch). Done. https://codereview.chromium.org/1254023003/diff/100001/tools/perf/benchmarks/... tools/perf/benchmarks/webrtc_rendering.py:23: filter_string='webrtc, webkit.console, blink.console') On 2015/09/17 11:23:58, phoglund_chrome wrote: > Nit: indent 4 on line break Done. https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:9: WIDTH = 1280 On 2015/09/17 11:23:58, phoglund_chrome wrote: > Since you're just using these constants once I think you might as well inline > them for simplicity. Done. https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:19: self.webrtc_rendering = True On 2015/09/17 11:23:58, phoglund_chrome wrote: > What is this used for? Old artifact from when I did not know what I was doing ;-) https://codereview.chromium.org/1254023003/diff/100001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:37: self.webrtc_rendering = True On 2015/09/17 11:23:58, phoglund_chrome wrote: > And this? Done. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... 
File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:24: class WebMediaPlayerMsRenderingStats(object): On 2015/09/17 11:23:58, phoglund_chrome wrote: > Move this to its own file? Also I would love a unit test for this class; there's > a lot of logic in it. See for instance here how I wrote a unit test for the > WebRTC stats parser: > https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/metrics.... Done. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:31: for event in events: On 2015/09/17 11:23:58, phoglund_chrome wrote: > This algorithm is a bit hard to read. So it's > > for each event that has args defined and contains the arg 'Serial' > sort event into buckets on serial > > To make the algorithm clearer you can separate the notion of what events we > throw away, and how we sort those that are left. > > for event in events: > if not event.args or not 'Serial' in event.args: > # This event isn't interesting because xyz... > continue > event_serial_number = event.args['Serial'] > if event_serial_number in self.serial_to_events: > self.serial_to_events[event_serial_number].append(event) > else: > self.serial_to_events[event_serial_number] = [event] > > I find it a LOT easier to read what's going on in the above, and it should be > equivalent. > > Note, you can even use setdefault to get rid of the second if. setdefault > returns the value pointed to by the key, except if that key isn't mapped yet by > the dict (in which case it returns the default). In that case, > > ... > event_serial_number = event.args['Serial'] > bucket = self.serial_to_events.setdefault(event_serial_number, default=[]) > bucket.append(event) > > Nice, right? > > To further improve readability I'd move the whole algorithm into a helper > function so you just have > > self.serial_to_events = _MapEventsToSerial(events) > > For that matter, what the hell is the serial anyway? Try to explain this in the > code or comments as well. Done. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:45: if (event.args and 'Ideal Render Instant' in event.args On 2015/09/17 11:23:58, phoglund_chrome wrote: > consider > > ideal_render_instant = 'Ideal Render Instant' to avoid having to duplicate the > string so much. That should make it a bit more resistant to bugs. Note, don't > make it a global constant, a variable is fine (since it's only used by this > function). Acknowledged. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:118: def InferTimeStats(self): On 2015/09/17 11:23:58, phoglund_chrome wrote: > You're going to have to rewrite this one quite a bit before I'm happy. I suggest > you write a unit test and try to cover the most important parts first, it's > going to help a lot when you do the refactoring! Acknowledged. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:122: self.relevant_events = self.events[stream] On 2015/09/17 11:23:58, phoglund_chrome wrote: > Ugh, don't do this - get rid of self.relevant.events. 
> > It has multiple problems: > * It's used in this method, and in InferCadence. This puts a very implicit > assumption on InferCadence - it looks like a nice public method, but it won't > work unless it's called from this method or some other method that helpfully > sets up the relevant events for it. This is bad design. Firstly, InferCadence > should be _InferCadence to signify it's an internal helper method. Second, it > should take the relevant events as an argument instead of reading from self. > Then it's suddenly very clear that it works on a set of relevant events provided > by someone else. Also, your code will not quietly stop inferring cadence if > someone moves where the relevant events are set. Instead, you will get a clean > error saying the relevant events are not defined. > * The constructor states it "sorts the relevant events", but relevant events are > never populated there, just self.events. This is very confusing. > > Just make relevant_events a local variable here, and pass as an argument to > InferCadence. In general, something should only be put in self if it's part of > the object's state - something that mutates over time and intrinsically belongs > to the class. The less events you have, the better. Acknowledged. https://codereview.chromium.org/1254023003/diff/100001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:189: On 2015/09/17 11:23:58, phoglund_chrome wrote: > Here is how I would ideally want this method to look: > > drift_time = ComputeDriftTime(relevant_events) > mean_drift_time = numpy.mean(drift_time) > norm_drift_time = [...] > percent_badly_oos = ComputePercentBadlyOutOfSync(norm_drift_time) > > That is, try to keep the computations out of the main method as much as > possible. All that should remain is the management of how data depend on each > other. In this case, for instance, percent_badly_oos depends on drift_time > through a couple intermediaries. That's legit to express is in this function, > but there's too much computation obstructing the interesting parts. Patrik, I have grouped the stats by logical units: __ComputeDrifTimeStats: everything directly derived from drift_time, including rendering_length_error; __ComputeSmoothnesStats: everything directly involved in the smoothness score calculation, like the out-of-sync frames; __ComputeFreezingScore: everything that has to do with frozen frames. I think it makes sense to keep the related stats together, and it adds meaning.
Hi, patch set 8 is out with minor cosmetic changes.
I did another round on the python code. It's much better but needs some work. Still missing that unit test. We will probably need a couple more rounds. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:28: # The events of interest have a 'Serial' argument which Nice! Put this comment into the docstring, and use Args: and Returns: sections where appropriate. """Build a dictionary of events indexed by stream. The events of interest have a 'Serial' argument which represents the stream id. The 'Serial' argument identifies the source or remote nature of the stream with a single digit 0 or 1 as well as the URL blob associated with the video stream. So stream ::= (0|1)blob. Args: events Telemetry events. Returns: A dict of stream IDs mapped to events on that stream. """ https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:39: # This event is not something we are looking for "Not a stream event." https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:42: if stream in stream_to_events: I think events_for_stream = stream_to_events.setdefault(stream, []) events_for_stream.append(event) reads better than 42-44. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:48: def __InferCadence(self, relevant_events): In general, you're using Infer and Compute a lot. These don't add much information; when one names a method Compute or a class Agent, that's often a smell. Can you find more descriptive names? If not, Infer and Compute are fine if it makes the calling code read better. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:49: """Calculate the apparent cadence of the rendering.""" Try to describe what goes into the returned cadence list. Is it timestamps or what? Use Args: and Returns: https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:56: if (event.args and ideal_render_instant in event.args Rewrite this similarly to 37-45 above. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:69: # If the overall display distribution is A1:A2:..:An, Fold into docstring. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:73: bucket = {} Well, it's actually several buckets, right? Call it buckets. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:75: if ticks in bucket: Or replace 75-78 with ticks_so_far = buckets.setdefault(ticks, 0) buckets[ticks] = ticks_so_far + 1 https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme...
tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:81: def __InferFpsFromCadence(self, bucket): The name "bucket" doesn't make a lot of sense in the context of this method. The _caller_ happens to have its data stored in a bucket, but this method doesn't really care about that. The bucket is really a source-to-output distribution, right? source_to_output_dist would be a much better name, but it's maybe a bit long. What about frame_distribution? Either way, names are important and should be maximally descriptive in their current context. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:83: # The mean ratio is the barycenter Nit: ALWAYS end comments with . You need to get used to this! https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:84: number_frames = sum([bucket[ticks] for ticks in bucket]) This is simpler to write as sum(bucket.values()). https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:88: return fps Nit: inline the fps variable. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:90: def __InferFrozenFramesEvents(self, bucket): CountFrozenFrames? https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:92: # For simplicity we count as freezing the frames Fold into docstring https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:96: for ticks in bucket: This is fine, but you can also do frozen_ticks = [ticks for ticks in frame_distribution if ticks >= FROZEN_THRESHOLD] for frozen_tick in frozen_ticks: logging.debug(...) frozen_frames.append({ ... }) which is a bit more pythonic. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:98: logging.error('%s frames not updated after %s vsyncs', I don't think you should log an error in your telemetry test, that should be for errors in the test itself I believe. A debug log is fine. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:101: {'frozen_frames': ticks -1, Nit: space around - https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:102: 'occurences': bucket[ticks]}) Nit: it's spelled occurrences https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:107: # As mentioned earlier, we count for frozen anything above 6 vsync Fold into docstring https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:121: 8 * (number_frozen_frames - 4)) Nit: doesn't this fit in the above line? https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... 
tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:125: """Determines if a stream is remote or local.""" The docstring is a bit overkill here. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:128: def __ComputeDrifTimeStats(self, relevant_events, cadence): DriftTimeStats https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:130: # This method will calculate drift_time stats, Fold into docstring. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:135: # std_dev_drift_time = standard deviation of drit time Nit: drift https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:145: # Skip to next event because looking for a source frame. Nit: we're looking https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:148: event.args['Actual Render Begin'] - current_ideal_render) actual_render_begin = event.args['Actual Render Begin'] drift_time.append(actual_render_begin - current_ideal_render) reads better. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:152: index += 1 It would be nice to get rid of this index. However, I don't understand the connection between an event and an entry in the cadence list. How do you know you should use index 0 for the first source frame, and 1 for the second? Are you sure there aren't going to be more events than entries in the cadence list? https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:158: # Some stats on drift time. Nit: in general, blank line before comments inside methods. Apply throughout. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:159: mean_drift_time = numpy.mean(drift_time) These are all computed from drift_time; do this outside this method. Since they're so short, you can do them in the InferTimeStats method (maybe even inline in the stats dict)? https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:162: result = (drift_time, norm_drift_time, mean_drift_time, std_dev_drift_time, Just write this as return drift_time, rendering_length_error (if you get rid of the mean, stddev, etc. at least) https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:166: def __ComputeSmoothnesStats(self, norm_drift_time): Nit: Smoothness https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:181: # Calculate smoothness metric. Nit: again, blank line before comment. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:185: # Minimun smoothness_score value allowed is zero.
Minimum https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:188: results = (percent_badly_oos, percent_out_of_sync, return percent_badly_oos, percent_out_of_sync, smoothness_score https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:198: # Calculate freezing metric. Nit: in general, try to pull up comments so you fill out the 80 chars and reduce vertical space. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:213: for stream in self.stream_to_events: for stream, relevant_events in self.stream_to_events.iteritems(): https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:221: cadence = self.__InferCadence(relevant_events) Nit: a blank line before this one would look nice. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:225: drift_time_stats = self.__ComputeDrifTimeStats(relevant_events, cadence) drift_time, rendering_length_error = self.__ComputeDrifTimeStats(relevant_events, cadence)
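To make the two patterns from this round concrete, here is a minimal, self-contained sketch, illustrative only and not the CL's actual code (FakeEvent stands in for a telemetry trace event): grouping events by their 'Serial' stream id with dict.setdefault, and bucketing a cadence list into a ticks-to-frame-count distribution.

class FakeEvent(object):
  """Stand-in for a telemetry trace event (hypothetical)."""

  def __init__(self, args):
    self.args = args


def GroupEventsByStream(events):
  """Maps each 'Serial' stream id to the list of events on that stream."""
  stream_to_events = {}
  for event in events:
    if not event.args or 'Serial' not in event.args:
      continue  # Not a stream event.
    events_for_stream = stream_to_events.setdefault(event.args['Serial'], [])
    events_for_stream.append(event)
  return stream_to_events


def FrameDistribution(cadence):
  """Buckets the cadence: vsync ticks -> number of frames shown that long."""
  buckets = {}
  for ticks in cadence:
    ticks_so_far = buckets.setdefault(ticks, 0)
    buckets[ticks] = ticks_so_far + 1
  return buckets


events = [FakeEvent({'Serial': '1blob'}), FakeEvent({}),
          FakeEvent({'Serial': '1blob'})]
assert len(GroupEventsByStream(events)['1blob']) == 2
assert FrameDistribution([2, 1, 2, 3]) == {1: 1, 2: 2, 3: 1}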
I addressed all comments. Also patch set 9 is out with the unittests! Thanks Christian https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:28: # The events of interest have a 'Serial' argument which On 2015/09/18 08:24:02, phoglund_chrome wrote: > Nice! Put this comment into the docstring, and use > Args: and Returns: sections where appropriate. > > """Build a dictionary of events indexed by stream. > > The events of interest have a 'Serial' argument which > represents the stream id. The 'Serial' argument identifies > the source or remote nature of the stream with a single > digit 0 or 1 as well as the URL blob associated with the > video stream. So stream::=(0|1)blob. > > Args: > events: Telemetry events. > > Returns: > A dict of stream IDs mapped to events on that stream. > """ Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:42: if stream in stream_to_events: On 2015/09/18 08:24:01, phoglund_chrome wrote: > I think > > events_for_stream = stream_to_events.setdefault(stream, []) > events_for_stream.append(event) > > reads better than 42-44. Ok, got it this time, that's a pretty useful tidbit! https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:48: def __InferCadence(self, relevant_events): On 2015/09/18 08:24:02, phoglund_chrome wrote: > In general, you're using Infer and Compute a lot. These don't add much > information; when one calls a method Compute or calls a class Agent, that's > often a smell. Can you find more descriptive names? If not, Infer and Compute > are fine if it makes the calling code read better. I changed it all to the very short 'Get' that everyone knows and loves, no more fancy wording for me ;-) https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:56: if (event.args and ideal_render_instant in event.args On 2015/09/18 08:24:02, phoglund_chrome wrote: > Rewrite this similarly to 37-45 above. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:69: # If the overall display distribution is A1:A2:..:An, On 2015/09/18 08:24:01, phoglund_chrome wrote: > Fold into docstring. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:73: bucket = {} On 2015/09/18 08:24:01, phoglund_chrome wrote: > Well, it's actually several buckets, right? Call it buckets. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:75: if ticks in bucket: On 2015/09/18 08:24:01, phoglund_chrome wrote: > Or replace 75-78 with > > ticks_so_far = buckets.setdefault(ticks, 0) > buckets[ticks] = ticks_so_far + 1 Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme...
tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:81: def __InferFpsFromCadence(self, bucket): On 2015/09/18 08:24:01, phoglund_chrome wrote: > The name "bucket" doesn't make a lot of sense in the context of this method. The > _caller_ happens to have its data stored in a bucket, but this method doesn't > really care about that. > > The bucket is really a source-to-output distribution, right? > source_to_output_dist would be a much better name, but it's maybe a bit long. > What about frame_distribution? Either way, names are important and should be > maximally descriptive in their current context. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:81: def __InferFpsFromCadence(self, bucket): On 2015/09/18 08:24:01, phoglund_chrome wrote: > The name "bucket" doesn't make a lot of sense in the context of this method. The > _caller_ happens to have its data stored in a bucket, but this method doesn't > really care about that. > > The bucket is really a source-to-output distribution, right? > source_to_output_dist would be a much better name, but it's maybe a bit long. > What about frame_distribution? Either way, names are important and should be > maximally descriptive in their current context. frame_distribution it is then ;-) https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:83: # The mean ratio is the barycenter On 2015/09/18 08:24:02, phoglund_chrome wrote: > Nit: ALWAYS end comments with . You need to get used to this! Yes indeed, working on it. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:84: number_frames = sum([bucket[ticks] for ticks in bucket]) On 2015/09/18 08:24:00, phoglund_chrome wrote: > This is simpler to write as sum(bucket.values()). Thanks for the hint. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:88: return fps On 2015/09/18 08:24:02, phoglund_chrome wrote: > Nit: inline the fps variable. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:90: def __InferFrozenFramesEvents(self, bucket): On 2015/09/18 08:24:02, phoglund_chrome wrote: > CountFrozenFrames? How about __GetFrozenFramesReports because we create a record with 2 dimensions? https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:92: # For simplicity we count as freezing the frames On 2015/09/18 08:24:01, phoglund_chrome wrote: > Fold into docstring Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:96: for ticks in bucket: On 2015/09/18 08:24:02, phoglund_chrome wrote: > This is fine, but you can also do > > frozen_ticks = [ticks for ticks in frame_distribution if ticks >= > FROZEN_THRESHOLD] > for frozen_tick in frozen_ticks: > logging.debug(...) > frozen_frames.append({ ... }) > > which is a bit more pythonic. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme...
tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:98: logging.error('%s frames not updated after %s vsyncs', On 2015/09/18 08:24:02, phoglund_chrome wrote: > I don't think you should log an error in your telemetry test, that should be for > errors in the test itself I believe. A debug log is fine. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:101: {'frozen_frames': ticks -1, On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: space around - Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:102: 'occurences': bucket[ticks]}) On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: it's spelled occurrences Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:107: # As mentioned earlier, we count for frozen anything above 6 vsync On 2015/09/18 08:24:01, phoglund_chrome wrote: > Fold into docstring Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:121: 8 * (number_frozen_frames - 4)) On 2015/09/18 08:24:02, phoglund_chrome wrote: > Nit: doesn't this fit in the above line? Indeed it does! https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:125: """Determines if a stream is remote or local.""" On 2015/09/18 08:24:01, phoglund_chrome wrote: > The docstring is a bit overkill here. I am not sure I should have no doc string?? https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:128: def __ComputeDrifTimeStats(self, relevant_events, cadence): On 2015/09/18 08:24:02, phoglund_chrome wrote: > DriftTimeStats __GetDriftTimeStats to be consistent? https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:130: # This method will calculate drift_time stats, On 2015/09/18 08:24:00, phoglund_chrome wrote: > Fold into docstring. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:135: # std_dev_drift_time = standard deviation of drit time On 2015/09/18 08:24:02, phoglund_chrome wrote: > Nit: drift Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:145: # Skip to next event because looking for a source frame. On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: we're looking Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:148: event.args['Actual Render Begin'] - current_ideal_render) On 2015/09/18 08:24:02, phoglund_chrome wrote: > actual_render_begin = event.args['Actual Render Begin'] > drift_time.append(actual_render_begin - current_ideal_render) > > reads better. Sure. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... 
tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:148: event.args['Actual Render Begin'] - current_ideal_render) On 2015/09/18 08:24:02, phoglund_chrome wrote: > actual_render_begin = event.args['Actual Render Begin'] > drift_time.append(actual_render_begin - current_ideal_render) > > reads better. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:152: index += 1 On 2015/09/18 08:24:02, phoglund_chrome wrote: > It would be nice to get rid of this index. However, I don't understand the > connection between an event and an entry in the cadence list. How do you know > you should use index 0 for the first source frame, and 1 for the second? Are you > sure there aren't going to be more events than entries in the cadence list? Patrik, the correlation is: An event is defined as follows: TRACE_EVENT_BEGIN2("webrtc", "WebMediaPlayerMS::UpdateCurrentFrame", "Actual Render Begin", deadline_min.ToInternalValue(), "Actual Render End", deadline_max.ToInternalValue()); TRACE_EVENT_END2("webrtc", "WebMediaPlayerMS::UpdateCurrentFrame", "Ideal Render Instant", render_time.ToInternalValue(), "Serial", serial_); Cadence is locked on unique Ideal Render Instant, that is to say cadence tells you exactly how many VSYNCs each source frame that was rendered used. Drift time per se is also locked onto unique Ideal Render Instant, since we don't calculate drift_time for the repeat output frames. So by definition len(Drift time) = len(Cadence), which means the index is an accurate position locator for all elements of Cadence and Drift time. My design doc has more info if you want to dig deeper: https://docs.google.com/document/d/1TX0NnT_xxvnRNDkw6P2kxunXyK_I8OChkLb4jehXR... https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:158: # Some stats on drift time. On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: in general, blank line before comments inside methods. Apply throughout. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:158: # Some stats on drift time. On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: in general, blank line before comments inside methods. Apply throughout. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:159: mean_drift_time = numpy.mean(drift_time) On 2015/09/18 08:24:01, phoglund_chrome wrote: > These are all computed from drift_time; do this outside this method. Since > they're so short, you can do them in the InferTimeStats method (maybe even > inline in the stats dict)? I need access to mean_drift_time to do drift time normalization, so that one cannot be inlined unless I calculate it twice, which I prefer not to. I can inline the std_dev_drift_time in the stats dict though. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:162: result = (drift_time, norm_drift_time, mean_drift_time, std_dev_drift_time, On 2015/09/18 08:24:02, phoglund_chrome wrote: > Just write this as > > return drift_time, rendering_length_error > > (if you get rid of the mean, stddev, etc. at least) Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme...
tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:166: def __ComputeSmoothnesStats(self, norm_drift_time): On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: Smoothness Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:181: # Calculate smoothness metric. On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: again, blank line before comment. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:185: # Minimun smoothness_score value allowed is zero. On 2015/09/18 08:24:02, phoglund_chrome wrote: > Minimum Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:188: results = (percent_badly_oos, percent_out_of_sync, On 2015/09/18 08:24:02, phoglund_chrome wrote: > return percent_badly_oos, percent_out_of_sync, smoothness_score Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:198: # Calculate freezing metric. On 2015/09/18 08:24:01, phoglund_chrome wrote: > Nit: in general, try to pull up comments so you fill out the 80 chars and reduce > vertical space. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:221: cadence = self.__InferCadence(relevant_events) On 2015/09/18 08:24:02, phoglund_chrome wrote: > Nit: a blank line before this one would look nice. Done. https://codereview.chromium.org/1254023003/diff/140001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:225: drift_time_stats = self.__ComputeDrifTimeStats(relevant_events, cadence) On 2015/09/18 08:24:00, phoglund_chrome wrote: > drift_time, rendering_length_error = > self.__ComputeDrifTimeStats(relevant_events, cadence) Done.
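As a concrete companion to the drift-time discussion above: drift time is 'Actual Render Begin' minus the frame's unique 'Ideal Render Instant', computed once per source frame, and the normalized values are centered on the mean. A hedged sketch, assuming for simplicity that both arguments arrive in one dict per event (in the real trace they are split across the TRACE_EVENT begin/end pair quoted above):

import numpy


def DriftTimes(event_args):
  """One drift time per source frame; repeats of an ideal instant are the
  cadence repeats and are skipped."""
  drift_times = []
  current_ideal_render = None
  for args in event_args:
    if args['Ideal Render Instant'] == current_ideal_render:
      continue  # Repeat output frame; wait for the next source frame.
    current_ideal_render = args['Ideal Render Instant']
    drift_times.append(args['Actual Render Begin'] - current_ideal_render)
  return drift_times


def NormalizedDriftTimes(drift_times):
  """Centers drift times on their mean, as the smoothness stats require."""
  mean_drift_time = numpy.mean(drift_times)
  return [d - mean_drift_time for d in drift_times]


assert DriftTimes([
    {'Ideal Render Instant': 100, 'Actual Render Begin': 105},
    {'Ideal Render Instant': 100, 'Actual Render Begin': 117},
    {'Ideal Render Instant': 120, 'Actual Render Begin': 121}]) == [5, 1]
assert NormalizedDriftTimes([5, 1]) == [2.0, -2.0]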
Now that is damn near perfect. lgtm https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:59: Nit: this would be an exception to the rule of "blank line before comment"; don't do it if the comment is right at the start of a new indent level. So, remove this blank line :) https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:118: Nit: remove blank line https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:119: """ Nit: remove blank line https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:242: return (drift_time, rendering_length_error) Nit: It's fine to drop the () here. https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats_unittest.py (right): https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats_unittest.py:121: expected_cadence = [2, 1, 2, 3, 3, 3, 1] Maybe clarify the expected cadence and how it relates to the input data a bit: "We have two frames on the first ideal instant 1663780179998, one on 1663780197998, and so on." https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:113: # make the output distribution a list since Nit: m -> M, end with .
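For readers of the unit test, the cadence relation described in that comment fits in a tiny sketch (the timestamps below are made up; the real test uses the 1663780179998-style instants quoted above): cadence[i] is the number of consecutive events sharing the i-th unique 'Ideal Render Instant'.

def Cadence(ideal_render_instants):
  """Counts consecutive repeats of each unique ideal render instant."""
  cadence = []
  last_instant = None
  for instant in ideal_render_instants:
    if instant == last_instant:
      cadence[-1] += 1
    else:
      cadence.append(1)
      last_instant = instant
  return cadence


# Two frames on the first ideal instant, one on the second, and so on:
assert Cadence([10, 10, 20, 30, 30]) == [2, 1, 2]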
Patch Set 10 is out with the new checks for the 'Serial' field of traces and minor fixes from previous comments. Thanks. Christian https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:59: On 2015/09/24 07:11:46, phoglund_chrome wrote: > Nit: this would be an exception to the rule of "blank line before comment"; > don't do it if the comment is right at the start of a new indent level. So, > remove this blank line :) Done. https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:118: On 2015/09/24 07:11:46, phoglund_chrome wrote: > Nit: remove blank line Done. https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:119: """ On 2015/09/24 07:11:46, phoglund_chrome wrote: > Nit: remove blank line Done. https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:242: return (drift_time, rendering_length_error) On 2015/09/24 07:11:46, phoglund_chrome wrote: > Nit: It's fine to drop the () here. Done. https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats_unittest.py (right): https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats_unittest.py:121: expected_cadence = [2, 1, 2, 3, 3, 3, 1] On 2015/09/24 07:11:46, phoglund_chrome wrote: > Maybe clarify the expected cadence and how it relates to the input data a bit: > "We have two frames on the first ideal instant 1663780179998, one on > 1663780197998, and so on." Done. https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/160001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:113: # make the output distribution a list since On 2015/09/24 07:11:47, phoglund_chrome wrote: > Nit: m -> M, end with . Done.
nednguyen@google.com changed reviewers: + eakuefner@chromium.org
+Ethan: can you take a pass at this?
On 2015/09/24 21:49:36, nednguyen wrote: > +Ethan: can you take a pass at this? Ethan, Do you have time to review this CL? I wish to get this CL landed soon. Thanks.
The CQ bit was checked by cpaulin@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from phoglund@chromium.org Link to the patchset: https://codereview.chromium.org/1254023003/#ps180001 (title: "Accomodate 'Serial' field definition change and minor fixes.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/180001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/180001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: android_arm64_dbg_recipe on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/android_arm64_d...) android_chromium_gn_compile_dbg on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/android_chromiu...) android_clang_dbg_recipe on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/android_clang_d...) android_compile_dbg on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/android_compile...) cast_shell_android on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/cast_shell_andr...) cast_shell_linux on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/cast_shell_linu...) chromeos_amd64-generic_chromium_compile_only_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromeos_amd64-...) chromeos_daisy_chromium_compile_only_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromeos_daisy_...) chromium_presubmit on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...) linux_chromium_asan_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_chromeos_compile_dbg_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_chromeos_ozone_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_chromeos_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_clobber_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_compile_dbg_32_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_gn_chromeos_rel on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) ios_dbg_simulator_ninja on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios_dbg_simulator...) ios_rel_device_ninja on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios_rel_device_ni...) mac_chromium_compile_dbg_ng on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_comp...) mac_chromium_gn_rel on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_gn_r...) mac_chromium_rel_ng on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...) linux_perf_bisect on tryserver.chromium.perf (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.perf/builders/linux_perf_bisec...) 
mac_perf_bisect on tryserver.chromium.perf (JOB_FAILED, no build URL) win8_chromium_ng on tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win8_chromium_ng/...) win_chromium_rel_ng on tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_rel_...) win_chromium_x64_rel_ng on tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_x64_...)
Sorry for the delay, some comments. https://codereview.chromium.org/1254023003/diff/180001/tools/perf/benchmarks/... File tools/perf/benchmarks/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/perf/benchmarks/... tools/perf/benchmarks/webrtc_rendering.py:38: return bool(RE_BENCHMARK_VALUES.match(value.name)) It seems like there is no need to use regular expressions here. Can you just write return value.name.startswith('WebRTCRendering_') instead? https://codereview.chromium.org/1254023003/diff/180001/tools/perf/page_sets/w... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:27: nit: two newlines between top-level definitions https://codereview.chromium.org/1254023003/diff/180001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:29: nit: no newline before docstring https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:13: nit: 2 newlines instead of 3 https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:20: WebRtcRendering_drift_time usec The units of the values you add should match entries in unit-info.json to be compatible with the perf dashboard: https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... The file is kind of a mess but we want to avoid adding new units to it if at all possible. For 'usec' you probably want 'us' which is already defined, and for 'FPS' you probably want 'fps'. '%' will work as long as the improvement direction for those values is down (smaller is better). Also, is it intentional that the capitalization is different for the name of this metric in particular compared to the rest (WebRtc vs WebRTC)? https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:39: def IsEventInInteraction(event, interaction): Ned has recently added IsEventInInteractions here: https://codereview.chromium.org/1357423008 Please prefer that over this. https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:66: none_reason = "No WebMediaPlayerMS::UpdateCurrentFrame event found" nit: single quotes
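The startswith suggestion in one place, as a hedged sketch (the class name and the surrounding benchmark scaffolding are assumptions; only the predicate body comes from the review):

class WebRTCRenderingBenchmark(object):  # Would subclass benchmark.Benchmark.

  @classmethod
  def ValueCanBeAddedPredicate(cls, value, is_first_result):
    del is_first_result  # Unused in this sketch.
    return value.name.startswith('WebRTCRendering_')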
The CQ bit was checked by cpaulin@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/180001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/180001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: android_compile_dbg on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/android_compile...) linux_chromium_chromeos_ozone_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_clobber_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_compile_dbg_32_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) ios_dbg_simulator_ninja on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios_dbg_simulator...) ios_rel_device_ninja on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios_rel_device_ni...) mac_chromium_compile_dbg_ng on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_comp...) mac_chromium_gn_rel on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_gn_r...) mac_chromium_rel_ng on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...) mac_perf_bisect on tryserver.chromium.perf (JOB_FAILED, no build URL)
https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:89: ideal_render_instant = 'Ideal Render Instant' These constant strings should be declared at the top of the file, something like: IDEAL_RENDER_INSTANT = 'Ideal Render Instant' https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:332: 'drift_time': drift_time, I would prefer you create a lightweight "TimeStats" class whose drift_time, mean_drift_time, etc. are member variables instead of using a dict. This way, if you make a typo, the pylint tool should help you discover it early. https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:342: print "Stats for remote stream {0}: {1}".format(stream, stats) Please remove the debug print https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:67: logging.info('rendering stats : %s', rendering_stats) Remove this logging
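A sketch of the constants change being requested, using the trace argument names quoted elsewhere in this review (the helper function is hypothetical):

IDEAL_RENDER_INSTANT = 'Ideal Render Instant'
ACTUAL_RENDER_BEGIN = 'Actual Render Begin'
ACTUAL_RENDER_END = 'Actual Render End'
SERIAL = 'Serial'


def GetIdealRenderInstant(event):
  # Callers index event.args through the named constant instead of
  # scattering the string literal through the file.
  return event.args[IDEAL_RENDER_INSTANT]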
Patch set 12 is done with all comments addressed, hopefully we can land this CL soon. https://codereview.chromium.org/1254023003/diff/180001/tools/perf/benchmarks/... File tools/perf/benchmarks/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/perf/benchmarks/... tools/perf/benchmarks/webrtc_rendering.py:38: return bool(RE_BENCHMARK_VALUES.match(value.name)) On 2015/09/28 19:49:48, eakuefner wrote: > It seems like there is no need to use regular expressions here. Can you just > write > return value.name.startswith('WebRTCRendering_') instead? Done. https://codereview.chromium.org/1254023003/diff/180001/tools/perf/page_sets/w... File tools/perf/page_sets/webrtc_rendering_measurement.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:27: On 2015/09/28 19:49:48, eakuefner wrote: > nit: two newlines between top-level definitions Done. https://codereview.chromium.org/1254023003/diff/180001/tools/perf/page_sets/w... tools/perf/page_sets/webrtc_rendering_measurement.py:29: On 2015/09/28 19:49:48, eakuefner wrote: > nit: no newline before docstring Done. https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:89: ideal_render_instant = 'Ideal Render Instant' On 2015/09/28 20:56:43, nednguyen wrote: > These constant strings should be declared at the top of the file, s.t like: > IDEAL_RENDER_INSTANT = 'Ideal Render Instant' Done. https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:332: 'drift_time': drift_time, On 2015/09/28 20:56:43, nednguyen wrote: > I would prefer you create a light weight "TimeStats" class whose drift_time, > mean_drift_time,..etc are member variable instead of using a dict. This way, if > you make a typo, the pylint tool should help you discover it early. Sure, opted for this class: class TimeStats(object): """Stats container for webrtc rendering metrics.""" def __init__(self, **kwargs): for name in STATS_GATHERED: setattr(self, name, kwargs.get(name, None)) https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:342: print "Stats for remote stream {0}: {1}".format(stream, stats) On 2015/09/28 20:56:43, nednguyen wrote: > Please remove the debug print Sure https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:13: On 2015/09/28 19:49:48, eakuefner wrote: > nit: 2 newlines instead of 3 Done. https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:20: WebRtcRendering_drift_time usec On 2015/09/28 19:49:48, eakuefner wrote: > The units of the values you add should match entries in unit-info.json to be > compatible with the perf dashboard: > https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... 
> The file is kind of a mess but we want to avoid adding new units to it if at all > possible. > > For 'usec' you probably want 'us' which is already defined, and for 'FPS' you > probably want 'fps'. '%' will work as long as the improvement direction for > those values is down (smaller is better). > > Also, is it intentional that the capitalization is different for the name of > this metric in particular compared to the rest (WebRtc vs WebRTC)? Ethan, I fixed the WebRtcRendering_ occurrences, also changed 'usec' to 'us'. For the '%' case, a low percentage is sometimes good, as for WebRTCRendering_percent_out_of_sync, but for WebRTCRendering_smoothness_score higher is better; how do you deal with this? Thanks https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:39: def IsEventInInteraction(event, interaction): On 2015/09/28 19:49:48, eakuefner wrote: > Ned has recently added IsEventInInteractions here: > https://codereview.chromium.org/1357423008 > > Please prefer that over this. Done. https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:66: none_reason = "No WebMediaPlayerMS::UpdateCurrentFrame event found" On 2015/09/28 19:49:48, eakuefner wrote: > nit: single quotes Done. https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:67: logging.info('rendering stats : %s', rendering_stats) On 2015/09/28 20:56:43, nednguyen wrote: > Remove this logging Done.
lgtm Please wait for Ethan's stamp. https://codereview.chromium.org/1254023003/diff/220001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/220001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:29: def __init__(self, **kwargs): Don't use def __init__(self, **kwargs), just make it def __init__(self, drift_time, mean_drift_time, std_dev_drift_time, percent_badly_out_of_sync,..): self.drift_time = drift_time self.mean_drift_time = mean_drift_time ...
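Spelled out, Ned's suggestion looks roughly like the sketch below; the field list is reconstructed from metric names mentioned in this review, so treat it as an approximation of the final patch:

class TimeStats(object):
  """Stats container for the WebRTC rendering metrics."""

  def __init__(self, drift_time=None, mean_drift_time=None,
               std_dev_drift_time=None, percent_badly_out_of_sync=None,
               percent_out_of_sync=None, smoothness_score=None,
               freezing_score=None, rendering_length_error=None, fps=None,
               frozen_frames=None):
    self.drift_time = drift_time
    self.mean_drift_time = mean_drift_time
    self.std_dev_drift_time = std_dev_drift_time
    self.percent_badly_out_of_sync = percent_badly_out_of_sync
    self.percent_out_of_sync = percent_out_of_sync
    self.smoothness_score = smoothness_score
    self.freezing_score = freezing_score
    self.rendering_length_error = rendering_length_error
    self.fps = fps
    self.frozen_frames = frozen_frames

With explicit attributes, pylint can statically flag a misspelled constructor argument or field access, which a setattr loop over **kwargs hides.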
On 2015/09/29 15:59:07, nednguyen wrote: > lgtm > > Please wait for Ethan's stamp. > > https://codereview.chromium.org/1254023003/diff/220001/tools/telemetry/teleme... > File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py > (right): > > https://codereview.chromium.org/1254023003/diff/220001/tools/telemetry/teleme... > tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:29: def > __init__(self, **kwargs): > Don't use def __init__(self, **kwargs), just make it > > def __init__(self, drift_time, mean_drift_time, std_dev_drift_time, > percent_badly_out_of_sync,..): > self.drift_time = drift_time > self.mean_drift_time = mean_drift_time > ... Done in patch set 13
Patch set 13 is out https://codereview.chromium.org/1254023003/diff/220001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py (right): https://codereview.chromium.org/1254023003/diff/220001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_stats.py:29: def __init__(self, **kwargs): On 2015/09/29 15:59:07, nednguyen wrote: > Don't use def __init__(self, **kwargs), just make it > > def __init__(self, drift_time, mean_drift_time, std_dev_drift_time, > percent_badly_out_of_sync,..): > self.drift_time = drift_time > self.mean_drift_time = mean_drift_time > ... Done.
lgtm https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:20: WebRtcRendering_drift_time usec On 2015/09/29 at 15:50:25, cpaulin wrote: > On 2015/09/28 19:49:48, eakuefner wrote: > > The units of the values you add should match entries in unit-info.json to be > > compatible with the perf dashboard: > > https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... > > The file is kind of a mess but we want to avoid adding new units to it if at all > > possible. > > > > For 'usec' you probably want 'us' which is already defined, and for 'FPS' you > > probably want 'fps'. '%' will work as long as the improvement direction for > > those values is down (smaller is better). > > > > Also, is it intentional that the capitalization is different for the name of > > this metric in particular compared to the rest (WebRtc vs WebRTC)? > > Ethan, > I fixed the WebRtcRendering_ occurences, also changed 'usec' to 'us'. > For the '%' case, sometimes low percentage is good like for WebRTCRendering_percent_out_of_sync, but for WebRTCRendering_smoothness_score higher is better, how do you deal with this? Thanks Great question. For a little background, we're currently midway through the process of decoupling improvement_direction from unit-info.json, and moving towards specifying all improvement directions in value callsites. The improvement_direction field already exists in values, so you can use this functionality by writing: from telemetry.value import improvement_direction to bring in the improvement direction module, and then setting improvement_direction=improvement_direction.UP or improvement_direction.DOWN at the sites where you're constructing values. For now, if you don't explicitly set an improvement direction, we'll still use the values from unit-info.json but it would be great if you could specify an improvement_direction for all your values. I need to do some minor plumbing on the perf dashboard to make this work, and I've opened https://github.com/catapult-project/catapult/issues/1541 for that. I should be able to get this done today, and I'll CC you on the CL.
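A hedged sketch of the value-construction pattern eakuefner describes (the ScalarValue signature is assumed from the telemetry tree of that era; double-check against tools/telemetry before copying):

from telemetry.value import improvement_direction
from telemetry.value import scalar


def AddSmoothnessScore(results, page, score):
  # Higher smoothness is better, so the direction is UP; the out-of-sync
  # percentages would pass improvement_direction.DOWN instead.
  results.AddValue(scalar.ScalarValue(
      page, 'WebRTCRendering_smoothness_score', '%', score,
      improvement_direction=improvement_direction.UP))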
Added improvement_direction to all metrics output and removed frame_distribution from metrics output as it is not really a metric per se. Last patch I hope. Thanks for the reviews https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:20: WebRtcRendering_drift_time usec On 2015/09/29 18:09:27, eakuefner wrote: > On 2015/09/29 at 15:50:25, cpaulin wrote: > > On 2015/09/28 19:49:48, eakuefner wrote: > > > The units of the values you add should match entries in unit-info.json to be > > > compatible with the perf dashboard: > > > > https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... > > > The file is kind of a mess but we want to avoid adding new units to it if at > all > > > possible. > > > > > > For 'usec' you probably want 'us' which is already defined, and for 'FPS' > you > > > probably want 'fps'. '%' will work as long as the improvement direction for > > > those values is down (smaller is better). > > > > > > Also, is it intentional that the capitalization is different for the name of > > > this metric in particular compared to the rest (WebRtc vs WebRTC)? > > > > Ethan, > > I fixed the WebRtcRendering_ occurences, also changed 'usec' to 'us'. > > For the '%' case, sometimes low percentage is good like for > WebRTCRendering_percent_out_of_sync, but for WebRTCRendering_smoothness_score > higher is better, how do you deal with this? Thanks > > Great question. For a little background, we're currently midway through the > process of decoupling improvement_direction from unit-info.json, and moving > towards specifying all improvement directions in value callsites. > > The improvement_direction field already exists in values, so you can use this > functionality by writing: > > from telemetry.value import improvement_direction > > to bring in the improvement direction module, and then setting > improvement_direction=improvement_direction.UP or improvement_direction.DOWN at > the sites where you're constructing values. For now, if you don't explicitly set > an improvement direction, we'll still use the values from unit-info.json but it > would be great if you could specify an improvement_direction for all your > values. > > I need to do some minor plumbing on the perf dashboard to make this work, and > I've opened https://github.com/catapult-project/catapult/issues/1541 for that. I > should be able to get this done today, and I'll CC you on the CL. > Done.
The CQ bit was checked by cpaulin@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from phoglund@chromium.org, nednguyen@google.com, eakuefner@chromium.org Link to the patchset: https://codereview.chromium.org/1254023003/#ps280001 (title: "Added improvement direction to all result metrics and removed frame_distribution from result.html")
The CQ bit was unchecked by cpaulin@chromium.org
The CQ bit was checked by cpaulin@chromium.org
The CQ bit was unchecked by eakuefner@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/280001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/280001
On 2015/09/29 at 18:59:05, cpaulin wrote: > Added improvement_direction to all metrics output and removed frame_distribution from metrics output as it is not really a metric per se. > Last patch I hope. Thanks for the reviews > > https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... > File tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py (right): > > https://codereview.chromium.org/1254023003/diff/180001/tools/telemetry/teleme... > tools/telemetry/telemetry/web_perf/metrics/webrtc_rendering_timeline.py:20: WebRtcRendering_drift_time usec > On 2015/09/29 18:09:27, eakuefner wrote: > > On 2015/09/29 at 15:50:25, cpaulin wrote: > > > On 2015/09/28 19:49:48, eakuefner wrote: > > > > The units of the values you add should match entries in unit-info.json to be > > > > compatible with the perf dashboard: > > > > > > https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... > > > > The file is kind of a mess but we want to avoid adding new units to it if at > > all > > > > possible. > > > > > > > > For 'usec' you probably want 'us' which is already defined, and for 'FPS' > > you > > > > probably want 'fps'. '%' will work as long as the improvement direction for > > > > those values is down (smaller is better). > > > > > > > > Also, is it intentional that the capitalization is different for the name of > > > > this metric in particular compared to the rest (WebRtc vs WebRTC)? > > > > > > Ethan, > > > I fixed the WebRtcRendering_ occurences, also changed 'usec' to 'us'. > > > For the '%' case, sometimes low percentage is good like for > > WebRTCRendering_percent_out_of_sync, but for WebRTCRendering_smoothness_score > > higher is better, how do you deal with this? Thanks > > > > Great question. For a little background, we're currently midway through the > > process of decoupling improvement_direction from unit-info.json, and moving > > towards specifying all improvement directions in value callsites. > > > > The improvement_direction field already exists in values, so you can use this > > functionality by writing: > > > > from telemetry.value import improvement_direction > > > > to bring in the improvement direction module, and then setting > > improvement_direction=improvement_direction.UP or improvement_direction.DOWN at > > the sites where you're constructing values. For now, if you don't explicitly set > > an improvement direction, we'll still use the values from unit-info.json but it > > would be great if you could specify an improvement_direction for all your > > values. > > > > I need to do some minor plumbing on the perf dashboard to make this work, and > > I've opened https://github.com/catapult-project/catapult/issues/1541 for that. I > > should be able to get this done today, and I'll CC you on the CL. > > > > Done. Please don't land this until the dashboard fix is landed. In the meantime you can try a CQ dry run to see whether the CL passes the trybots.
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/300001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/300001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: mac_perf_bisect on tryserver.chromium.perf (JOB_FAILED, no build URL)
The CQ bit was checked by cpaulin@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from eakuefner@chromium.org, nednguyen@google.com, phoglund@chromium.org Link to the patchset: https://codereview.chromium.org/1254023003/#ps300001 (title: "Fixed merge conflicts in tools/telemetry/telemetry/web_perf/timeline_based_measurement.py")
The CQ bit was unchecked by cpaulin@chromium.org
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
The CQ bit was checked by cpaulin@chromium.org
The CQ bit was unchecked by cpaulin@chromium.org
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/300001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/300001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: mac_perf_bisect on tryserver.chromium.perf (JOB_FAILED, no build URL)
lgtm
On 2015/10/05 20:52:10, qiangchenC wrote: > lgtm What is the status of this change?
On 2015/10/08 00:21:42, nednguyen(REVIEW IN OTHER ACC) wrote: > On 2015/10/05 20:52:10, qiangchenC wrote: > > lgtm > > What is the status of this change? I get no traces anymore: https://code.google.com/p/chromium/issues/detail?id=539660
nednguyen@google.com changed reviewers: + zhenw@chromium.org
https://codereview.chromium.org/1254023003/diff/320001/tools/perf/benchmarks/... File tools/perf/benchmarks/webrtc_rendering.py (right): https://codereview.chromium.org/1254023003/diff/320001/tools/perf/benchmarks/... tools/perf/benchmarks/webrtc_rendering.py:20: filter_string='webrtc, webkit.console, blink.console') Removing the space here should fix the problem. You can follow the style used in other TBM benchmarks: https://code.google.com/p/chromium/codesearch#chromium/src/tools/perf/measure... That is, something like webrtc_categories = ['webrtc', 'webkit.console', 'blink.console'] cc_filter = tracing_category_filter.TracingCategoryFilter(','.join(webrtc_categories)) Meanwhile, I will fix the tracing category filter bug.
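The suggested workaround in runnable form, a sketch assuming the telemetry import path of that time (tools/telemetry/telemetry/timeline); note the join deliberately puts no space after the commas:

from telemetry.timeline import tracing_category_filter

webrtc_categories = ['webrtc', 'webkit.console', 'blink.console']
cc_filter = tracing_category_filter.TracingCategoryFilter(
    filter_string=','.join(webrtc_categories))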
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/340001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/340001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: mac_perf_bisect on tryserver.chromium.perf (JOB_FAILED, no build URL)
Chatted with Christian offline -- just so there's no confusion, from my perspective, this CL is okay to land.
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/380001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/380001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: mac_perf_bisect on tryserver.chromium.perf (JOB_FAILED, no build URL)
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/400001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/400001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: mac_perf_bisect on tryserver.chromium.perf (JOB_FAILED, no build URL)
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/400001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/400001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/420001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/420001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_perf_bisect on tryserver.chromium.perf (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.perf/builders/win_perf_bisect/...)
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/440001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/440001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by cpaulin@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from eakuefner@chromium.org, nednguyen@google.com, phoglund@chromium.org, qiangchen@chromium.org Link to the patchset: https://codereview.chromium.org/1254023003/#ps440001 (title: "Remove the flag enable-rtc-smoothness-algorithm now as it crashes the browser")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/440001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/440001
Message was sent while issue was closed.
Committed patchset #23 (id:440001)
Message was sent while issue was closed.
Patchset 23 (id:??) landed as https://crrev.com/4db46d73b08de41f0d838408b9b79b8844769489 Cr-Commit-Position: refs/heads/master@{#353615}
Message was sent while issue was closed.
A revert of this CL (patchset #23 id:440001) has been created in https://codereview.chromium.org/1401903002/ by treib@chromium.org. The reason for reverting is: Some of the new tests fail on WinXP (testNegativeSmoothnessScoreChangedToZero and testNegativeFrezingScoreChangedToZero), starting here: http://build.chromium.org/p/chromium.win/builders/XP%20Tests%20%281%29/builds....
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/460001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/460001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_chromium_x64_rel_ng on tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_x64_...)
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/460001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/460001
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by cpaulin@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from eakuefner@chromium.org, nednguyen@google.com, phoglund@chromium.org, qiangchen@chromium.org Link to the patchset: https://codereview.chromium.org/1254023003/#ps460001 (title: "Remove mocks to fix border effects on subsequent tests")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1254023003/460001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1254023003/460001
Message was sent while issue was closed.
Committed patchset #24 (id:460001)
Message was sent while issue was closed.
Patchset 24 (id:??) landed as https://crrev.com/48cd49f762c805387937845ca64528f1dbc975ba Cr-Commit-Position: refs/heads/master@{#354111}
Message was sent while issue was closed.
Description was changed (the from and to texts shown are identical): ========== Telemetry Test for WebRTC Rendering. Design doc: https://docs.google.com/document/d/1TX0NnT_xxvnRNDkw6P2kxunXyK_I8OChkLb4jehXR... BUG= CQ_EXTRA_TRYBOTS=tryserver.chromium.perf:linux_perf_bisect;tryserver.chromium.perf:mac_10_10_perf_bisect;tryserver.chromium.perf:win_perf_bisect;tryserver.chromium.perf:android_nexus5_perf_bisect Committed: https://crrev.com/4db46d73b08de41f0d838408b9b79b8844769489 Cr-Commit-Position: refs/heads/master@{#353615} Committed: https://crrev.com/48cd49f762c805387937845ca64528f1dbc975ba Cr-Commit-Position: refs/heads/master@{#354111} ==========
The CQ bit was checked by cpaulin@chromium.org to run a CQ dry run
Message was sent while issue was closed.
Patchset #25 (id:480001) has been deleted