Created: 4 years, 3 months ago by dproy-google
Modified: 3 years, 9 months ago
CC: bckenny, catapult-reviews_chromium.org, tracing-review_chromium.org, ulan
Target Ref: refs/heads/master
Project: catapult
Visibility: Public
Description
Add Estimated Input Latency - EQT 90th Percentile definition

Added an implementation of EIL that mirrors the Lighthouse definition.
Explicitly called the metric EQT 90th Percentile to be clear about what
I'm implementing. For a quick overview of the different definitions of
EIL that have been proposed, see
https://docs.google.com/a/chromium.org/document/d/13XKD4XQ5a_XA_or9ePZnD0rIJYRFUbdRH1kNAtOQyIc/edit?usp=sharing
BUG=chromium:631206
Patch Set 1
Patch Set 2 : Delete useless empty loop
Total comments: 12
Patch Set 3 : Review comments
Total comments: 1
Patch Set 4 : Add tests
Patch Set 5 : Some more cleanup
Total comments: 23
Patch Set 6 : Address review comments
Total comments: 10
Patch Set 7 : More review comments
Patch Set 8 : Address review comments v4
Total comments: 8
Patch Set 9 : Review comments v5
Total comments: 4
Patch Set 10 : Review comments + rebase on master
Total comments: 93
Patch Set 11 : review comments v7
Total comments: 2
Patch Set 12 : Fix typos and link to bug
Patch Set 13 : Tidy up single line conditionals
Total comments: 46
Patch Set 14 : more readable code
Patch Set 15 : Fix jsdoc
Total comments: 56
Patch Set 16 : review comments
Patch Set 17 : List -> Plural
Messages
Total messages: 59 (11 generated)
Description was changed from ========== Add Estimated Input Latency - EQT 90th Percentile definition Added an implementation of EIL that mirrors lighthouse definition. Explicitly called the metric EQT 90th Percentile to be clear about what I'm implementing. For a quick overview of the different definitions of EIL that have been proposed, see https://docs.google.com/a/chromium.org/document/d/13XKD4XQ5a_XA_or9ePZnD0rIJY... BUG=catapult:# ========== to ========== Add Estimated Input Latency - EQT 90th Percentile definition Added an implementation of EIL that mirrors lighthouse definition. Explicitly called the metric EQT 90th Percentile to be clear about what I'm implementing. For a quick overview of the different definitions of EIL that have been proposed, see https://docs.google.com/a/chromium.org/document/d/13XKD4XQ5a_XA_or9ePZnD0rIJY... BUG=catapult:# ==========
dproy@chromium.org changed reviewers: + benjhayden@chromium.org, tdresser@chromium.org
dproy@chromium.org changed reviewers: + dproy@chromium.org
PTAL. I'm not sure if this 90th percentile implementation is actually useful if we're starting the window after TTI - I'm having trouble getting a page with non-zero EIL. I don't think it's an implementation error, because increasing the percentile to 100% does give non-zero results. I think the pages just go really quiet after reaching TTI. The code needs some cleanup of TODOs and tests, but I wanted to get some early feedback.
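For readers without the design doc handy, the EQT-percentile idea under discussion can be sketched roughly as follows. This is an illustrative simplification, not the patch's code: the function name, the {start, end} task shape (in ms), and the binary-search formulation are all assumptions made here for clarity.

```javascript
// Illustrative sketch of one "expected queueing time" percentile for a
// single window. Model: an input event arrives at a uniformly random time
// in [windowStart, windowEnd]; its queueing time is the remaining duration
// of the main-thread task running at that moment, or 0 if the thread is idle.
function queueingTimePercentile(tasks, windowStart, windowEnd, percentile) {
  const windowLength = windowEnd - windowStart;
  // Duration of each task clipped to the window.
  const clipped = tasks
      .map(t => Math.min(t.end, windowEnd) - Math.max(t.start, windowStart))
      .filter(d => d > 0);
  // Assuming tasks don't overlap, P(queueingTime > q) is
  // sum_i max(0, d_i - q) / windowLength. Binary-search for the smallest q
  // where this probability drops to (1 - percentile).
  const probAbove = q =>
      clipped.reduce((sum, d) => sum + Math.max(0, d - q), 0) / windowLength;
  let lo = 0;
  let hi = clipped.length ? Math.max(...clipped) : 0;
  for (let i = 0; i < 60; i++) {
    const mid = (lo + hi) / 2;
    if (probAbove(mid) > 1 - percentile) lo = mid;
    else hi = mid;
  }
  return hi;
}
```

For example, a 100 ms window containing one 50 ms task gives a 90th-percentile queueing time of 40 ms: only inputs landing in the first 10 ms of the task (10% of the window) wait longer than 40 ms. This also illustrates the observation above: if the page is nearly idle after TTI, most windows have little busy time and the 90th percentile collapses to zero.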
Description was changed from ========== Add Estimated Input Latency - EQT 90th Percentile definition Added an implementation of EIL that mirrors lighthouse definition. Explicitly called the metric EQT 90th Percentile to be clear about what I'm implementing. For a quick overview of the different definitions of EIL that have been proposed, see https://docs.google.com/a/chromium.org/document/d/13XKD4XQ5a_XA_or9ePZnD0rIJY... BUG=catapult:# ========== to ========== Add Estimated Input Latency - EQT 90th Percentile definition Added an implementation of EIL that mirrors lighthouse definition. Explicitly called the metric EQT 90th Percentile to be clear about what I'm implementing. For a quick overview of the different definitions of EIL that have been proposed, see https://docs.google.com/a/chromium.org/document/d/13XKD4XQ5a_XA_or9ePZnD0rIJY... BUG=631206 ==========
Description was changed from ========== Add Estimated Input Latency - EQT 90th Percentile definition Added an implementation of EIL that mirrors lighthouse definition. Explicitly called the metric EQT 90th Percentile to be clear about what I'm implementing. For a quick overview of the different definitions of EIL that have been proposed, see https://docs.google.com/a/chromium.org/document/d/13XKD4XQ5a_XA_or9ePZnD0rIJY... BUG=631206 ========== to ========== Add Estimated Input Latency - EQT 90th Percentile definition Added an implementation of EIL that mirrors lighthouse definition. Explicitly called the metric EQT 90th Percentile to be clear about what I'm implementing. For a quick overview of the different definitions of EIL that have been proposed, see https://docs.google.com/a/chromium.org/document/d/13XKD4XQ5a_XA_or9ePZnD0rIJY... BUG=chromium:631206 ==========
On 2016/09/08 06:46:41, dproy1 wrote: > PTAL. > > I'm not sure if this 90th percentile implementation is actually useful if we're > starting the window after TTI - I'm having trouble getting a page with non-zero > EIL. I don't think it's an implementation error, because increasing the > percentile to 100% does give non-zero results. I think the pages just go really > quiet after reaching TTI. > > The code needs some cleanup of TODOs and tests, but I wanted to get some early > feedback. Are you testing on Android? On Android, our 90th percentile is about a millisecond (based on UMA). Are you planning on implementing the average queueing time and max windowed average queueing time in a separate patch? We really need a way to share code with Lighthouse...
On 2016/09/08 at 15:45:14, tdresser wrote: > On 2016/09/08 06:46:41, dproy1 wrote: > > PTAL. > > > > I'm not sure if this 90th percentile implementation is actually useful is we're > > starting the window after TTI - I'm having trouble getting a page with non zero > > EIL. I don't think it's an implementation error, because increasing the > > percentile to 100% does give non zero results. I think the pages just go really > > quiet after reaching TTI. > > > > The code needs some cleanup of TODOs and tests, but I wanted to get some early > > feedback. > > Are you testing on Android? On Android, our 90'th percentile is about a millisecond > (based on UMA). > > Are you planning on implementing the average queueing time and max windowed average > queuing time in a separate patch? > > We really need a way to code share with lighthouse... Lighthouse imports catapult. If something is in lighthouse but needed by tbm2 metrics, then it should be upstreamed to catapult, right?
On 2016/09/08 15:52:40, benjhayden wrote: > On 2016/09/08 at 15:45:14, tdresser wrote: > > On 2016/09/08 06:46:41, dproy1 wrote: > > > PTAL. > > > > > > I'm not sure if this 90th percentile implementation is actually useful is > we're > > > starting the window after TTI - I'm having trouble getting a page with non > zero > > > EIL. I don't think it's an implementation error, because increasing the > > > percentile to 100% does give non zero results. I think the pages just go > really > > > quiet after reaching TTI. > > > > > > The code needs some cleanup of TODOs and tests, but I wanted to get some > early > > > feedback. > > > > Are you testing on Android? On Android, our 90'th percentile is about a > millisecond > > (based on UMA). > > > > Are you planning on implementing the average queueing time and max windowed > average > > queuing time in a separate patch? > > > > We really need a way to code share with lighthouse... > > Lighthouse imports catapult. If something is in lighthouse but needed by tbm2 > metrics, then it should be upstreamed to catapult, right? Oh, good. So this'll end up being two sided, one patch adding logic to tbm2, and the next removing it from lighthouse.
Test, please. https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:26: // TODO(dproy): This is copy pasta from loading_metric.html. Factor out. https://github.com/catapult-project/catapult/issues/2784 https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:32: // TODO (dproy): This is copy pasta from loading_metric.html. Factor out. Feel free to move it to ChromeBrowserHelper. https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:58: if (succeededOnSomeDescendant) return true; Why not move this early return up into the loop so that you don't process siblings unnecessarily? https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:68: function isTopLevelSlice(slice) { This name is confusing. Slice and AsyncSlice have their own definitions of "isTopLevel". Can you move this to an arrow function in its caller? https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:74: for (var slice of thread.sliceGroup.topLevelSlices) { I seem to recall the existence of tracing categories that, when enabled, can cause slices in the "toplevel" category to not be in this array, something like MainThread::Run would contain "toplevel" events, so the MainThread::Run slices would be in topLevelSlices instead of the "toplevel" events. Maybe don't worry about it for now since you're focused on telemetry, but be aware in case it looks like something fishy may be happening here. 
https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:80: // TODO(dproy): When LoadExpectation v.1.0 is released, I'll take care of this refactor. You can either change the todo or remove it or leave it, up to you.
> Are you testing on Android? On Android, our 90th percentile is about a > millisecond > (based on UMA). Is this after TTI? I don't have my android device on me right now but I can try it when I'm back in WAT on Monday. > Are you planning on implementing the average queueing time and max windowed > average queueing time in a separate patch? Yes. Tests coming soon. https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:26: // TODO(dproy): This is copy pasta from loading_metric.html. Factor out. On 2016/09/08 16:03:26, benjhayden wrote: > https://github.com/catapult-project/catapult/issues/2784 Ok I'm changing the TODO here to wait until that issue is closed. https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:32: // TODO (dproy): This is copy pasta from loading_metric.html. Factor out. On 2016/09/08 16:03:26, benjhayden wrote: > Feel free to move it to ChromeBrowserHelper. Done. I moved it to ChromeModelHelper because it felt more natural there. https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:58: if (succeededOnSomeDescendant) return true; On 2016/09/08 16:03:26, benjhayden wrote: > Why not move this early return up into the loop so that you don't process > siblings unnecessarily? I actually want it to run on all siblings. The primary use case for this function is to digest renderer scheduler's ProcessTaskFromWorkQueue slices, which are 'toplevel' slices buried a few levels deeper than the thread message loop. See https://github.com/GoogleChrome/lighthouse/issues/489 https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... 
tracing/tracing/metrics/estimated_input_latency_metric.html:68: function isTopLevelSlice(slice) { Done. https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:74: for (var slice of thread.sliceGroup.topLevelSlices) { On 2016/09/08 16:03:26, benjhayden wrote: > I seem to recall the existence of tracing categories that, when enabled, can > cause slices in the "toplevel" category to not be in this array, something like > MainThread::Run would contain "toplevel" events, so the MainThread::Run slices > would be in topLevelSlices instead of the "toplevel" events. > Maybe don't worry about it for now since you're focused on telemetry, but be > aware in case it looks like something fishy may be happening here. That should be ok. I actually want the biggest and topmost slices I see in traceviewer main thread here, not slices that have the trace category 'toplevel'. I'm digging deep into the slice hierarchy anyway to find the actual 'toplevel' slices we care about. https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:80: // TODO(dproy): When LoadExpectation v.1.0 is released, On 2016/09/08 16:03:26, benjhayden wrote: > I'll take care of this refactor. You can either change the todo or remove it or > leave it, up to you. Ok I'll remove the TODO for me here. Thanks!
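The "run on all siblings" behavior debated above can be sketched as follows. This is a hedged illustration, not the patch's export: the function name is hypothetical, and it assumes each slice exposes its children as `subSlices`. The point is that the loop has no early return, so every sibling subtree is searched and each topmost matching slice (e.g. every ProcessTaskFromWorkQueue slice buried under the message loop) is visited exactly once.

```javascript
// Visit the topmost slices satisfying `predicate`: test top-down, fire the
// callback on a match and stop descending below it, but keep searching every
// sibling subtree -- matches can appear under more than one sibling.
function findTopmostMatchingSlices(slice, predicate, cb) {
  if (predicate(slice)) {
    cb(slice);
    return true;
  }
  let succeededOnSomeDescendant = false;
  for (const child of slice.subSlices) {
    // Intentionally no early return here.
    if (findTopmostMatchingSlices(child, predicate, cb)) {
      succeededOnSomeDescendant = true;
    }
  }
  return succeededOnSomeDescendant;
}
```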
On 2016/09/08 19:45:53, dproy1 wrote: > > Are you testing on Android? On Android, our 90'th percentile is about a > > millisecond > > (based on UMA). > > > > Is this after TTI? I don't have my android device on me right now but I can try > it when I'm back in WAT on Monday. > > > Are you planning on implementing the average queueing time and max windowed > > average queuing time in a separate patch? > > Yes. > > > > Tests > > coming soon > > https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... > File tracing/tracing/metrics/estimated_input_latency_metric.html (right): > > https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... > tracing/tracing/metrics/estimated_input_latency_metric.html:26: // TODO(dproy): > This is copy pasta from loading_metric.html. Factor out. > On 2016/09/08 16:03:26, benjhayden wrote: > > https://github.com/catapult-project/catapult/issues/2784 > > Ok I'm changing the TODO here to wait until that issue is closed. > > https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... > tracing/tracing/metrics/estimated_input_latency_metric.html:32: // TODO (dproy): > This is copy pasta from loading_metric.html. Factor out. > On 2016/09/08 16:03:26, benjhayden wrote: > > Feel free to move it to ChromeBrowserHelper. > > Done. I moved it to ChromeModelHelper because it felt more natural there. > > https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... > tracing/tracing/metrics/estimated_input_latency_metric.html:58: if > (succeededOnSomeDescendant) return true; > On 2016/09/08 16:03:26, benjhayden wrote: > > Why not move this early return up into the loop so that you don't process > > siblings unnecessarily? > > I actually want it to run on all siblings. The primary use case for this > function is to digest renderer scheduler's ProcessTaskFromWorkQueue slices, > which are 'toplevel' slices buried a few levels deeper that the thread message > loop. 
See https://github.com/GoogleChrome/lighthouse/issues/489 > > https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... > tracing/tracing/metrics/estimated_input_latency_metric.html:68: function > isTopLevelSlice(slice) { > Done. > > https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... > tracing/tracing/metrics/estimated_input_latency_metric.html:74: for (var slice > of thread.sliceGroup.topLevelSlices) { > On 2016/09/08 16:03:26, benjhayden wrote: > > I seem to recall the existence of tracing categories that, when enabled, can > > cause slices in the "toplevel" category to not be in this array, something > like > > MainThread::Run would contain "toplevel" events, so the MainThread::Run slices > > would be in topLevelSlices instead of the "toplevel" events. > > Maybe don't worry about it for now since you're focused on telemetry, but be > > aware in case it looks like something fishy may be happening here. > > That should be ok. I actually want the biggest and topmost slices I see in > traceviewer main thread here, not slices that have the trace category > 'toplevel'. I'm digging deep in to the slice hierarchy anyway to find the actual > 'toplevel' slices we care about. > > https://codereview.chromium.org/2323533003/diff/20001/tracing/tracing/metrics... > tracing/tracing/metrics/estimated_input_latency_metric.html:80: // TODO(dproy): > When LoadExpectation v.1.0 is released, > On 2016/09/08 16:03:26, benjhayden wrote: > > I'll take care of this refactor. You can either change the todo or remove it > or > > leave it, up to you. > > Ok I'll remove the TODO for me here. Thanks! Er, nope, that isn't after TTI, good point. I suspect those numbers will be a lot higher on Android though.
benjhayden@chromium.org changed reviewers: + charliea@chromium.org
@charliea, style master, should we start using 'let'? https://codereview.chromium.org/2323533003/diff/40001/tracing/tracing/metrics... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/40001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:202: function getEQTPercentilesForWindow( You'll need to export these functions so that lighthouse and other metrics can access them.
Added tests and some general cleanup. PTAL.
https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:3: Copyright (c) 2015 The Chromium Authors. All rights reserved. 2016 https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:24: var BASE_RESPONSE_LATENCY = 0; What is this? Also, if it's a number of milliseconds, can you add "_MS" to the name? https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:41: if (succeededOnSomeDescendant) return true; Please put the return on the next line https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:43: if (predicate(slice)) { We find it easier to read if the normal flow of execution is outside of branches, and you don't need to use 'else' when the 'if-then' branch returns. if (!predicate(slice)) return false; cb(slice); return true; https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:142: var busyTime = 0; Can you use Statistics.sum()? https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:145: unnecessary blank line https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:161: unnecessary blank line https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:171: remainingCount -= (duration < 0 ? 
-1 : 1); Can you flip this to '+=' to avoid the double negative? https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:199: * Note: This is adapted from Will Lighthouse be able to use this instead of their own implementation? It would be great to deduplicate this code. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:283: EILCalculateRiskPercentiles: calculateRiskPercentiles, "EIL" makes it look like a class name. Can you name the function "calculateEILRiskPercentiles" and export it by that name? https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... File tracing/tracing/metrics/estimated_input_latency_metric_test.html (right): https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric_test.html:106: taskList.forEach(task => { can you use for..of instead? https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric_test.html:107: assert(task.duration <= tr.metrics.sh.RESPONSIVENESS_THRESHOLD); can you use assert.isAbove() or assert.isBelow() instead? https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... File tracing/tracing/metrics/system_health/loading_metric.html (right): https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/system_health/loading_metric.html:400: RESPONSIVENESS_THRESHOLD: RESPONSIVENESS_THRESHOLD, I assume this and the next line are numbers of milliseconds? Can you add "_MS" to the names of both variables and the exported names? Sorry we didn't catch this when this code was written :-/
FYI: https://codereview.chromium.org/2334233003 If this CL lands, you'll need to merge your NumericValues with your Histograms and use values.addHistogram() instead of addValue().
Addressed all the comments. PTAL. Looks like https://codereview.chromium.org/2334233003 had a try job failure so I didn't change to addHistogram yet. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:3: Copyright (c) 2015 The Chromium Authors. All rights reserved. On 2016/09/15 20:28:39, benjhayden wrote: > 2016 Done https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:24: var BASE_RESPONSE_LATENCY = 0; On 2016/09/15 20:28:39, benjhayden wrote: > What is this? > > Also, if it's a number of milliseconds, can you add "_MS" to the name? I got rid of it since I decided to keep it at 0. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:41: if (succeededOnSomeDescendant) return true; On 2016/09/15 20:28:38, benjhayden wrote: > Please put the return on the next line Done. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:43: if (predicate(slice)) { On 2016/09/15 20:28:39, benjhayden wrote: > We find it easier to read if the normal flow of execution is outside of > branches, and you don't need to use 'else' when the 'if-then' branch returns. > > if (!predicate(slice)) > return false; > > cb(slice); > return true; Done. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:142: var busyTime = 0; On 2016/09/15 20:28:39, benjhayden wrote: > Can you use Statistics.sum()? Done. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... 
tracing/tracing/metrics/estimated_input_latency_metric.html:145: On 2016/09/15 20:28:38, benjhayden wrote: > unnecessary blank line Done. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:161: On 2016/09/15 20:28:38, benjhayden wrote: > unnecessary blank line Done. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:171: remainingCount -= (duration < 0 ? -1 : 1); On 2016/09/15 20:28:38, benjhayden wrote: > Can you flip this to '+=' to avoid the double negative? Done. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:199: * Note: This is adapted from On 2016/09/15 20:28:39, benjhayden wrote: > Will Lighthouse be able to use this instead of their own implementation? It > would be great to deduplicate this code. Potentially. If we can land this we can see if importing works nicely with lighthouse. I exported the right functions and ported all the tests from there, so I sure hope it will work. https://codereview.chromium.org/2323533003/diff/80001/tracing/tracing/metrics... tracing/tracing/metrics/estimated_input_latency_metric.html:283: EILCalculateRiskPercentiles: calculateRiskPercentiles, On 2016/09/15 20:28:38, benjhayden wrote: > "EIL" makes it look like a class name. Can you name the function > "calculateEILRiskPercentiles" and export it by that name? Done.
Actually looks like I missed a couple comments. Don't review yet. Sorry about this.
https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:68: for (var bin of tr.b.getOnlyElement(ttiValues).numeric.allBins) curly braces, please https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:238: // The largest pid renderer is probably the one we're interested in Sorry, why? Why not look at all of them? Ideally, metrics would be useful for all traces, not just those from telemetry. https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:250: var eqtTargetPercentiles = [.9]; Should this be a file-level constant? Why .9? https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:261: if (EQT90th !== undefined) EQT90thPercentile.addSample(EQT90th.time); Please put the body of the condition on the next line. https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:276: CalculateEILRiskPercentiles: calculateRiskPercentiles, The capital C makes it look like a class name. It would be ideal if the exported name were the same as the function name.
Ready for review. PTAL. https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:68: for (var bin of tr.b.getOnlyElement(ttiValues).numeric.allBins) On 2016/09/15 22:04:47, benjhayden wrote: > curly braces, please Done. https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:238: // The largest pid renderer is probably the one we're interested in On 2016/09/15 22:04:48, benjhayden wrote: > Sorry, why? Why not look at all of them? > Ideally, metrics would be useful for all traces, not just those from telemetry. This is what TTI/FMP is doing. I don't think I can get TTI for all renderers. I'm also not sure how to report TTI for each renderer individually instead of putting all of them in one histogram. If I put everything in one histogram I think it becomes difficult for telemetry to get the interesting data out. https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:250: var eqtTargetPercentiles = [.9]; On 2016/09/15 22:04:47, benjhayden wrote: > Should this be a file-level constant? > > Why .9? Because it's the 90th percentile - the metric I'm calculating (for now) is named EQT90thPercentile. I could make `var NINETIETH = 0.9` but I think it looks a little silly - I can do it if you want. https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:261: if (EQT90th !== undefined) EQT90thPercentile.addSample(EQT90th.time); On 2016/09/15 22:04:47, benjhayden wrote: > Please put the body of the condition on the next line. Done. 
https://codereview.chromium.org/2323533003/diff/100001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:276: CalculateEILRiskPercentiles: calculateRiskPercentiles, On 2016/09/15 22:04:48, benjhayden wrote: > The capital C makes it look like a class name. > It would be ideal if the exported name were the same as the function name. Done. Sorry about this - should've noticed.
My personal vote is to push off using let for now: I'd rather focus on doing some of the migrations of features that are already enabled and enabling eslint rules that are currently disabled. My rationale is that, if we move things over one by one, we'll feel like we're making real progress on moving the codebase over to ES6, whereas if we just start everything at once, it's likely to be a migration that never really ends.
Thanks for your patience with all my questions and nits. :-) https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:251: var eqtTargetPercentiles = [.9]; Ok, maybe not a file constant, maybe just a comment like // Compute the 90th percentile. https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:253: var EQT90thPercentile = new tr.v.Histogram( lowercase "eqt" https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:260: var filtered = eqtPercentiles.filter(res => res.percentile === 0.9); Using == or === for floats might not be a great idea. What are you trying to do here? Why would getEQTPercentilesForWindow return results for any percentiles besides eqtTargetPercentiles? https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:261: var EQT90th = eqtPercentiles.filter(res => res.percentile === 0.9)[0]; lowercase "eqt"
It looks like the two 'let's in estimated_input_latency_metric.html can be changed to var for now while we gradually modernize tracing if that sounds ok?
Ready for review. PTAL. On 2016/09/15 23:24:41, benjhayden wrote: > It looks like the two 'let's in estimated_input_latency_metric.html can be > changed to var for now while we gradually modernize tracing if that sounds ok? Oh yeah totally. I did a s/let/var/ on the whole file that hilariously changed my `completedTime` to `compvaredTime`. I don't know how those two escaped. > Thanks for your patience with all my questions and nits. :-) Thank you for being so thorough with the review! https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:251: var eqtTargetPercentiles = [.9]; On 2016/09/15 23:23:04, benjhayden wrote: > Ok, maybe not a file constant, maybe just a comment like > // Compute the 90th percentile. Done. https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:253: var EQT90thPercentile = new tr.v.Histogram( On 2016/09/15 23:23:04, benjhayden wrote: > lowercase "eqt" Done. https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:260: var filtered = eqtPercentiles.filter(res => res.percentile === 0.9); On 2016/09/15 23:23:04, benjhayden wrote: > Using == or === for floats might not be a great idea. What are you trying to do > here? Why would getEQTPercentilesForWindow return results for any percentiles > besides eqtTargetPercentiles? riskPercentile computes multiple risk percentiles at once, although currently for this patch we're only computing the 90th percentile. I think I should change the float .90 to integer 90, and return a Map<percentile -> time> instead of Array<{percentile: Number, time: Number}>. That should make things look less uncanny. 
https://codereview.chromium.org/2323533003/diff/140001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:261: var EQT90th = eqtPercentiles.filter(res => res.percentile === 0.9)[0]; On 2016/09/15 23:23:04, benjhayden wrote: > lowercase "eqt" Done.
https://codereview.chromium.org/2323533003/diff/160001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/160001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:251: var eqtTargetPercentiles = [90]; Sorry, I didn't mean to mislead you. Using ints 0-100 to represent percentiles has caused confusion, leading to excruciatingly explicit things like this: https://github.com/catapult-project/catapult/blob/master/tracing/tracing/base... Can you keep all percentiles between 0 and 1? https://codereview.chromium.org/2323533003/diff/160001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:260: var eqt90th = eqtPercentiles.get(90); I like the ES6 Map! Maybe generalize this code so it doesn't look funny if you change eqtTargetPercentiles? for (var [pctile, eqt] of eqtPercentiles) { histogram.addSample(eqt); }
I rebased it on master. The metrics sidepanel is broken for me right now so I can't test this in the sidepanel anymore, but the tests pass. https://codereview.chromium.org/2323533003/diff/160001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/160001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:251: var eqtTargetPercentiles = [90]; On 2016/09/16 22:49:29, benjhayden wrote: > Sorry, I didn't mean to mislead you. Using ints 0-100 to represent percentiles > has caused confusion, leading to excruciatingly explicit things like this: > https://github.com/catapult-project/catapult/blob/master/tracing/tracing/base... > Can you keep all percentiles between 0 and 1? Ah I see. Ok I changed it back. https://codereview.chromium.org/2323533003/diff/160001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:260: var eqt90th = eqtPercentiles.get(90); On 2016/09/16 22:49:29, benjhayden wrote: > I like the ES6 Map! > Maybe generalize this code so it doesn't look funny if you change > eqtTargetPercentiles? > for (var [pctile, eqt] of eqtPercentiles) { > histogram.addSample(eqt); > } Let's do that in the patch where we actually add a second percentile. I'll need to make a separate histogram for each percentile and generating the names dynamically will make the code less amenable to string search.
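For readers following the thread, the generalized loop benjhayden suggested above could look roughly like this. This is a minimal sketch with made-up data: `eqtPercentiles` stands in for the result of `getEQTPercentilesForWindow`, and plain arrays stand in for `tr.v.Histogram` objects, so the names here are illustrative rather than the actual metric code.

```javascript
'use strict';

// Hypothetical result of getEQTPercentilesForWindow for one task window:
// an ES6 Map from percentile (0-1) to expected queueing time in ms.
const eqtPercentiles = new Map([[0.9, 42.5]]);

// One sample collector per target percentile (a plain array standing in
// for a tr.v.Histogram).
const histograms = new Map([[0.9, []]]);

// The generalized loop: one addSample-style call per (percentile, eqt)
// pair, so adding a second target percentile needs no new code here.
for (const [pctile, eqt] of eqtPercentiles) {
  histograms.get(pctile).push(eqt);
}

console.log(histograms.get(0.9)); // → [ 42.5 ]
```

As dproy notes in the reply, the trade-off is that each percentile needs its own dynamically named histogram, which makes the names harder to find by string search.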
Sorry for all of the nits: we're in the process of moving to a lot of these being enforced as presubmits, which would make the process of learning the tracing codebase less painful, but we're not quite there yet. My biggest high-level comment is that this is a *massive* CL. Over 600 new lines of code are added, which makes it really tough to do a thorough review of the code. In order to do a proper code review here, which involves actually grokking what the purpose of each function is, checking to make sure that there aren't unused functions, trying to see if we have existing functions that would make writing this easier, making sure that the high-level goal is necessary, etc... all of this will take 1-2 hours for this CL. Even this quick run-through took more than half an hour. For the sake of catching errors earlier when they're easier to fix and making your and reviewers' lives easier, please try to send code reviews in smaller batches in the future. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:3: Copyright (c) 2015 The Chromium Authors. All rights reserved. nit: this should be: Copyright 2016 The Chromium Authors. All rights reserved. The new copyright doesn't require a (c), and should reflect the current year. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:25: tr.b.getCategoryParts(event.category).indexOf(category) !== -1; nit: line continuations are indented by 4 characters, not 2 https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:28: function runConditionallyOnInnermostDescendants(slice, predicate, cb) { nit: what is an innermost descendant? Are these like leaves of the tree? https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:32: child, predicate, cb); same (line continuations) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:34: succeededOnThisChild || succeededOnSomeDescendant; same (line continuations) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:38: return true; nit: no need for braces around a single line branch https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:42: return false; same (single line branch) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:50: var topLevelPredicate = slice => tr.b.getCategoryParts(slice.category) nit: I don't think we ever break at the ., but instead prefer to break inside the parentheses (like Python) my vote would be for something like: var topLevelPredicate = slice => tr.b.getCategoryParts(slice.category).includes( TOPLEVEL_CATEGORY_NAME) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:53: for (var slice of thread.sliceGroup.topLevelSlices) { same (no need for braces) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:65: var ttiValues = values.getValuesNamed('timeToFirstInteractive'); wrong indentation? 
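To illustrate the predicate style suggested for line 50: trace event categories are comma-separated strings, so the check is whether the parts of the category string include `toplevel`. The sketch below uses a hypothetical stand-in for `tr.b.getCategoryParts` (a simple split on commas); the names are illustrative, not the catapult implementation.

```javascript
'use strict';

// Hypothetical stand-in for tr.b.getCategoryParts: trace event categories
// are comma-separated strings.
const getCategoryParts = category => category.split(',');

const TOPLEVEL_CATEGORY_NAME = 'toplevel';

// Predicate in the suggested style: true when the slice's category list
// includes 'toplevel'.
const topLevelPredicate = slice =>
    getCategoryParts(slice.category).includes(TOPLEVEL_CATEGORY_NAME);

console.log(topLevelPredicate({category: 'disabled-by-default-foo,toplevel'}));
// → true
console.log(topLevelPredicate({category: 'blink'}));
// → false
```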
https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:104: // This is a violation of core assumption. nit: no need for braces https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:125: * Note: Taken from nit: the first line of a jsdoc should be what the function does. I'd definitely say move "Calculate duration at specified percentiles..." as the first line so that people don't have to scan further to see what the purpose is. That means the jsdoc should look something like: /** * Calculate duration at specified percentiles for given population of * durations. * * If one of the durations overlaps the end of the window, the full duration * should be in the duration array, ... * * Note: taken from goo.gl/12KedaW. */ https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:126: * https://github.com/GoogleChrome/lighthouse/blob/a5bbe2338fa474c94bb8758494087... nit: please create bit.ly or goo.gl shortlink and use that so that the line isn't too long https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:128: * Calculate duration at specified percentiles for given population of I'm not sure I understand what duration we're calculating or what a population of durations is. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:130: * If one of the durations overlaps the end of the window, the full nit: if you want different paragraphs within a jsdoc, please add a blank line (just ' *') between them. If you don't want different paragraphs, move "If one of..." to the end of the previous line. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:135: * @see https://docs.google.com/document/d/18gvP-CBA2BiBpi3Rz1I1ISciKGhniTSZ9TY0XCnXS... nit: line is too long. Please create a shortlink to the doc https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:138: durations, totalTime, percentiles, clippedLength) { nit: line continuations get four indents https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:144: // Start with idle time already compvare. nit: compvare => compare https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:147: var cdfTime = completedTime; I can kind of vaguely guess what cdfTime is, but I don't think it's a fair assumption to assume everyone reading the code will have the same knowledge of statistics. It probably warrants a comment. Also, I'm not sure "cdfTime" is a great variable name here. Even if you write out the abbreviation, "cumulativeDistributionFunctionTime"... what the heck is that? I know that the cumulative distribution function gives the probability that the function will take on a value less than X, but what does that make a "cdfTime"? All of these questions make me think that a better variable name is in order. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:153: // If there was a clipped duration, one less in count nit: no need for braces https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:183: percentile, nit: line continuations get four indents https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:192: * https://github.com/GoogleChrome/lighthouse/blob/a5bbe2338fa474c94bb8758494087... nit: shortlink. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:194: function getEQTPercentilesForWindow( nit: what the heck is an EQT? https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:231: * @param {!Object=} opt_options what type of options are available? This might also be a good place to explain what the estimated input latency metric *is*. You may know, but people looking at this file a year from now likely will not. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:234: nit: unnecessary blank line https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:245: interactiveTimestamps, nit: line continuations https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:253: 'Expected Queueing Time 90th Percentile', nit: line continuations https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:255: tr.v.HistogramBinBoundaries.createExponential(0.1, 1e3, 100)); nit: I think the best practice is to create these boundaries as a constant near the top of the namespace scope. It'd also be good to provide some justification as to why .1 and 1000 are reasonable choices: you've likely thought about this more than anyone else, and trying to figure this out in reverse in the future might be painful https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:259: eqtTargetPercentiles, rendererHelper, taskWindow.start, taskWindow.end); nit: line continuations https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:262: eqt90thPercentile.addSample(eqt90th); nit: single line branch https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric_test.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:17: var FPTOLERANCE = 0.1; no need for this constant. Just about everywhere else in the tracing codebase, we just inline it. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:32: // Need to make sure the tasks in the interactive window are less that nit: comments in tracing/ are complete sentences that end with a period. (in other words, please add "We" at the beginning of the sentence.) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric_test.html:39: task.duration, tr.metrics.sh.RESPONSIVENESS_THRESHOLD_MS); nit: line continuations https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:65: 'ptr', 'loading', 'FrameLoader', 300, nit: line continuations https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:67: documentLoaderURL: 'http://example.com'}); nit: wrong indent here https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:71: rendererProcess, startNavigationTime, ttiTime) { nit: line continuations https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:93: newSchedulerTask(ttiTime + INTERACTIVE_WINDOW_SIZE_MS, 0)); nit: line continuations https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:111: 'Expected Queueing Time 90th Percentile')); same https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:121: postTTITasks.push({start: startTime, duration: taskDur}); nit: no need for braces in one line branch https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:132: var postTTITasks = []; // No tasks after TTI nit: comments in TV are complete sentences that end with a period. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric_test.html:157: postTTITasks.push({start: startTime, duration: taskDur}); nit: one line branch https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:170: nit: no need for blank line https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:189: win4Tasks.push({start: startTime, duration: taskDur}); nit: no need for braces https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:216: nit: no need for blank line https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:255: schedulerDoWork.subSlices.push(newSchedulerTask(start, taskDur)); nit: no need for braces https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:267: nit: extra blank line https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:372: // Note(dproy): Since we now supply the percentiles as ints instead of Please make this NOTE: and not Note(dproy). We usually omit the author's name because we can always look back at who wrote the note anyhow. Unlike a TODO, there's not a guarantee that we'll come back to this. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:379: [duration1, duration2], totalTime, [1], 0); nit: line continuations (here and elsewhere) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/model/... 
File tracing/tracing/model/helpers/chrome_model_helper.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/model/... tracing/tracing/model/helpers/chrome_model_helper.html:119: // This is often used to find the target renderer for metrics calculation Note: write jsdocs for function headers. Also, is there some guarantee that the largest pid corresponds to the target renderer? If so, that's definitely worth noting. In other words, this should probably look something like: /** * Returns the renderer with the largest PID. * * This is useful in metrics calculation because <reason>. */
On 2016/09/19 at 13:26:24, dproy wrote: > I rebased it on master. The metrics sidepanel is broken for me right now so I can't test this in the sidepanel anymore, but the tests pass. It's working for me, except I broke the longTasksMetric. Can you rebase and file a bug if it's still broken?
https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:144: // Start with idle time already compvare. On 2016/09/19 at 14:35:33, charliea wrote: > nit: compvare => compare Actually, I think this is supposed to be "complete". He had replaced all "let" with "var" :-) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:231: * @param {!Object=} opt_options On 2016/09/19 at 14:35:32, charliea wrote: > what type of options are available? This might also be a good place to explain what the estimated input latency metric *is*. You may know, but people looking at this file a year from now likely will not. This metric doesn't support any opt_options, so I think it's appropriate for this jsdoc to not explain opt_options.rangeOfInterest, which is currently the only option. I think that the metrics that do support it should document it.
I'm sorry this CL is getting so difficult to review. The actual new stuff in this CL is the part where I iterate the innermost toplevel slices (meant to address https://github.com/GoogleChrome/lighthouse/issues/489), which is less than 100 lines. Everything else is either copy pasted from lighthouse, or copy pasted from other metrics. When the metric was initially up for review, I didn't include tests (because in my mind this was already a tested metric in lighthouse, and therefore adding tests here was more a formality than reassurance of correctness), and I more explicitly annotated the copy pasta so it would be easier to focus on the actual new thing here, but it seems I probably should have exercised better judgment and broken down the CL more. I would abandon this CL and send smaller broken down ones, but this CL has already gone through too many cycles of review and at this point it's probably less time consuming for everyone involved to try to land this. I've added lots of comments in the latest patch to make things a little easier to understand, but if there are more things to fix in actual internals of the metric calculation, perhaps they can be done in a subsequent patch if needed? It is not even clear at this point if the current definition of the metric is useful at all - we're somewhat at the prototype/rapid-iteration phase and there is a nontrivial probability all this code will be thrown away in three weeks. The point of trying to land this metric was so that other people can start playing around with it and iterate - I'm thinking now I should have done the prototyping in a separate branch instead of trying to land on master. Anyway, PTAL. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:3: Copyright (c) 2015 The Chromium Authors. All rights reserved. On 2016/09/19 14:35:33, charliea wrote: > nit: this should be: > > Copyright 2016 The Chromium Authors. All rights reserved. > > The new copyright doesn't require a (c), and should reflect the current year. > Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:25: tr.b.getCategoryParts(event.category).indexOf(category) !== -1; On 2016/09/19 14:35:32, charliea wrote: > nit: line continuations are indented by 4 characters, not 2 Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:28: function runConditionallyOnInnermostDescendants(slice, predicate, cb) { On 2016/09/19 14:35:33, charliea wrote: > nit: what is an innermost descendant? Are these like leaves of the tree? This function is a little unfortunately named but I couldn't find a better word. It works like this: The callback cb is executed on this slice only if the predicate is true on this slice and the predicate is not true on any descendant. Then recurse this on all the children. What this function is used for in this CL is to find the innermost toplevel slices. We have some slices containing the category toplevel and we want to act on those, but if a toplevel slice contains other toplevel tasks I want to ignore the outer toplevel task and only want to act on the inner toplevel slices. I added a js-doc comment for this function. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:32: child, predicate, cb); On 2016/09/19 14:35:33, charliea wrote: > same (line continuations) Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:34: succeededOnThisChild || succeededOnSomeDescendant; On 2016/09/19 14:35:33, charliea wrote: > same (line continuations) Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:38: return true; On 2016/09/19 14:35:33, charliea wrote: > nit: no need for braces around a single line branch Does our style guide forbid this? I really dislike not putting braces around if blocks even if it's one line because I switch often between writing python and js and it becomes really easy for me to just add another indented statement below without remembering to wrap the whole block with braces, leading to many accumulated hours of debugging frustration. Related SO thread: http://stackoverflow.com/a/2125078/2367211 Of course, I'm all for consistent styling across codebase too, so if you feel this makes the codebase too inconsistent, I'll remove the braces. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:50: var topLevelPredicate = slice => tr.b.getCategoryParts(slice.category) On 2016/09/19 14:35:32, charliea wrote: > nit: I don't think we ever break at the ., but instead prefer to break inside > the parentheses (like Python) > > my vote would be for something like: > > var topLevelPredicate = slice => > tr.b.getCategoryParts(slice.category).includes( > TOPLEVEL_CATEGORY_NAME) Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:65: var ttiValues = values.getValuesNamed('timeToFirstInteractive'); On 2016/09/19 14:35:32, charliea wrote: > wrong indentation? Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:125: * Note: Taken from On 2016/09/19 14:35:33, charliea wrote: > nit: the first line of a jsdoc should be what the function does. I'd definitely > say move "Calculate duration at specified percentiles..." as the first line so > that people don't have to scan further to see what the purpose is. That means > the jsdoc should look something like: > > /** > * Calculate duration at specified percentiles for given population of > * durations. > * > * If one of the durations overlaps the end of the window, the full duration > * should bein the duration array, ... > * > * Note: taken from goo.gl/12KedaW. > */ Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:126: * https://github.com/GoogleChrome/lighthouse/blob/a5bbe2338fa474c94bb8758494087... On 2016/09/19 14:35:33, charliea wrote: > nit: please create bit.ly or goo.gl shortlink and use that so that the line > isn't too long Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:128: * Calculate duration at specified percentiles for given population of On 2016/09/19 14:35:32, charliea wrote: > I'm not sure I understand what duration we're calculating or what a population > of durations is. Clarified. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:130: * If one of the durations overlaps the end of the window, the full On 2016/09/19 14:35:32, charliea wrote: > nit: if you want different paragraphs within a jsdoc, please add a blank line > (just ' *') between them. If you don't want different paragraphs, move "If one > of..." to the end of the previous line. Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:135: * @see https://docs.google.com/document/d/18gvP-CBA2BiBpi3Rz1I1ISciKGhniTSZ9TY0XCnXS... On 2016/09/19 14:35:33, charliea wrote: > nit: line is too long. Please create a shortlink to the doc Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:138: durations, totalTime, percentiles, clippedLength) { On 2016/09/19 14:35:33, charliea wrote: > nit: line continuations get four indents Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:144: // Start with idle time already compvare. On 2016/09/19 20:00:48, benjhayden wrote: > On 2016/09/19 at 14:35:33, charliea wrote: > > nit: compvare => compare > > Actually, I think this is supposed to be "complete". He had replaced all "let" > with "var" :-) Yes this was indeed complete. s/let/var got me again - I need to find/write an AST based text substitution package. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:147: var cdfTime = completedTime; On 2016/09/19 14:35:33, charliea wrote: > I can kind of vaguely guess what cdfTime is, but I don't think it's a fair > assumption to assume everyone reading the code will have the same knowledge of > statistics. It probably warrants a comment. > > Also, I'm not sure "cdfTime" is a great variable name here. Even if you write > out the abbreviation, "cumulativeDistributionFunctionTime"... what the heck is > that? I know that the cumulative distribution function gives the probability > that the function will take on a value less than X, but what does that make a > "cdfTime"? All of these questions make me think that a better variable name is > in order. 
I didn't write this code, and I'm finding it very hard to find a name that captures what this variable actually is. Very vaguely, it's cdf multiplied by total time at the end of each iteration, but (1) that's not even strictly true because it's more like "projected cdfTime at the end of this iteration if we had processed all the tasks with same length of the current task", and (2) cdfTimesTime is hardly an improvement. I added copious amounts of comments before its definition, but the best way to understand this variable is just to pretend it's called "x" or something opaque, and see its value change through the loop iteration. If we know we will stick with this definition of EIL in the long run it may be worth it to revisit this function and write it in a slightly easier-to-understand way, but I'm reluctant to invest the time at the moment. The tests are quite comprehensive and prove beyond reasonable doubt to me that it does what it's supposed to do. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:183: percentile, On 2016/09/19 14:35:32, charliea wrote: > nit: line continuations get four indents Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:192: * https://github.com/GoogleChrome/lighthouse/blob/a5bbe2338fa474c94bb8758494087... On 2016/09/19 14:35:33, charliea wrote: > nit: shortlink. Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:194: function getEQTPercentilesForWindow( On 2016/09/19 14:35:33, charliea wrote: > nit: what the heck is an EQT? Added comments. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:231: * @param {!Object=} opt_options On 2016/09/19 20:00:48, benjhayden wrote: > On 2016/09/19 at 14:35:32, charliea wrote: > > what type of options are available? This might also be a good place to explain > what the estimated input latency metric *is*. You may know, but people looking > at this file a year from now likely will not. > > This metric doesn't support any opt_options, so I think it's appropriate for > this jsdoc to not explain opt_options.rangeOfInterest, which is currently the > only option. I think that the metrics that do support it should document it. Let me know if I should remove opt_options altogether. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:234: On 2016/09/19 14:35:33, charliea wrote: > nit: unnecessary blank line Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:245: interactiveTimestamps, On 2016/09/19 14:35:33, charliea wrote: > nit: line continuations Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:253: 'Expected Queueing Time 90th Percentile', On 2016/09/19 14:35:33, charliea wrote: > nit: line continuations Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:255: tr.v.HistogramBinBoundaries.createExponential(0.1, 1e3, 100)); On 2016/09/19 14:35:33, charliea wrote: > nit: I think the best practice is to create these boundaries as a constant near > the top of the namespace scope. 
It'd also be good to provide some justification > as to why .1 and 1000 are reasonable choices: you've likely thought about this > more than anyone else, and trying to figure this out in reverse in the future > might be painful Done: made these top level constants with some comments. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:259: eqtTargetPercentiles, rendererHelper, taskWindow.start, taskWindow.end); On 2016/09/19 14:35:32, charliea wrote: > nit: line continuations Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric_test.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:17: var FPTOLERANCE = 0.1; On 2016/09/19 14:35:34, charliea wrote: > no need for this constant. Just about everywhere else in the tracing codebase, > we just inline it. Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:39: task.duration, tr.metrics.sh.RESPONSIVENESS_THRESHOLD_MS); On 2016/09/19 14:35:34, charliea wrote: > nit: line continuations Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:65: 'ptr', 'loading', 'FrameLoader', 300, On 2016/09/19 14:35:34, charliea wrote: > nit: line continuations Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:67: documentLoaderURL: 'http://example.com'}); On 2016/09/19 14:35:34, charliea wrote: > nit: wrong indent here What's the correct indent here? Line 67 shifted four spaces to the right? 
https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:71: rendererProcess, startNavigationTime, ttiTime) { On 2016/09/19 14:35:34, charliea wrote: > nit: line continuations Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:93: newSchedulerTask(ttiTime + INTERACTIVE_WINDOW_SIZE_MS, 0)); On 2016/09/19 14:35:34, charliea wrote: > nit: line continuations Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:111: 'Expected Queueing Time 90th Percentile')); On 2016/09/19 14:35:34, charliea wrote: > same Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:170: On 2016/09/19 14:35:33, charliea wrote: > nit: no need for blank line Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:216: On 2016/09/19 14:35:34, charliea wrote: > nit: no need for blank line Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:267: On 2016/09/19 14:35:33, charliea wrote: > nit: extra blank line Done. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:372: // Note(dproy): Since we now supply the percentiles as ints instead of On 2016/09/19 14:35:34, charliea wrote: > Please make this NOTE: and not Note(dproy). We usually omit the author's name > because we can always look back at who wrote the note anyhow. Unlike a TODO, > there's not a guarantee that we'll come back to this. 
This note is obsolete because I changed percentiles back to float. Thanks for catching. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric_test.html:379: [duration1, duration2], totalTime, [1], 0); On 2016/09/19 14:35:34, charliea wrote: > nit: line continuations (here and elsewhere) Done (here and elsewhere) https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/model/... File tracing/tracing/model/helpers/chrome_model_helper.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/model/... tracing/tracing/model/helpers/chrome_model_helper.html:119: // This is often used to find the target renderer for metrics calculation On 2016/09/19 14:35:34, charliea wrote: > Note: write jsdocs for function headers. Also, is there some guarantee that the > largest pid corresponds to the target renderer? If so, that's definitely worth > noting. In other words, this should probably look something like: > > /** > * Returns the renderer with the largest PID. > * > * This is useful in metrics calculation because <reason>. > */ I added a more detailed comment in the latest patch. I really wish I didn't have to use this hack, but I followed https://github.com/catapult-project/catapult/blob/master/tracing/tracing/metr.... If you know of any way to avoid this approach altogether I would be very very happy to know.
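For readers without the patch in front of them, the largest-PID heuristic under discussion amounts to something like the following sketch. The `rendererHelpers` map shape (pid -> helper) and the function name are illustrative assumptions, not the actual ChromeModelHelper code:

```javascript
// Hypothetical sketch of the "renderer with the largest PID" heuristic.
// Assumes rendererHelpers is a plain object mapping pid -> renderer
// helper, which is only an approximation of the real model helper.
function rendererHelperWithLargestPid(rendererHelpers) {
  var largestPid = -1;
  for (var pid in rendererHelpers) {
    // Object keys are strings, so convert before comparing.
    var numericPid = Number(pid);
    if (numericPid > largestPid) largestPid = numericPid;
  }
  return rendererHelpers[largestPid];
}
```

As the thread notes, there is no hard guarantee that the largest PID is the target renderer; this is a stopgap tracked in the linked issue.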
https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:28: function runConditionallyOnInnermostDescendants(slice, predicate, cb) { On 2016/09/20 at 01:11:43, dproy1 wrote: > On 2016/09/19 14:35:33, charliea wrote: > > nit: what is an innermost descendant? Are these like leaves of the tree? > > This function is a little unfortunately named but I couldn't find a better word. It works like this: The callback cb is executed on this slice only if the predicate is true on this slice and the predicate is not true on any descendant. Then recurse this on all the children. > > What this function is used for in this CL is to find the innermost toplevel slices. We have some slices containing the category toplevel and we want to act on those, but if a toplevel slice contains other toplevel tasks I want to ignore the outer toplevel task and only want to act on the inner toplevel slices. > > I added a js-doc comment for this function. Thanks for the explanation! Would it be possible to simplify this code by modifying Chrome's tracing macros? Is there a conceivable future where there's a single layer of "toplevel" slices, or a few layers of "roof" and "penthouse" slices that would be easier to find and distinguish between? If so, can you follow-up with fmeawad about discussing this at the instrumentation weekly? I won't block landing this function, but I think it would be better if this function and the whole idea of innermost descendant were not needed. https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:38: return true; On 2016/09/20 at 01:11:43, dproy1 wrote: > On 2016/09/19 14:35:33, charliea wrote: > > nit: no need for braces around a single line branch > > Does our style guide forbid this? I really dislike not putting braces around if blocks even if it's one line because I switch often between writing python and js and it becomes really easy for me to just add another indented statement below without remembering to wrap the whole block with braces, leading to many accumulated hours of debugging frustration. Related SO thread: http://stackoverflow.com/a/2125078/2367211 > > Of course, I'm all for consistent styling across the codebase too, so if you feel this makes the codebase too inconsistent, I'll remove the braces. For what it's worth, Ethan and I also wish that the style guide required putting braces around single-line blocks, but this CL isn't the time or place to hash it out. I'll start a bug, but it's probably best to go with the flow for now and remove the braces.
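As a standalone illustration of the innermost-descendant traversal described earlier in this thread, here is a minimal sketch. The slice shape (`{category, subSlices}`) is an assumption for illustration only; this is not the patch's actual code:

```javascript
// Returns true if the predicate holds on any descendant of slice.
function anyDescendantMatches(slice, predicate) {
  return slice.subSlices.some(function(child) {
    return predicate(child) || anyDescendantMatches(child, predicate);
  });
}

// Runs cb on a slice iff the predicate holds on that slice but on none
// of its descendants; recursion then continues into the children, so an
// outer "toplevel" slice is skipped in favor of inner "toplevel" slices.
function runConditionallyOnInnermostDescendants(slice, predicate, cb) {
  if (predicate(slice) && !anyDescendantMatches(slice, predicate))
    cb(slice);
  slice.subSlices.forEach(function(child) {
    runConditionallyOnInnermostDescendants(child, predicate, cb);
  });
}
```

For a toplevel slice nested inside another toplevel slice, only the inner one fires the callback, which matches the intent described above.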
https://codereview.chromium.org/2323533003/diff/200001/tracing/tracing/model/... File tracing/tracing/model/helpers/chrome_model_helper.html (right): https://codereview.chromium.org/2323533003/diff/200001/tracing/tracing/model/... tracing/tracing/model/helpers/chrome_model_helper.html:130: * renderers and this helper method should be removed. Can you link to this bug? https://github.com/catapult-project/catapult/issues/2820
benjhayden@chromium.org changed reviewers: - dproy@chromium.org
After you add that link to #2820, lgtm, but please wait for charliea and tdresser. If they're still finding it difficult to review such a big patch, it's never too late to split it. I suspect that a design sketch and a single CL will be all that is required to modify the toplevel trace events so that we don't need innermost descendants, but I don't have the spare brainpower to figure it out. The instrumentation weekly can probably help you there. Figuring out whether and how to split a CL can be a tough call. My rule of thumb is to err on the side of splitting if I have any doubt. Another trick that can help is splitting it into multiple patch sets in the same CL like https://codereview.chromium.org/2329843002#msg18 An even better trick is to schedule a VC to walk through the patch. In this case, porting the necessary functions from lighthouse would have been a good first CL. We aren't lighthouse devs, we aren't familiar with those functions or their style or how they're tested, so it's brand new code to us. It's our fault for not suggesting splitting it earlier, don't sweat it. :-)
dproy@chromium.org changed reviewers: + dproy@chromium.org
https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/180001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:28: function runConditionallyOnInnermostDescendants(slice, predicate, cb) { On 2016/09/20 05:52:35, benjhayden wrote: > On 2016/09/20 at 01:11:43, dproy1 wrote: > > On 2016/09/19 14:35:33, charliea wrote: > > > nit: what is an innermost descendant? Are these like leaves of the tree? > > > > This function is a little unfortunately named but I couldn't find a better > word. It works like this: The callback cb is executed on this slice only if the > predicate is true on this slice and the predicate is not true on any descendant. > Then recurse this on all the children. > > > > What this function is used for in this CL is to find the innermost toplevel > slices. We have some slices containing the category toplevel and we want to act > on those, but if a toplevel slice contains other toplevel tasks I want to ignore > the outer toplevel task and only want to act on the inner toplevel slices. > > > > I added a js-doc comment for this function. > > Thanks for the explanation! > Would it be possible to simplify this code by modifying Chrome's tracing macros? > Is there a conceivable future where there's a single layer of "toplevel" slices, > or a few layers of "roof" and "penthouse" slices that would be easier to find > and distinguish between? If so, can you follow-up with fmeawad about discussing > this at the instrumentation weekly? > I won't block landing this function, but I think it would be better if this > function and the whole idea of innermost descendant were not needed. 
One way to get around this would be to just look for "TaskQueueManager::ProcessTaskFromWorkQueue" events (which is what TTI does: https://github.com/catapult-project/catapult/blob/master/tracing/tracing/metr...) but the pedant inside me thinks that is not 100% correct since there are still a few message loop cycles that are not blink scheduler cycles, and that approach will miss those tasks. skyostil's comment here (https://github.com/GoogleChrome/lighthouse/issues/489) also made me think that it's more future-proof to not hardcode the slice name in the code. But yes, we should probably discuss this more to see if we can do better. https://codereview.chromium.org/2323533003/diff/200001/tracing/tracing/model/... File tracing/tracing/model/helpers/chrome_model_helper.html (right): https://codereview.chromium.org/2323533003/diff/200001/tracing/tracing/model/... tracing/tracing/model/helpers/chrome_model_helper.html:130: * renderers and this helper method should be removed. On 2016/09/20 05:58:17, benjhayden wrote: > Can you link to this bug? > https://github.com/catapult-project/catapult/issues/2820 Done.
https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:24: var smallestInputLatencyMs = 0.1; nit: please use CONSTANT_CASE for constants (here and below). https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:29: // TODO(dproy): Remove after we close #2784 nit: comments end with a period. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:56: function forEachInnermostTopLevelSlices(thread, cb) { I still think this function needs a jsdoc. I'm not sure what an "innermost top-level slice" is https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:85: var list = []; nit: this variable name could be better. in the spirit of "interactiveTsList" above, how about "navStartTsList"? (my preference for all of these would be to just make them "interactiveTimestamps" and "navStartTimestamps". I don't think "list" is necessary if the names themselves are plural, but your choice) https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:86: for (var ev of rendererHelper.mainThread.sliceGroup.childEvents()) { nit: I think we generally use "e" instead of "ev" for a short event name https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:95: // A task window is defined as time from TTI until either Comments before functions should be jsdocs: http://usejsdoc.org/, not normal javascript comments that start with // A couple of things about jsdocs: - They should generally start with "Returns..." when the function works by returning a value (like most good functions should) - They only have to list their parameters if they're not obvious, or their types aren't obvious I'd say a good jsdoc for this would look something like: /** * Returns an array of task windows defined by the supplied data. * * A task window is defined as the range of time from TTI until either * * 1. The beginning of the next navigationStart event or * 2. The end of the trace * * @param {!Array.<number>} interactiveTsList * @param {!Array.<number>} navStartTimeList * @param {!number} endOfTraces * @returns {!Array.<Object>} An array of objects of the form: * {start: startTs, end: endTs} */ Note that the ! before the parameter and return types indicates that passing a null value isn't allowed, which is true based on the code we have here (we don't do any null checks) A couple of notes for my preferences: - As noted above, I'd prefer "ttiTimestamps, navStartTimestamps, traceEndTimestamps" or something like that to the current variable names - "endOfTraces" sounds like it might be plural. I'd prefer "traceEndTs" - Even though they're both arrays of timestamps, "interactiveTsList" and "navStartTimeList" have different naming schemes. They should definitely be made consistent ("navStartTsList/interactiveTsList", or follow my suggestion above) https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:100: var endOfLastWindow = -Infinity; I'd say "undefined" would be a better sentinel value than -Infinity here https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:102: for (var curTTI of interactiveTsList) { If I follow this logic correctly, you're basically saying: - For each TTI, look at the next curNavStartTime or traceEnd. If either is before the next TTI, push a task window and move on. I think this could be more simply expressed with: var lastTaskWindowEndTs = undefined; var navStartTimeIndex = 0; for (var curTTI of interactiveTsList) { while (navStartTimeIndex < navStartTimeList.length && navStartTimeList[navStartTimeIndex] < curTTI) { // It's possible to have multiple navigationStart timestamps // between two interactive timestamps - the previous page load // could have never reached interactive status. navStartTimeIndex++; } var taskWindowEndTs = navStartTimeIndex < navStartTimeList.length ? navStartTimeList[navStartTimeIndex] : traceEndTs; if (taskWindowEndTs === lastTaskWindowEndTs) { // Throw your error... } taskWindows.push({start: curTTI, end: taskWindowEndTs}); lastTaskWindowEndTs = taskWindowEndTs; } https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:103: var curNavStartTime = navStartTimeList[curNavStartTimeIndex]; Note: in some cases, you're intentionally indexing off the end of an array and expecting it to return undefined. You really shouldn't do this, and should instead explicitly check the current index against the array length (see my sample code above). https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:107: // never reach interactive status. supernit: s/never reach/never have reached https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:113: // This is a violation of core assumption. 
supernit: when we have a comment and a TODO, we generally put the TODO above the comment, whereas here it's the other way around https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:114: // TODO: When two pages share a render process, we can possibly nit: TODOs need the name of the person that has the most context about the TODO This should probably be // TODO(dproy): https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:118: throw new Error("Two TTI timestamps with no navigation between them"); I think that, generally, when we fail to compute a metric, we return nothing rather than throwing an error if it's a case we expect to happen. It's not clear from your comment: do you expect this to happen in benchmarks? Also, if we want this error, we should really reword it to indicate that something's wrong. As it stands, if you saw this message with no other context, you might not think it's a problem. Something like: "Encountered an unexpected two TTIs without a navigationStart between them" https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:122: taskWindows.push({start: curTTI, end: endOfTraces}); I think we should be returning ranges here (https://cs.corp.google.com/github/catapult-project/catapult/tracing/tracing/b...) rather than a custom JS object for this. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:134: * Calculate estimated queueing time of a hypothetical input event at nit: as suggested above, please start jsdoc with "Returns..." if the function works primarily through its return value and not side-effects. 
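The task-window construction being discussed can be sketched as a self-contained function. The names and the plain `{start, end}` objects are illustrative (the review above suggests using tr.b.Range instead), and the duplicate-TTI error check is omitted here:

```javascript
// Returns an array of {start, end} task windows. A window runs from each
// TTI (time-to-interactive) timestamp to the next navigationStart, or to
// the end of the trace if no navigationStart follows. Both timestamp
// arrays are assumed to be sorted ascending.
function getTaskWindows(interactiveTimestamps, navStartTimestamps, traceEndTs) {
  var taskWindows = [];
  var navStartIndex = 0;
  for (var curTTI of interactiveTimestamps) {
    // Skip navigationStarts that precede this TTI; those page loads
    // never reached interactive status.
    while (navStartIndex < navStartTimestamps.length &&
           navStartTimestamps[navStartIndex] < curTTI) {
      navStartIndex++;
    }
    var end = navStartIndex < navStartTimestamps.length ?
        navStartTimestamps[navStartIndex] : traceEndTs;
    taskWindows.push({start: curTTI, end: end});
  }
  return taskWindows;
}
```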
https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:137: * For example, if percentiles = [0.5, 0.9], we're calculating the 50th and Great explanation! https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:149: * 50ms duration occurs 10ms before the end of the window, `50` should be in nit: `50` doesn't need backticks around it if 40 doesn't on the next line. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:153: * @see https://goo.gl/AnGtVK nit: looks like this doc has moved? https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:155: * @param {!Array<number>} durations - Array of durations of toplevel tasks of It seems like a much more natural interface for this function would be to accept three parameters: - {!tr.b.Range} windowOfInterest - Range over which the EIL is being calculated. - {!Array.<tr.b.Range>} durations - An array of top-level durations that intersect the window. - {!Array.<number>} percentiles - Array of percentiles of interest in ascending order. Then you perform the necessary clipping in this function right at the start with: var clippedDurations = durations.map((duration) => windowOfInterest.findIntersection(duration)); That seems like it gets rid of a lot of the complexity of the clipping stuff, but keeps everything functionally the same. Or maybe I'm misunderstanding something? As it stands right now, the method contract is kind of weird: some of the stuff you're requiring to be precomputed, but some of it is computed in the function. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:158: * we're calculating estimated queuing time. is this estimated queuing time or estimated input latency? https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:166: durations, totalTime, percentiles, clippedLength) { Looks like clippedLength is optional. If so, please prefix with "opt_", and then have the next line be: var clippedLength = opt_clippedLength || 0; https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:174: var duration = 0; nit: looks like maybe this should be declared way below, inside of the for loop? https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:176: // cdfTime, vaguely speaking, is the current value of Cumulative nit: s/Cumulative/the Cumulative https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:176: // cdfTime, vaguely speaking, is the current value of Cumulative nit: s/Cumulative/the Cumulative also: don't think cumulative distribution function should be capitalized https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:178: // Given a duration t, CDF(t) = Probablity(latency of input event <= t) nit: this probably needs a period at the end of the sentence. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:186: // It may be helpful to look at the design doc: https://goo.gl/AnGtVK Looks like this is the same doc that's moved https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:194: // If there was a clipped duration, one less in count I'm not sure I understand this rationale. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:260: nit: extra line
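The percentile-of-CDF computation that the cdfTime discussion revolves around can be illustrated in simplified form. This sketch ignores the clipped-duration handling debated above and assumes a hypothetical input arriving uniformly at random in the window sees queueing time 0 outside tasks and a uniform value on [0, duration] inside a task; the function name and exact structure are illustrative, not the patch's implementation:

```javascript
// Returns the value t such that CDF(t) = percentile, where
// CDF(t) = (idle time + sum_i min(t, d_i)) / totalTime for task
// durations d_i fully inside a window of length totalTime.
function eqtPercentile(durations, totalTime, percentile) {
  var busy = durations.reduce(function(a, b) { return a + b; }, 0);
  var idle = totalTime - busy;          // time with zero queueing latency
  var sorted = durations.slice().sort(function(a, b) { return a - b; });
  var target = percentile * totalTime;  // CDF mass we need to accumulate
  var cdfMass = idle;                   // CDF(0) * totalTime
  if (cdfMass >= target) return 0;
  var remaining = sorted.length;        // tasks with duration > t so far
  var t = 0;
  for (var d of sorted) {
    // Between t and d, CDF mass grows at rate `remaining` per unit time.
    var massAtD = cdfMass + remaining * (d - t);
    if (massAtD >= target)
      return t + (target - cdfMass) / remaining;
    cdfMass = massAtD;
    t = d;
    remaining--;
  }
  return t;
}
```

For example, one 10ms task in a 20ms window gives a 90th-percentile EQT of 8ms: CDF(t) = (10 + min(t, 10)) / 20 = 0.9 solves to t = 8.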
Sorry, passing by reviewer. From a high-level point of view, can we factor the code so that we have a method that looks like: chrome_model_helper.GetEstimatedInputLatency(startTime, endTime)? A very common request from teams that work on metrics is that they want to know how "responsive" the browser is when they do certain things, e.g: gc, tab switching, loading, playing video.... I think for that purpose, Estimated Input Latency is a great metric to reflect that need.
On 2016/09/29 21:06:53, nednguyen wrote: > Sorry, passing by reviewer. > > From a high level point of view, can we factor the code so that we have a method > that look like: > > chrome_model_helper.GetEstimatedInputLatency(startTime, endTime)? > > > A very common request from teams that work on metrics is that they want to know > how "responsive" browser is when they do certain things, e.g: gc, tab switching, > loading, playing video.... I think for that purpose, Estimated Input Latency is > a great metrics to reflect that need. I'm ok with doing that. @benjhayden: is it ok if I add that method to chrome model helper?
On 2016/09/29 at 21:12:19, dproy wrote: > On 2016/09/29 21:06:53, nednguyen wrote: > > Sorry, passing by reviewer. > > > > From a high level point of view, can we factor the code so that we have a method > > that look like: > > > > chrome_model_helper.GetEstimatedInputLatency(startTime, endTime)? > > > > > > A very common request from teams that work on metrics is that they want to know > > how "responsive" browser is when they do certain things, e.g: gc, tab switching, > > loading, playing video.... I think for that purpose, Estimated Input Latency is > > a great metrics to reflect that need. > > I'm ok with doing that. @benjhayden: is it ok if I add that method to chrome model helper? It seems to me that that function should be exported from estimated_input_latency_metric.html, not at the lower level of ChromeModelHelper. Let's not turn ChromeModelHelper into a grab-bag of metrics. Let's keep metrics in metrics/.
On 2016/09/29 22:59:26, benjhayden wrote: > On 2016/09/29 at 21:12:19, dproy wrote: > > On 2016/09/29 21:06:53, nednguyen wrote: > > > Sorry, passing by reviewer. > > > > > > From a high level point of view, can we factor the code so that we have a > method > > > that look like: > > > > > > chrome_model_helper.GetEstimatedInputLatency(startTime, endTime)? > > > > > > > > > A very common request from teams that work on metrics is that they want to > know > > > how "responsive" browser is when they do certain things, e.g: gc, tab > switching, > > > loading, playing video.... I think for that purpose, Estimated Input Latency > is > > > a great metrics to reflect that need. > > > > I'm ok with doing that. @benjhayden: is it ok if I add that method to chrome > model helper? > > It seems to me that that function should be exported from > estimated_input_latency_metric.html, not at the lower level of > ChromeModelHelper. > Let's not turn ChromeModelHelper into a grab-bag of metrics. Let's keep metrics > in metrics/. Hmhh, then how do you suggest we share that code among the metrics, and the estimated input latency track view?
On 2016/09/29 at 23:25:39, nednguyen wrote: > On 2016/09/29 22:59:26, benjhayden wrote: > > On 2016/09/29 at 21:12:19, dproy wrote: > > > On 2016/09/29 21:06:53, nednguyen wrote: > > > > Sorry, passing by reviewer. > > > > > > > > From a high level point of view, can we factor the code so that we have a > > method > > > > that look like: > > > > > > > > chrome_model_helper.GetEstimatedInputLatency(startTime, endTime)? > > > > > > > > > > > > A very common request from teams that work on metrics is that they want to > > know > > > > how "responsive" browser is when they do certain things, e.g: gc, tab > > switching, > > > > loading, playing video.... I think for that purpose, Estimated Input Latency > > is > > > > a great metrics to reflect that need. > > > > > > I'm ok with doing that. @benjhayden: is it ok if I add that method to chrome > > model helper? > > > > It seems to me that that function should be exported from > > estimated_input_latency_metric.html, not at the lower level of > > ChromeModelHelper. > > Let's not turn ChromeModelHelper into a grab-bag of metrics. Let's keep metrics > > in metrics/. > > Hmhh, then how do you suggest we share that code among the metrics, and the estimated input latency track view? Export the function, with jsdoc comments, and have the other metrics and input-latency-track-view import the metric file and call the exported function. If you're worried about having tracing/ui depend on metrics/, that is already the case with metrics_side_panel. I think it's fine for one metric to import another and call functions that it intentionally exports, except when those functions really belong on other features of the model such as UserExpectations. Where I draw the line is that I think it's not ok for one metric to call another metric and dig through its histograms. Histograms are to be consumed by metricMapFunction and value-set-table and tests, not other metrics. 
But metrics can use and share handy bits of functionality with each other, as long as that functionality doesn't belong with existing features of the model. WDYT? :-)
nednguyen@google.com changed reviewers: + nednguyen@google.com
On 2016/09/29 23:51:29, benjhayden wrote: > On 2016/09/29 at 23:25:39, nednguyen wrote: > > On 2016/09/29 22:59:26, benjhayden wrote: > > > On 2016/09/29 at 21:12:19, dproy wrote: > > > > On 2016/09/29 21:06:53, nednguyen wrote: > > > > > Sorry, passing by reviewer. > > > > > > > > > > From a high level point of view, can we factor the code so that we have > a > > > method > > > > > that look like: > > > > > > > > > > chrome_model_helper.GetEstimatedInputLatency(startTime, endTime)? > > > > > > > > > > > > > > > A very common request from teams that work on metrics is that they want > to > > > know > > > > > how "responsive" browser is when they do certain things, e.g: gc, tab > > > switching, > > > > > loading, playing video.... I think for that purpose, Estimated Input > Latency > > > is > > > > > a great metrics to reflect that need. > > > > > > > > I'm ok with doing that. @benjhayden: is it ok if I add that method to > chrome > > > model helper? > > > > > > It seems to me that that function should be exported from > > > estimated_input_latency_metric.html, not at the lower level of > > > ChromeModelHelper. > > > Let's not turn ChromeModelHelper into a grab-bag of metrics. Let's keep > metrics > > > in metrics/. > > > > Hmhh, then how do you suggest we share that code among the metrics, and the > estimated input latency track view? > > Export the function, with jsdoc comments, and have the other metrics and > input-latency-track-view import the metric file and call the exported function. > If you're worried about having tracing/ui depend on metrics/, that is already > the case with metrics_side_panel. > I think it's fine for one metric to import another and call functions that it > intentionally exports, except when those functions really belong on other > features of the model such as UserExpectations. > Where I draw the line is that I think it's not ok for one metric to call another > metric and dig through its histograms. 
> Histograms are to be consumed by metricMapFunction and value-set-table and > tests, not other metrics. > But metrics can use and share handy bits of functionality with each other, as > long as that functionality doesn't belong with existing features of the model. > WDYT? :-) I think I can buy that. So Deep would add export like: GetEstimatedInputLatency(chrome_model_helper, startTime, endTime) in estimated_input_latency_metric.html, is that correct? Another idea is maybe we add a module like: chrome_metrics_helper.html in tracing/tracing/metrics/.. but that could go for another day. For now, I just want to make sure that we share the math for computing EIL between the track view & this metric.
On 2016/09/30 at 18:21:21, nednguyen wrote: > On 2016/09/29 23:51:29, benjhayden wrote: > > On 2016/09/29 at 23:25:39, nednguyen wrote: > > > On 2016/09/29 22:59:26, benjhayden wrote: > > > > On 2016/09/29 at 21:12:19, dproy wrote: > > > > > On 2016/09/29 21:06:53, nednguyen wrote: > > > > > > Sorry, passing by reviewer. > > > > > > > > > > > > From a high level point of view, can we factor the code so that we have > > a > > > > method > > > > > > that look like: > > > > > > > > > > > > chrome_model_helper.GetEstimatedInputLatency(startTime, endTime)? > > > > > > > > > > > > > > > > > > A very common request from teams that work on metrics is that they want > > to > > > > know > > > > > > how "responsive" browser is when they do certain things, e.g: gc, tab > > > > switching, > > > > > > loading, playing video.... I think for that purpose, Estimated Input > > Latency > > > > is > > > > > > a great metrics to reflect that need. > > > > > > > > > > I'm ok with doing that. @benjhayden: is it ok if I add that method to > > chrome > > > > model helper? > > > > > > > > It seems to me that that function should be exported from > > > > estimated_input_latency_metric.html, not at the lower level of > > > > ChromeModelHelper. > > > > Let's not turn ChromeModelHelper into a grab-bag of metrics. Let's keep > > metrics > > > > in metrics/. > > > > > > Hmhh, then how do you suggest we share that code among the metrics, and the > > estimated input latency track view? > > > > Export the function, with jsdoc comments, and have the other metrics and > > input-latency-track-view import the metric file and call the exported function. > > If you're worried about having tracing/ui depend on metrics/, that is already > > the case with metrics_side_panel. > > I think it's fine for one metric to import another and call functions that it > > intentionally exports, except when those functions really belong on other > > features of the model such as UserExpectations. 
> > Where I draw the line is that I think it's not ok for one metric to call another > > metric and dig through its histograms. > > Histograms are to be consumed by metricMapFunction and value-set-table and > > tests, not other metrics. > > But metrics can use and share handy bits of functionality with each other, as > > long as that functionality doesn't belong with existing features of the model. > > WDYT? :-) > > I think I can buy that. So Deep would add export like: > > GetEstimatedInputLatency(chrome_model_helper, startTime, endTime) in estimated_input_latency_metric.html, is that correct? > > Another idea is maybe we add a module like: chrome_metrics_helper.html in tracing/tracing/metrics/.. but that could go for another day. > > For now, I just want to make sure that we share the math for computing EIL between the track view & this metric. Yep, I think it's ok to have tracing/ui call functions exported from metrics/. Someday we can try to conceptualize a layer in extras/chrome/, but I wouldn't want to over-generalize from this single function. Everybody currently thinks of this as a metric, so I think it's fine to keep it in metrics/.
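The layering agreed on above (the metric file exports a plain, jsdoc'd function that other metrics and tracing/ui import and call) can be sketched as follows. This is a hedged illustration: `exportTo` is a tiny stand-in for trace-viewer's `tr.exportTo` so the sketch runs standalone, and the helper body is a placeholder, not the CL's actual EIL math.

```javascript
// Minimal stand-in for trace-viewer's tr.exportTo namespace helper so this
// sketch runs outside the browser (the real one lives in tracing/base).
const root = {};

function exportTo(namespace, builder) {
  let obj = root;
  for (const part of namespace.split('.')) {
    obj = obj[part] = obj[part] || {};
  }
  Object.assign(obj, builder());
}

// estimated_input_latency_metric.html would export the shared helper:
exportTo('tr.metrics.sh', function() {
  /**
   * Returns the estimated input latency over [startTime, endTime).
   * (Illustrative signature from the discussion above; body is a stub.)
   */
  function getEstimatedInputLatency(rendererHelper, startTime, endTime) {
    return 0;  // Placeholder for the shared EIL computation.
  }
  return {getEstimatedInputLatency};
});

// input-latency-track-view would then import the metric file and call:
const eil = root.tr.metrics.sh.getEstimatedInputLatency(null, 0, 100);
```

The point of the design is that the track view consumes the exported function, not the metric's histograms.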
I addressed all the review comments. I also rewrote the calculateEILRiskPercentile function to make it more readable. I'm splitting up this CL. Let's do the next round of reviews in the smaller CLs. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:24: var smallestInputLatencyMs = 0.1; On 2016/09/22 15:58:50, charliea wrote: > nit: please use CONSTANT_CASE for constants (here and below). Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:29: // TODO(dproy): Remove after we close #2784 On 2016/09/22 15:58:50, charliea wrote: > nit: comments end with a period. Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:56: function forEachInnermostTopLevelSlices(thread, cb) { On 2016/09/22 15:58:49, charliea wrote: > I still think this function needs a jsdoc. I'm not sure what an "innermost > top-level slice" is Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:85: var list = []; On 2016/09/22 15:58:50, charliea wrote: > nit: this variable name could be better. in the spirit of "interactiveTsList" > above, how about "navStartTsList"? > > (my preference for all of these would be to just make them > "interactiveTimestamps" and "navStartTimestamps". I don't think "list" is > necessary if the names themselves are plural, but your choice) I changed them to List. 
My personal preference is '*List' because I keep making lots of typos by leaving out or adding an extra 's', and also every once in a while there's that odd word (news/focus/most words ending with s) that's really awkward to pluralize. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:86: for (var e of rendererHelper.mainThread.sliceGroup.childEvents()) { On 2016/09/22 15:58:49, charliea wrote: > nit: I think we generally use "e" instead of "ev" for a short event name Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:95: // A task window is defined as time from TTI until either On 2016/09/22 15:58:51, charliea wrote: > Comments before functions should be jsdocs: http://usejsdoc.org/, not normal > javascript comments that start with // > > A couple of things about jsdocs: > > - They should generally start with "Returns..." when the function works by > returning a value (like most good functions should) > - They only have to list their parameters if they're not obvious, or their > types aren't obvious > > I'd say a good jsdoc for this would look something like: > > /** > * Returns an array of task windows defined by the supplied data. > * > * A task window is defined as the range of time from TTI until either > * > * 1. The beginning of the next navigationStart event or > * 2. The end of the trace > * > * @param {!Array.<number>} interactiveTsList > * @param {!Array.<number>} navStartTimeList > * @param {!number} endOfTraces > * @returns {!Array.<Object>} An array of objects of the form: > * {start: startTs, end: endTs} > > Note that the !
before the parameter and return types indicates that passing a > null value isn't allowed, which is true based on the code we have here (we don't > do any null checks) > > A couple of notes for my preferences: > > - As noted above, I'd prefer "ttiTimestamps, navStartTimestamps, > traceEndTimestamps" or something like that to the current variable names > - "endOfTraces" sounds like it might be plural. I'd prefer "traceEndTs" > - Even though they're both arrays of timestamps, "interactiveTsList" and > "navStartTimeList" have different naming schemes. They should definitely be made > consistent ("navStartTsList/interactiveTsList", or follow my suggestion above) Added jsdoc. Changed to traceEndTs. Changed to navStartTsList. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:100: var endOfLastWindow = -Infinity; On 2016/09/22 15:58:49, charliea wrote: > I'd say "undefined" would be a better sentinel value than -Infinity here Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:102: for (var curTTI of interactiveTsList) { On 2016/09/22 15:58:51, charliea wrote: > If I follow this logic correctly, you're basically saying: > > - For each TTI, look at the next curNavStartTime or traceEnd. If either is > before the next TTI, push a task window and move on. > > I think this could be more simply expressed with: > > var lastTaskWindowEndTs = undefined; > for (var curTTI of interactiveTsList) { > while (navStartTimeIndex < navStartTimeList.length && > navStartTimeList[navStartTimeIndex] < curTTI) { > // It's possible to have multiple navigationStart timestamps > // between two interactive timestamps - the previous page load > // could have never reached interactive status. > navStartTimeIndex++; > } > > var taskWindowEndTs = navStartTimeIndex < navStartTimeList.length ? 
> navStartTimeList[navStartTimeIndex] : traceEndTs; > > if (taskWindowEndTs === lastTaskWindowEndTs) { > // Throw your error... > } > > taskWindows.push({start: curTTI, end: taskWindowEndTs}); > } This is much better. Thanks! :) https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:103: var curNavStartTime = navStartTimeList[curNavStartTimeIndex]; On 2016/09/22 15:58:51, charliea wrote: > Note: in some cases, you're intentionally indexing off the end of an array and > expecting it to return undefined. You really shouldn't do this, and should > instead explicitly check the current index against the array length (see my > sample code above). Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:113: // This is a violation of core assumption. On 2016/09/22 15:58:50, charliea (OOO until 10-03-16) wrote: > supernit: when we have a comment and a TODO, we generally put the TODO above the > comment, whereas here's it's the other way around I reworded this entire paragraph and removed the TODO. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:118: throw new Error("Two TTI timestamps with no navigation between them"); On 2016/09/22 15:58:49, charliea wrote: > I think that, generally, when we fail to compute a metric, we return nothing if > we're not able to rather than throwing an error if it's a case we expect to > happen. It's not clear from your comment: do you expect this to happen in > benchmarks? > > Also, if we want this error, we should really reword it to indicate that > something's wrong. As it stands, if you saw this message with no other context, > you might not think it's a problem. 
Something like: > > "Encountered an unexpected two TTIs without a navigationStart between them" I made it return nothing instead of throwing an error and added some comments on how this case can happen. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:122: taskWindows.push({start: curTTI, end: endOfTraces}); On 2016/09/22 15:58:50, charliea wrote: > I think we should be returning ranges here > (https://cs.corp.google.com/github/catapult-project/catapult/tracing/tracing/b...) > rather than a custom JS object for this. Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:134: * Calculate estimated queueing time of a hypothetical input event at On 2016/09/22 15:58:50, charliea wrote: > nit: as suggested above, please start jsdoc with "Returns..." if the function > works primarily through its return value and not side-effects. Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:149: * 50ms duration occurs 10ms before the end of the window, `50` should be in On 2016/09/22 15:58:50, charliea wrote: > nit: `50` doesn't need backticks around it if 40 doesn't on the next line. Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:153: * @see https://goo.gl/AnGtVK On 2016/09/22 15:58:50, charliea wrote: > nit: looks like this doc has moved? Done. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric...
tracing/tracing/metrics/estimated_input_latency_metric.html:155: * @param {!Array<number>} durations - Array of durations of toplevel tasks of On 2016/09/22 15:58:50, charliea (OOO until 10-03-16) wrote: > It seems like a much more natural interface for this function would be to accept > three parameters: > > - {!tr.b.Range} windowOfInterest - Range over which the EIL is being > calculated. > - {!Array.<tr.b.Range>} durations - An array of top-level durations that > intersect the window. > - {!Array.<number>} percentiles - Array of percentiles of interest in > ascending order. > > Then you perform the necessary clipping in this function right at the start > with: > > var clippedDurations = durations.map((duration) => > windowOfInterest.findIntersection(duration)); > > That seems like it gets rid of a lot of the complexity of the clipping stuff, > but keeps everything functionally the same. Or maybe I'm misunderstanding > something? > > As it stands right now, the method contract is kind of weird: some of the stuff > you're requiring to be precomputed, but some of it is computed in the function. That won't do - we only want to clip tasks to the left and not to the right. When a task is clipped on the right, we take its full length but keep track of the portion that's out of the window. The interface you mentioned is about the same as the interface for the getEQTPercentileForWindow function (with an array of durations swapped out for the whole rendererHelper). The calculateEIL/EQTRiskPercentile function is meant to be a function that just does the tricky math so we can test it easily - I'd rather not put task clipping logic in it. I would make this function private, but I had to export it so the test runner could access it. I added some more comments to the function, and moved all the sorting logic into the function to make the input contract simpler. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:158: * we're calculating estimated queuing time. On 2016/09/22 15:58:50, charliea wrote: > is this estimated queuing time of estimated input latency? Did you mean 'or' instead of 'of'? I changed it to estimated input latency. That seemed more appropriate. https://codereview.chromium.org/2323533003/diff/240001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:166: durations, totalTime, percentiles, clippedLength) { On 2016/09/22 15:58:50, charliea (OOO until 10-03-16) wrote: > Looks like clippedLength is optional. If so, please prefix with "opt_", and then > have the next line be: > > var clippedLength = opt_clippedLength || 0; Done.
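The task-window loop suggested earlier in this thread can be written out as a self-contained sketch. Plain `{start, end}` objects stand in for `tr.b.Range` so it runs standalone, and the error/warning path for two TTIs with no navigationStart between them (discussed above) is omitted for brevity.

```javascript
// For each interactive timestamp (TTI), the task window runs until the
// next navigationStart, or to the end of the trace if there is none.
function computeTaskWindows(interactiveTimestamps, navStartTimestamps,
    traceEndTs) {
  const taskWindows = [];
  let navIndex = 0;
  for (const currTTI of interactiveTimestamps) {
    // Skip navigationStarts before this TTI; a page load that never became
    // interactive can leave several navigationStarts between two TTIs.
    while (navIndex < navStartTimestamps.length &&
           navStartTimestamps[navIndex] < currTTI) {
      navIndex++;
    }
    const end = navIndex < navStartTimestamps.length ?
        navStartTimestamps[navIndex] : traceEndTs;
    taskWindows.push({start: currTTI, end});
  }
  return taskWindows;
}

// Example: two page loads; the second window runs to the end of the trace.
const windows = computeTaskWindows([10, 60], [0, 50], 100);
```

Note that the index check against the array length is explicit, rather than relying on out-of-bounds reads returning undefined.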
https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:326: // The largest pid renderer is probably the one we're interested in Hmhh, why not just compute the statistics on all pids instead? Loading folks are also looking into doing the same for their loading metrics. The reason is we have benchmarking cases that involve navigating to pages of different domains, hence it's better to not assume that there is a single pid to be interested in.
https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:326: // The largest pid renderer is probably the one we're interested in On 2016/10/03 19:53:50, nednguyen wrote: > Hmhh, why not just compute the statistics on all pids instead? > > Loading folks are also looking into doing the same for their loading metrics. > The reason is we have benchmarking cases that involve navigating to pages of > different domains, hence it's better to not assume that there is a single pid to > be interested in. Yeah I'd love that. I had a dependency on TTI, which only computed it for the renderer with largest pid. I also don't know how to actually report the values from separate renderers. Should I just lump them into one histogram?
Looking a lot closer! I added some additional comments. I still want to take a look at calculateEILRiskPercentiles, but don't have the time right now. I still need to convince myself that there's not a more natural interface to that function. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:30: function hasCategoryAndName(event, category, title) { Should this be hasTitleAndCategory()? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:63: TOPLEVEL_CATEGORY_NAME); nit: four additional spaces, not two (I used to make this mistake, as this is a divergence from the normal Java indentation rules) https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:70: function getAllInteractiveTimestampsSorted(model) { maybe "getSortedInteractiveTimestamps"? (I would assume "all" unless otherwise specified) https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:78: var interactiveTsList = []; If you really want the type at the end of these variable names, I'd recommend "Array" instead of "List" to match Javascript's actual type name. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:85: return interactiveTsList.sort((x, y) => x - y); nit: isn't this the default sort() behavior? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric...
tracing/tracing/metrics/estimated_input_latency_metric.html:88: function getAllNavStartTimesSorted(rendererHelper) { should be "timestamps" to match "getAllInteractiveTimestampsSorted" above. also, maybe "getSortedNavStartTimestamps"? (I would assume "all" unless otherwise specified) https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:91: if (!hasCategoryAndName(e, 'blink.user_timing', 'navigationStart')) { I think: for (var e of rendererHelper.mainThread.sliceGroup.childEvents()) { if (hasCategoryAndName(e, 'blink.user_timing', 'navigationStart')) { navStartTsList.push(e.start); } } is more comprehensible. Generally, Chrome only uses the: for (var x of arr) { if (!predicate(x)) { continue; } // Do foo and bar. } style if "foo" and "bar" contain a lot more logic in them than what would go inside of the "if" https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:96: return navStartTsList.sort((x, y) => x - y); nit: isn't this the default sort() behavior? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:100: * Returns an array of task windows defined by the supplied interactive s/array/Array Also, I think it might be more accurate to say: "Returns an Array of task windows that start with the supplied interactive timestamps." https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:112: * {start: startTs, end: endTs} TODO fix this TODO fix this? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric...
tracing/tracing/metrics/estimated_input_latency_metric.html:118: for (var curTTI of interactiveTsList) { supernit: curTTI => currTTI (just to be more consistent with the rest of trace viewer) https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:121: // Find the first navigation start event after the interactive timestamp nit: comments end in a period. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:136: console.warn("Encountered two consecutive interactive timestamps " + In line with https://github.com/catapult-project/catapult/blob/master/docs/style-guide.md#..., I'd suggest throwing an error here if you expect this to be an exceptional case. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:177: * @return {!Map<number, number}>} Map of percentile -> EQT at that percentile nit: use "to" instead of "->" https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:181: durations.sort((a, b) => a - b); nit: isn't this the default sort behavior? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:182: percentiles.sort((a, b) => a - b); nit: isn't this the default sort behavior? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:185: nit: no need for second blank line https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:188: // input latency, i.e. CDF(t) = Pr(estimated input latency <= t). 
The CDF I think P() is the accepted probability function name, not Pr() https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:189: // monotonically increases from 0 to 1 as we process the task durations in nit: "monotonically" increases is redundant. just "increases" should work https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:196: var curCDF = freeTime / totalTime; supernit (like above): curCDF => currCDF https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:201: // If curCDF already crossed a percentile value that means estimated input This isn't intuitive, at least for me. Could you elaborate? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:229: // All the remaining tasks has a chunk of at least width `durationDelta` s/has/have https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:273: * @return {!Map<number, number}>} Map of percentile -> EQT at that percentile s/->/to https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:321: * @param {!Object=} opt_options Does it make sense to document what options are available here? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:325: tr.model.helpers.ChromeModelHelper); nit: four spaces, not two https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:334: interactiveTimestamps, nit: four spaces, not two https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:344: tr.v.HistogramBinBoundaries.createExponential( nit: most of the other metrics create this exponential as a constant above, rather than creating each of the boundaries as constants and creating the exponential inline. see: https://cs.corp.google.com/github/catapult-project/catapult/tracing/tracing/m...
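The CDF walk that the review comments above describe (CDF(t) = Probability(estimated input latency <= t), starting at freeTime/totalTime and rising as sorted task durations are consumed) can be illustrated with a simplified, runnable sketch. This is a hedged stand-in, not the CL's code: `eqtAtPercentile` is a hypothetical name, it handles a single percentile, and it ignores the clipped-task bookkeeping that calculateEILRiskPercentiles does.

```javascript
// For an input arriving uniformly at random in a window of length
// totalTime containing non-overlapping top-level tasks of the given
// durations, P(EQT <= t) = (freeTime + sum over tasks of min(d, t)) /
// totalTime. This function inverts that CDF at one percentile.
function eqtAtPercentile(durations, totalTime, percentile) {
  const sorted = durations.slice().sort((a, b) => a - b);
  const busyTime = sorted.reduce((sum, d) => sum + d, 0);
  let cdf = (totalTime - busyTime) / totalTime;  // P(EQT <= 0): free time.
  if (percentile <= cdf) return 0;
  let t = 0;                      // Current queueing-time threshold.
  let remaining = sorted.length;  // Number of tasks with duration > t.
  for (const duration of sorted) {
    // Between t and this duration, the CDF rises linearly with slope
    // remaining / totalTime (each still-longer task contributes dt).
    const nextCdf = cdf + (duration - t) * remaining / totalTime;
    if (percentile <= nextCdf) {
      return t + (percentile - cdf) * totalTime / remaining;
    }
    cdf = nextCdf;
    t = duration;
    remaining--;
  }
  return t;
}

// A 50ms task in a 100ms window: half the window has EQT 0, and the
// 75th-percentile EQT lands halfway up the task's 0..50ms ramp.
const p75 = eqtAtPercentile([50], 100, 0.75);
```

Pulling the sort inside the function keeps the input contract simple, matching the cleanup described earlier in the thread.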
https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:326: // The largest pid renderer is probably the one we're interested in On 2016/10/03 20:04:44, dproy1 wrote: > On 2016/10/03 19:53:50, nednguyen wrote: > > Hmhh, why not just compute the statistics on all pids instead? > > > > Loading folks are also looking into doing the same for their loading metrics. > > The reason is we have benchmarking cases that involve navigating to pages of > > different domains, hence it's better to not assume that there is a single pid > to > > be interested in. > > Yeah I'd love that. I had a dependency on TTI, which only computed it for the > renderer with largest pid. I also don't know how to actually report the values > from separate renderers. Should I just lump them into one histogram? yeah, just lump all the number from different renderer processes to one histogram. I don't have immediate reason why that would not work. Tim, wdyt?
On 2016/10/03 20:08:47, nednguyen wrote: > https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... > File tracing/tracing/metrics/estimated_input_latency_metric.html (right): > > https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... > tracing/tracing/metrics/estimated_input_latency_metric.html:326: // The largest > pid renderer is probably the one we're interested in > On 2016/10/03 20:04:44, dproy1 wrote: > > On 2016/10/03 19:53:50, nednguyen wrote: > > > Hmhh, why not just compute the statistics on all pids instead? > > > > > > Loading folks are also looking into doing the same for their loading > metrics. > > > The reason is we have benchmarking cases that involve navigating to pages of > > > different domains, hence it's better to not assume that there is a single > pid > > to > > > be interested in. > > > > Yeah I'd love that. I had a dependency on TTI, which only computed it for the > > renderer with largest pid. I also don't know how to actually report the values > > from separate renderers. Should I just lump them into one histogram? > > yeah, just lump all the number from different renderer processes to one > histogram. I don't have immediate reason why that would not work. > > Tim, wdyt? That means that if I have two renderer processes, one super busy, and one super quiet, the quiet one would respond fast, but report high EQT, correct?
On 2016/10/03 20:13:16, tdresser wrote: > On 2016/10/03 20:08:47, nednguyen wrote: > > > https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... > > File tracing/tracing/metrics/estimated_input_latency_metric.html (right): > > > > > https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... > > tracing/tracing/metrics/estimated_input_latency_metric.html:326: // The > largest > > pid renderer is probably the one we're interested in > > On 2016/10/03 20:04:44, dproy1 wrote: > > > On 2016/10/03 19:53:50, nednguyen wrote: > > > > Hmhh, why not just compute the statistics on all pids instead? > > > > > > > > Loading folks are also looking into doing the same for their loading > > metrics. > > > > The reason is we have benchmarking cases that involve navigating to pages > of > > > > different domains, hence it's better to not assume that there is a single > > pid > > > to > > > > be interested in. > > > > > > Yeah I'd love that. I had a dependency on TTI, which only computed it for > the > > > renderer with largest pid. I also don't know how to actually report the > values > > > from separate renderers. Should I just lump them into one histogram? > > > > yeah, just lump all the number from different renderer processes to one > > histogram. I don't have immediate reason why that would not work. > > > > Tim, wdyt? > > That means that if I have two renderer processes, one super busy, and one super > quiet, the quiet one would respond fast, but report high EQT, correct? What I have in mind is we don't combine the EQT computation across processes. Instead, each renderer will output an EQT number, then we add all of them to the histogram. Within the histogram, one can track the 95th percentile & 5th percentile statistics to track the busy process's EQT & quiet process's EQT separately.
SGTM
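The plan just agreed on (one EQT sample per renderer process, all added to a single histogram, with percentile statistics separating busy from quiet renderers) might look like the following sketch. A plain array plus a nearest-rank percentile helper stands in for catapult's tr.v.Histogram, and the sample values are illustrative.

```javascript
// Nearest-rank percentile over raw samples; a stand-in for reading
// percentile statistics off a tr.v.Histogram.
function percentileOf(samples, fraction) {
  const sorted = samples.slice().sort((a, b) => a - b);
  const index = Math.min(sorted.length - 1,
      Math.floor(fraction * sorted.length));
  return sorted[index];
}

// One EQT value per renderer process (illustrative numbers, in ms):
// two quiet renderers and one busy one.
const eqtPerRenderer = [2, 3, 250];
const busyEqt = percentileOf(eqtPerRenderer, 0.95);   // Busy renderer.
const quietEqt = percentileOf(eqtPerRenderer, 0.05);  // Quiet renderer.
```

This is why lumping works: a quiet renderer's low EQT survives in the low percentiles even when a busy renderer dominates the high ones.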
Sorry I said I would split the CL but since you commented here it was easier to fix it here. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... File tracing/tracing/metrics/estimated_input_latency_metric.html (right): https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:30: function hasCategoryAndName(event, category, title) { On 2016/10/03 20:05:48, charliea (OOO until 10-03-16) wrote: > Should this be hasTitleAndCategory()? Absolutely https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:63: TOPLEVEL_CATEGORY_NAME); On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > nit: four additional spaces, not two > > (I used to make this mistake, as this is a divergence from the normal Java > indentation rules) Yeah my emacs is set up to put two spaces for line continuations. I haven't had the chance to fix it yet. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:70: function getAllInteractiveTimestampsSorted(model) { On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > maybe "getSortedInteractiveTimestamps"? (I would assume "all" unless other > specified") Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:78: var interactiveTsList = []; On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > If you really want the type at the end of these variable names, I'd recommend > "Array" instead of "List" to match Javascript's actual type name. Ew. No I think I'll go with plurals. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:85: return interactiveTsList.sort((x, y) => x - y); On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > nit: isn't this the default sort() behavior? By default, JS sort() converts elements to strings and compares them lexicographically. That's slower, and wrong for numbers in general (e.g. [2, 10] sorts to [10, 2]), not just when there are negative numbers. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:88: function getAllNavStartTimesSorted(rendererHelper) { On 2016/10/03 20:05:50, charliea (OOO until 10-03-16) wrote: > should be "timestamps" to match "getAllInteractiveTimestampsSorted" above. > > also, maybe "getSortedNavStartTimestamps"? (I would assume "all" unless otherwise > specified) Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:91: if (!hasCategoryAndName(e, 'blink.user_timing', 'navigationStart')) { On 2016/10/03 20:05:50, charliea (OOO until 10-03-16) wrote: > I think: > > for (var e of rendererHelper.mainThread.sliceGroup.childEvents()) { > if (hasCategoryAndName(e, 'blink.user_timing', 'navigationStart')) { > navStartTsList.push(e.start); > } > } > > is more comprehensible. Generally, Chrome only uses the: > > for (var x of arr) { > if (!predicate(x)) { > continue; > } > > // Do foo and bar. > } > > style if "foo" and "bar" contain a lot more logic in them than what would go > inside of the "if" Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:100: * Returns an array of task windows defined by the supplied interactive On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > s/array/Array > > Also, I think it might be more accurate to say: > > "Returns an Array of task windows that start with the supplied interactive > timestamps." Done.
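The default-sort pitfall discussed here is easy to demonstrate: without a comparator, Array.prototype.sort compares elements as strings, so multi-digit numbers (not only negative ones) get misordered.

```javascript
// Default sort() compares elements as strings: '-' < '1' < '9', and
// '100' < '9' lexicographically, so numeric order is lost.
const timestamps = [100, 9, -5];
const stringOrder = timestamps.slice().sort();
const numericOrder = timestamps.slice().sort((a, b) => a - b);
```

Hence the explicit `(x, y) => x - y` comparator on the sorted-timestamp helpers in the CL.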
https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:112: * {start: startTs, end: endTs} TODO fix this On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > TODO fix this? Sorry, this slipped through. I meant to change this to range. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:118: for (var curTTI of interactiveTsList) { On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > supernit: curTTI => currTTI (just to be more consistent with the rest of trace > viewer) Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:121: // Find the first navigation start event after the interactive timestamp On 2016/10/03 20:05:50, charliea (OOO until 10-03-16) wrote: > nit: comments end in a period. Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:136: console.warn("Encountered two consecutive interactive timestamps " + On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > In line with > https://github.com/catapult-project/catapult/blob/master/docs/style-guide.md#..., > I'd suggest throwing an error here if you expect this to be an exceptional case. I'm torn on this one. This case is not exceptional in the sense of being infrequent - I expect it to happen all the time. But when it does happen there's no way to make sense of estimated input latency, and there is no way to fix the problem short of redesigning the Chrome renderer scheduler and giving each page its own renderer scheduler. Would a loud error be too inconvenient / crash trace viewer/telemetry? https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... 
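The task-window construction discussed above can be sketched as follows. This is my own simplified illustration, not the CL's code: it assumes each window runs from an interactive timestamp to the next navigationStart, or to the trace end if none follows, and it omits the consecutive-interactive-timestamps warning case for brevity. All names are illustrative:

```javascript
// Build task windows that start at each time-to-interactive timestamp
// and end at the next navigationStart (or the trace end). Both input
// arrays are assumed to be sorted in ascending order.
function getTaskWindows(interactiveTimestamps, navStartTimestamps, traceEnd) {
  var windows = [];
  for (var tti of interactiveTimestamps) {
    // First navigationStart strictly after this interactive timestamp.
    var nextNavStart = navStartTimestamps.find(ts => ts > tti);
    windows.push({
      start: tti,
      end: nextNavStart === undefined ? traceEnd : nextNavStart
    });
  }
  return windows;
}

// Two page loads: interactive at 100ms, next navigation at 700ms,
// interactive again at 900ms, trace ends at 2000ms.
var windows = getTaskWindows([100, 900], [50, 700], 2000);
// → [{start: 100, end: 700}, {start: 900, end: 2000}]
```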
tracing/tracing/metrics/estimated_input_latency_metric.html:177: * @return {!Map<number, number}>} Map of percentile -> EQT at that percentile On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > nit: use "to" instead of "->" Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:185: On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > nit: no need for second blank line Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:188: // input latency, i.e. CDF(t) = Pr(estimated input latency <= t). The CDF On 2016/10/03 20:05:48, charliea (OOO until 10-03-16) wrote: > I think P() is the accepted probability function name, not Pr() P looks weird to me. I changed it to spell out Probability. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:189: // monotonically increases from 0 to 1 as we process the task durations in On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > nit: "monotonically" increases is redundant. just "increases" should work Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... 
tracing/tracing/metrics/estimated_input_latency_metric.html:196: var curCDF = freeTime / totalTime; On 2016/10/03 20:05:50, charliea (OOO until 10-03-16) wrote: > supernit (like above): curCDF => currCDF Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:201: // If curCDF already crossed a percentile value that means estimated input On 2016/10/03 20:05:50, charliea (OOO until 10-03-16) wrote: > This isn't intuitive, at least for me. Could you elaborate? Ah really? I'm not sure how to explain this more without a whiteboard - consider the case of a taskWindow which is half filled with tasks (for concreteness, say 100 ms window with a 50ms task at the beginning.) If I'm calculating the 10th percentile, I'll first process the free time and the cdf will already be at 0.50. That means the 10th percentile of input latency is 0. This makes sense because 10th percentile of EQT means I'm asking what's the duration d such that Probability(EQT <= d) = .10. Well half of the window is empty, which means Probability(EQT <= d) = .50 for d = 0, and as you increase the value of d the CDF only goes up. More intuitively, 50% of the time EQT will be 0 since 50% of the window is empty. Therefore for all percentiles below 50, EQT is also 0. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:229: // All the remaining tasks has a chunk of at least width `durationDelta` On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > s/has/have Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:273: * @return {!Map<number, number}>} Map of percentile -> EQT at that percentile On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > s/->/to Done. 
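The half-filled-window example above can be worked through in code. This is my own simplified illustration for the special case of a single task at the start of the window, not the metric's actual implementation; the function name and signature are hypothetical:

```javascript
// Percentile of estimated queueing time (EQT) for a single task of
// duration `taskDur` ms at the start of a window of `windowDur` ms.
// An input arriving at a uniformly random time waits for the remainder
// of the task it lands in, or 0 if it lands in free time.
function singleTaskEqtPercentile(windowDur, taskDur, percentile) {
  var freeFraction = (windowDur - taskDur) / windowDur;
  // CDF(d) = Probability(EQT <= d) = (freeTime + d) / windowDur for
  // 0 <= d <= taskDur. If the free time alone already puts the CDF at
  // or above the requested percentile, the answer is 0.
  if (freeFraction >= percentile) return 0;
  return percentile * windowDur - (windowDur - taskDur);
}

// 100ms window, 50ms task: half the window is idle, so every percentile
// at or below the 50th is 0, while the 90th percentile is 40ms
// (0.90 = (50 + 40) / 100).
singleTaskEqtPercentile(100, 50, 0.10);  // 0
singleTaskEqtPercentile(100, 50, 0.90);  // 40
```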
https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:321: * @param {!Object=} opt_options On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > Does it make sense to document what options are available here? There are no options. Since this is the second time you're asking about this, I'm guessing the existence of this argument is needlessly confusing. I'm deleting it. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:325: tr.model.helpers.ChromeModelHelper); On 2016/10/03 20:05:50, charliea (OOO until 10-03-16) wrote: > nit: four spaces, not two Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:334: interactiveTimestamps, On 2016/10/03 20:05:49, charliea (OOO until 10-03-16) wrote: > nit: four spaces, not two Done. https://codereview.chromium.org/2323533003/diff/280001/tracing/tracing/metric... tracing/tracing/metrics/estimated_input_latency_metric.html:344: tr.v.HistogramBinBoundaries.createExponential( On 2016/10/03 20:05:48, charliea (OOO until 10-03-16) wrote: > nit: most of the other metrics create this exponential as a constant above, > rather than creating each of the boundaries as constants and creating the > exponential inline. see: > https://cs.corp.google.com/github/catapult-project/catapult/tracing/tracing/m... Done.
(Removing myself if this is being split: happy to review the split CLs, or add me back to this one if it's still relevant)
benjhayden@chromium.org changed reviewers: - benjhayden@chromium.org
charliea@chromium.org changed reviewers: - charliea@chromium.org