Created: 5 years, 9 months ago by benjhayden
Modified: 5 years, 8 months ago
CC: chromium-reviews, telemetry-reviews_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description: Simplify LayoutMetric.
Report durations of performLayout directly without doing any analysis.
Ignore other events.
Use interactions.
A "load" event may be implemented in another cl, possibly using https://code.google.com/p/chromium/issues/detail?id=345845
BUG=460208
Committed: https://crrev.com/a108aa74312c934fadf3f88cfe9674ba7231271d
Cr-Commit-Position: refs/heads/master@{#323984}
Patch Set 1 #
Total comments: 2
Patch Set 2 : . #
Total comments: 2
Patch Set 3 : . #
Total comments: 6
Patch Set 4 : . #
Total comments: 16
Patch Set 5 : comments #
Total comments: 4
Patch Set 6 : . #
Patch Set 7 : . #
Patch Set 8 : . #
Patch Set 9 : . #
Patch Set 10 : . #
Total comments: 6
Patch Set 11 : . #
Patch Set 12 : . #
Total comments: 3
Patch Set 13 : . #
Total comments: 5
Patch Set 14 : . #
Total comments: 2
Patch Set 15 : . #
Total comments: 7
Patch Set 16 : . #
Patch Set 17 : . #
Total comments: 1
Patch Set 18 : . #
Patch Set 19 : . #
Patch Set 20 : . #
Messages
Total messages: 71 (18 generated)
benjhayden@chromium.org changed reviewers: + nednguyen@google.com
Don't you also need to update layout_unittest for this change? https://codereview.chromium.org/979243003/diff/1/tools/telemetry/telemetry/we... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/1/tools/telemetry/telemetry/we... tools/telemetry/telemetry/web_perf/metrics/layout.py:23: events = [event for event in renderer_thread.parent.IterAllSlices() s/events/layout_events
https://codereview.chromium.org/979243003/diff/1/tools/telemetry/telemetry/we... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/1/tools/telemetry/telemetry/we... tools/telemetry/telemetry/web_perf/metrics/layout.py:23: events = [event for event in renderer_thread.parent.IterAllSlices() On 2015/03/09 17:14:10, nednguyen wrote: > s/events/layout_events Done.
https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... tools/telemetry/telemetry/web_perf/metrics/layout.py:23: self._AddResults(renderer_thread.parent.IterAllSlices(), results) What happens if there are multiple renderer threads? We think it's actually a bug that the signature of AddResults takes in a renderer_thread instance. We will remove it soon.
https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... tools/telemetry/telemetry/web_perf/metrics/layout.py:23: self._AddResults(renderer_thread.parent.IterAllSlices(), results) On 2015/03/09 17:53:41, nednguyen wrote: > What happen if there are multiple renderer_thread? > > We think that's actually a bug that the signature of AddResults take in a > renderer_thread instance. We will remove it soon. I'm not sure what I should do if there are multiple renderer threads. Aggregate events from all of them? Look for the main renderer thread only? What would you suggest?
On 2015/03/09 17:57:35, benjhayden_chromium wrote: > https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... > File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): > > https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... > tools/telemetry/telemetry/web_perf/metrics/layout.py:23: > self._AddResults(renderer_thread.parent.IterAllSlices(), results) > On 2015/03/09 17:53:41, nednguyen wrote: > > What happen if there are multiple renderer_thread? > > > > We think that's actually a bug that the signature of AddResults take in a > > renderer_thread instance. We will remove it soon. > > I'm not sure what I should do if there are multiple renderer threads. Aggregate > events from all of them? Look for the main renderer thread only? What would you > suggest? If we have a multi tab test case, how would you compute the layout metrics?
On 2015/03/09 at 18:21:45, nednguyen wrote: > On 2015/03/09 17:57:35, benjhayden_chromium wrote: > > https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... > > File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): > > > > https://codereview.chromium.org/979243003/diff/40001/tools/telemetry/telemetr... > > tools/telemetry/telemetry/web_perf/metrics/layout.py:23: > > self._AddResults(renderer_thread.parent.IterAllSlices(), results) > > On 2015/03/09 17:53:41, nednguyen wrote: > > > What happen if there are multiple renderer_thread? > > > > > > We think that's actually a bug that the signature of AddResults take in a > > > renderer_thread instance. We will remove it soon. > > > > I'm not sure what I should do if there are multiple renderer threads. Aggregate > > events from all of them? Look for the main renderer thread only? What would you > > suggest? > > If we have a multi tab test case, how would you compute the layout metrics? Sorry, this is completely new to me. I'm not aware of anybody planning on adding a multi-tab test case to any page sets in blink_perf or thread_times benchmarks. I'm not aware of what kinds of metrics anybody would want for such a test case. I can imagine a few ways of handling multi-tab cases, and I can imagine problems with them all. LayoutMetrics just exposes the layout times directly. If per-tab-ness is important for the authors of such test cases, then I could add a tab id to the result value name like "load_layout_0", "load_layout_1", etc. But I don't know what systems might break if some pages have "load_layout" but others have "load_layout_0", or if some have more tabs than others. Or maybe I could return a list of lists of durations like load_layout=[[10,1,1],[20,3],[15,2,2,3,2,1,2]], but again, I don't know what systems might not be able to handle nested lists. 
If per-tab-ness is not important, then I could just flatten the list of layout durations so that the results would look like they look now after this patch, but I don't know if that would make reading the results or debugging regressions difficult somehow, e.g. if tab order is stable or unimportant. So how would other metrics handle multi-tab cases?
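The two per-tab reporting options weighed above can be sketched roughly as follows. This is a hypothetical illustration only: the data shape, the `load_layout_N` naming, and the flattening are the options discussed in this message, not real Telemetry API.

```python
# Hypothetical per-tab layout durations (ms), one inner list per tab,
# matching the load_layout=[[10,1,1],[20,3],[15,2,2,3,2,1,2]] example above.
per_tab_durations = [[10, 1, 1], [20, 3], [15, 2, 2, 3, 2, 1, 2]]

# Option A: keep per-tab grouping by suffixing a tab index onto the
# value name, e.g. "load_layout_0", "load_layout_1", ...
per_tab_values = {
    'load_layout_%d' % i: durations
    for i, durations in enumerate(per_tab_durations)
}

# Option B: flatten all tabs into one "load_layout" list, losing the
# per-tab grouping but keeping a stable result shape across pages.
flattened = [d for tab in per_tab_durations for d in tab]
```

Option B produces the same shape as a single-tab run, which is why it is the one that matches how the results look after this patch.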
nednguyen@google.com changed reviewers: + nduca@chromium.org, skyostil@google.com
I think maybe the test case needs to specify interaction records properly so that there is only a single renderer thread per time range of the interaction records. +Nat, Sammi: thoughts?
On 2015/03/09 19:43:09, nednguyen wrote: > I think maybe the test case needs to specify interaction records properly so that > there is only a single renderer thread per time range of the interaction records. > > +Nat, Sammi: thoughts? Sorry, I think we shouldn't block this change while the direction is not clear yet. lgtm
Patchset #4 (id:60001) has been deleted
Patchset #2 (id:20001) has been deleted
Patchset #3 (id:80001) has been deleted
benjhayden@chromium.org changed reviewers: - nduca@chromium.org
benjhayden@chromium.org changed reviewers: - skyostil@google.com
Patchset #3 (id:100001) has been deleted
Patchset #3 (id:120001) has been deleted
PTAL, I changed it to use the first interaction record instead of requiring a trace event argument.
On 2015/03/19 22:41:03, benjhayden_chromium wrote: > PTAL, I changed it to use the first interaction record instead of requiring a > trace event argument. Do we have any existing benchmarks that use layout yet? I think this will affect their metrics, right?
On 2015/03/19 at 22:47:07, nednguyen wrote: > On 2015/03/19 22:41:03, benjhayden_chromium wrote: > > PTAL, I changed it to use the first interaction record instead of requiring a > > trace event argument. > > Do we have any existing benchmarks that use layout yet? I think this will affect their metrics, right? thread_times is currently the only user.
https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:10: def FirstInteraction(interactions): Instead of using the first interaction, the way we often handle multiple interactions is to join the metrics from them. For example: if the metric value is a list, we append the lists together; if the metric value is a max, we take max() of the metric across all the interaction records... https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:36: if event.start < first_interaction: Instead of this, the convenient method to use is: if event.GetBounds().Intersect(interaction.GetBounds()) You can add a GetBounds() method to event. https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:37: preload.append(event.end - event.start) We can join the preload values from multiple interaction records.
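The suggested GetBounds()/Intersect check could look roughly like the sketch below. `Bounds`, `Event`, and the attribute names here are stand-ins, since the real Telemetry timeline classes may differ; only the overlap test itself is the point.

```python
import collections

# Stand-ins for Telemetry's timeline objects; names are assumptions.
Bounds = collections.namedtuple('Bounds', ['start', 'end'])
Event = collections.namedtuple('Event', ['start', 'end'])

def Intersects(a, b):
  # Two time ranges overlap exactly when each starts before the other ends.
  return a.start < b.end and b.start < a.end

def EventsOverlapping(events, interaction_bounds):
  # Keep only the events whose bounds intersect the interaction's bounds.
  return [e for e in events
          if Intersects(Bounds(e.start, e.end), interaction_bounds)]
```

Note this intersection test counts an event that merely overlaps the interaction's range, which differs from a start-time-only check.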
https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:10: def FirstInteraction(interactions): On 2015/03/19 23:02:21, nednguyen wrote: > Instead of using the first interaction, the way we often handling the multiple > interactions is join the metrics from them. > For example: > if metric value is a list, we append multiple lists together. > if metric value is max, we take max() of all metric from the interaction > records... I understand that that is what is often done. I don't understand why. Layout events could happen in between interactions, and I don't understand why they should be dropped on the floor. They happened, so they might be interesting and/or significant. Also, dropping them on the floor might increase flakiness if they sometimes overlap interaction records and sometimes don't overlap. https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:37: preload.append(event.end - event.start) On 2015/03/19 23:02:21, nednguyen wrote: > We can join the the preload value from multiple interaction records. Sorry, I don't understand this. There is at most one preload and at most one postload collected per trace.
https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/140001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:10: def FirstInteraction(interactions): On 2015/03/19 23:41:28, benjhayden_chromium wrote: > On 2015/03/19 23:02:21, nednguyen wrote: > > Instead of using the first interaction, the way we often handling the multiple > > interactions is join the metrics from them. > > For example: > > if metric value is a list, we append multiple lists together. > > if metric value is max, we take max() of all metric from the interaction > > records... The reason we have these interaction records is to reduce noise. For example: + A user makes a scroll, then waits, then clicks on a button that triggers an animation. These timeline interaction records provide a flexible way to make sure that we don't take the idle time into the calculation. + In a power benchmark, we have cases where the browser does things that increase the noise a lot. We use the interaction record to exclude the time range when that happens. > > I understand that that is what is often done. > I don't understand why. > Layout events could happen in between interactions, and I don't understand why > they should be dropped on the floor. They happened, so they might be interesting > and/or significant. I am working on creating a "cover-all" interaction record. https://code.google.com/p/chromium/issues/detail?id=460206 > Also, dropping them on the floor might increase flakiness if > they sometimes overlap interaction records and sometimes don't overlap. To fix the flakiness in this case, we would need to find a better interaction range, i.e. either narrowing or widening the range. Or you can also remove the interaction either in the page itself, or use ValueCanBeAddedPredicate(..) to filter out metrics from interaction ranges that you don't want to see, in case you reuse the pages from elsewhere.
PTAL
Ping?
On 2015/03/26 21:15:27, benjhayden_chromium wrote: > Ping? Sorry, I am participating in the trace-viewer hackathon this week, so I am a little slow on code review.
https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:33: load_layouts.append(event.end - event.start) What does this mean? Why does the load_layouts list contain the event's duration if it intersects with the first interaction? What about other interaction records? https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:37: animations.append(event.end - event.start) Why does this animations list contain repeating data? https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:41: results.current_page, 'load_layout', 'ms', load_layouts)) Please add a description field to describe what these metrics are about and how you obtain them. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py (right): https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:44: FakeInteraction._make((10, 20)), FakeInteraction(10, 20) https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:47: results = RunLayoutMetric(events, interactions) Can you please add some comments explaining why the test results are this way? You can follow the examples here: https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te...
https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:33: load_layouts.append(event.end - event.start) On 2015/03/27 21:08:10, nednguyen wrote: > What does this mean? This line means "Append the duration of the current layout event to the load_layouts list." The previous line means "The current layout event is a 'load layout' if either there are no interaction records or if the layout event starts before the first interaction record." > Why does the load_layouts list contain event's duration if it intersects with the first interaction? If the current layout event started before the first interaction record, then the first interaction record could not possibly have caused the layout event, regardless of whether the layout event ends after the first interaction starts. If the layout was not caused by an interaction, then it must be part of loading the page. > What about other interaction records? Other interaction records are handled 2-5 lines later. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:37: animations.append(event.end - event.start) On 2015/03/27 21:08:10, nednguyen wrote: > why does this animations list contain repeating data? I was under the impression that interaction records cannot overlap. Can interaction records overlap? If interaction records cannot overlap, then no layout event can start during multiple interaction records, so no layout event will be appended to the animations list more than once. Either way, it wouldn't hurt to add a break. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... 
tools/telemetry/telemetry/web_perf/metrics/layout.py:41: results.current_page, 'load_layout', 'ms', load_layouts)) On 2015/03/27 21:08:10, nednguyen wrote: > Please add description field to describe what these metrics are about and how do > you obtain them. Done. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py (right): https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:44: FakeInteraction._make((10, 20)), On 2015/03/27 at 21:08:10, nednguyen wrote: > FakeInteraction(10, 20) Done. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:47: results = RunLayoutMetric(events, interactions) On 2015/03/27 at 21:08:10, nednguyen wrote: > Can you please add some comment explain why the test results are this way? > > You can follow the examples here: > https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... Done.
https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:33: load_layouts.append(event.end - event.start) On 2015/03/27 21:59:10, benjhayden_chromium wrote: > On 2015/03/27 21:08:10, nednguyen wrote: > > What does this mean? > > This line means "Append the duration of the current layout event to the > load_layouts list." > The previous line means "The current layout event is a 'load layout' if either > there are no interaction records or if the layout event starts before the first > interaction record." I think you should infer the load layout with a different approach. There is no guarantee about when the interaction records will happen. > > > Why does the load_layouts list contain event's duration if it intersects with > the first interaction? > > If the current layout event started before the first interaction record, then > the first interaction record could not possibly have caused the layout event, > regardless of whether the layout event ends after the first interaction starts. Why is there a causal relationship between the interaction & layout events? > If the layout was not caused by an interaction, then it must be part of loading > the page. Why? > > > What about other interaction records? > > Other interaction records are handled 2-5 lines later. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:37: animations.append(event.end - event.start) On 2015/03/27 21:59:10, benjhayden_chromium wrote: > On 2015/03/27 21:08:10, nednguyen wrote: > > why does this animations list contain repeating data? > > I was under the impression that interaction records cannot overlap. > > Can interaction records overlap? 
> > If interaction records cannot overlap, then no layout event can start during > multiple interaction records, so no layout event will be appended to the > animations list more than once. > > Either way, it wouldn't hurt to add a break. You can use a check to make sure that interactions don't overlap: https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... Also, what happens in these cases? 1) [ interaction ] [ Layout event ] 2) [ interaction ] [ Layout event ] I think you'd be better off using the interaction_record.GetOverlappedThreadTimeForSlice method.
https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:33: load_layouts.append(event.end - event.start) On 2015/03/27 23:22:37, nednguyen wrote: > On 2015/03/27 21:59:10, benjhayden_chromium wrote: > > On 2015/03/27 21:08:10, nednguyen wrote: > > > What does this mean? > > > > This line means "Append the duration of the current layout event to the > > load_layouts list." > > The previous line means "The current layout event is a 'load layout' if either > > there are no interaction records or if the layout event starts before the > first > > interaction record." > > I think you should infer the load layout with a different approach. There is no > guarantee about when the interaction records will happen. What approach should I use instead? > > > > > > Why does the load_layouts list contain event's duration if it intersects > with > > the first interaction? > > > > If the current layout event started before the first interaction record, then > > the first interaction record could not possibly have caused the layout event, > > regardless of whether the layout event ends after the first interaction > starts. > > Why is there a causal relationship between the interaction & layout events? > > > If the layout was not caused by an interaction, then it must be part of > loading > > the page. > > Why? Layouts are not spontaneous, they always have a cause. Chrome does not re-layout when it doesn't need to. Some interactions can cause layouts, either synchronously or asynchronously, using javascript. This article describes how modifying some style properties can cause layout. 
http://www.html5rocks.com/en/tutorials/speed/high-performance-animations/ If a layout was not triggered indirectly by an interaction, then it must have been caused by loading the page itself. There are no other causes for layout that I know of besides loading and interaction javascript. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:37: animations.append(event.end - event.start) On 2015/03/27 23:22:37, nednguyen wrote: > On 2015/03/27 21:59:10, benjhayden_chromium wrote: > > On 2015/03/27 21:08:10, nednguyen wrote: > > > why does this animations list contain repeating data? > > > > I was under the impression that interaction records cannot overlap. > > > > Can interaction records overlap? > > > > If interaction records cannot overlap, then no layout event can start during > > multiple interaction records, so no layout event will be appended to the > > animations list more than once. > > > > Either way, it wouldn't hurt to add a break. > > You can use a check to make sure that interaction don't overlap: > https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... > > Also what happen for these cases? > > 1) > [ interaction ] > > [ Layout event ] The layout starts during the interaction, so it is assumed that the interaction caused the layout. It doesn't matter that the interaction didn't wait for the layout to finish. It is technically possible that the layout was not caused by the interaction, and that the layout would have happened if the interaction had not happened. However, in that case, then either the RunPageInteractions method ought to be modified to wait until loading is finished before issuing the interaction, or else the page should be modified to only run animations during the interaction. > > 2) > [ interaction ] > > [ Layout event ] The layout starts before the interaction, so the interaction can not have caused the layout. 
It doesn't matter that the interaction didn't wait to start until after the layout ended. It might be technically feasible to make interactions roughly align with frames using requestAnimationFrame, but it seems like it would be difficult and unnecessary. > > I think you'd be better using > interaction_record.GetOverlappedThreadTimeForSlice method. That method looks like it would be perfect for computing how much of an interaction is spent in a particular kind of trace event, or how much of a trace event overlaps with an interaction. However, as far as I know, we are not interested in how much of the interaction is spent doing layout. We are interested in how long each layout takes and whether it was caused by an interaction. Layout is not continuous, it is atomic. Everything is laid out and then it's done until the layout tree is damaged again. I don't see how it makes sense to talk about fractions of a layout.
https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:33: load_layouts.append(event.end - event.start) On 2015/03/30 20:15:02, benjhayden_chromium wrote: > On 2015/03/27 23:22:37, nednguyen wrote: > > On 2015/03/27 21:59:10, benjhayden_chromium wrote: > > > On 2015/03/27 21:08:10, nednguyen wrote: > > > > What does this mean? > > > > > > This line means "Append the duration of the current layout event to the > > > load_layouts list." > > > The previous line means "The current layout event is a 'load layout' if > either > > > there are no interaction records or if the layout event starts before the > > first > > > interaction record." > > > > I think you should infer the load layout with a different approach. There is > no > > guarantee about when the interaction records will happen. > > What approach should I use instead? In your patch #1, I see this logic: results.AddValue(list_of_scalar_values.ListOfScalarValues( results.current_page, 'load_layout', 'ms', [e.end - e.start for e in events if not e.args['loaded']])) If I interpret this correctly, it means an event e with name 'FrameView::performLayout' is a load_layout if e.args['loaded'] is True. Can we use that logic here? > > > > > > > > > > Why does the load_layouts list contain event's duration if it intersects > > with > > > the first interaction? > > > > > > If the current layout event started before the first interaction record, > then > > > the first interaction record could not possibly have caused the layout > event, > > > regardless of whether the layout event ends after the first interaction > > starts. > > > > Why is there a causal relationship between the interaction & layout events? 
> > > > > If the layout was not caused by an interaction, then it must be part of > > loading > > > the page. > > > > Why? > > Layouts are not spontaneous, they always have a cause. Chrome does not re-layout > when it doesn't need to. > Some interactions can cause layouts, either synchronously or asynchronously, > using javascript. > This article describes how modifying some style properties can cause layout. > http://www.html5rocks.com/en/tutorials/speed/high-performance-animations/ > > If a layout was not triggered indirectly by an interaction, then it must have > been caused by loading the page itself. > There are no other causes for layout that I know of besides loading and > interaction javascript. Thanks for explaining this. I think the misunderstanding here is caused by the fact that there is no guaranteed the first interaction record in |interactions| is also the first interaction of the page. Hence we cannot say if a layout event happens in the first interaction record, it's a load layout. Let me explain this in more details: TimelineBasedMeasurement will group all the interaction records ever created in the pages into groups of interaction with same names. i.e: let says that these interaction records are created (ordered by time) [ A ] [ B ] [ C ] [ A ] [ C ] [ B ] [ B ] ('A', 'B', 'C' are interaction records name). then 3 groups of interaction records are: g1 = ([A], [A]), g2 =([B], [B], [B]), g3([C], [C]) TBM then will call: layout.AddResults(.. g1..), layout.AddResults(.. g2..), layout.AddResults(.. g3..) in random order. * This is TBM's fault for not specifying this clearly, I will add this documentation to TBM. https://codereview.chromium.org/979243003/diff/160001/tools/telemetry/telemet... 
tools/telemetry/telemetry/web_perf/metrics/layout.py:37: animations.append(event.end - event.start) On 2015/03/30 20:15:02, benjhayden_chromium wrote: > On 2015/03/27 23:22:37, nednguyen wrote: > > On 2015/03/27 21:59:10, benjhayden_chromium wrote: > > > On 2015/03/27 21:08:10, nednguyen wrote: > > > > why does this animations list contain repeating data? > > > > > > I was under the impression that interaction records cannot overlap. > > > > > > Can interaction records overlap? > > > > > > If interaction records cannot overlap, then no layout event can start during > > > multiple interaction records, so no layout event will be appended to the > > > animations list more than once. > > > > > > Either way, it wouldn't hurt to add a break. > > > > You can use a check to make sure that interaction don't overlap: > > > https://code.google.com/p/chromium/codesearch#chromium/src/tools/telemetry/te... > > > > Also what happen for these cases? > > > > 1) > > [ interaction ] > > > > [ Layout event ] > > The layout starts during the interaction, so it is assumed that the interaction > caused the layout. It doesn't matter that the interaction didn't wait for the > layout to finish. > > It is technically possible that the layout was not caused by the interaction, > and that the layout would have happened if the interaction had not happened. > However, in that case, then either the RunPageInteractions method ought to be > modified to wait until loading is finished before issuing the interaction, or > else the page should be modified to only run animations during the interaction. > > > > > 2) > > [ interaction ] > > > > [ Layout event ] > > The layout starts before the interaction, so the interaction can not have caused > the layout. It doesn't matter that the interaction didn't wait to start until > after the layout ended. 
> > It might be technically feasible to make interactions roughly align with frames > using requestAnimationFrame, but it seems like it would be difficult and > unnecessary. > > > > > I think you'd be better using > > interaction_record.GetOverlappedThreadTimeForSlice method. > > That method looks like it would be perfect for computing how much of an > interaction is spent in a particular kind of trace event, or how much of a trace > event overlaps with an interaction. > However, as far as I know, we are not interested in how much of the interaction > is spent doing layout. > We are interested in how long each layout takes and whether it was caused by an > interaction. Can you document this semantic in the description fields of 'layout' & 'load_layout' below? > > Layout is not continuous, it is atomic. Everything is laid out and then it's > done until the layout tree is damaged again. I don't see how it makes sense to > talk about fractions of a layout. https://codereview.chromium.org/979243003/diff/180001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/180001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:42: results.current_page, 'load_layout', 'ms', load_layouts, Can you add the interaction record's label to the value? Ethan recently added a patch that teaches the value system the notion of interaction records. https://codereview.chromium.org/979243003/diff/180001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:43: description='List of durations of layouts before first interaction')) As discussed above, this description is probably no longer true. https://codereview.chromium.org/979243003/diff/180001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:45: results.AddValue(list_of_scalar_values.ListOfScalarValues( Ditto. 
https://codereview.chromium.org/979243003/diff/180001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:47: description='List of durations of layouts during interactions')) Explain the term "during" in more detail.
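The attribution rule agreed on in this thread — a layout is counted for an interaction if and only if the layout's start time falls within the interaction's wall-time range, regardless of where it ends — can be sketched as follows. `Event` and `Interaction` here are hypothetical stand-ins; the real CL works with telemetry trace slices and `TimelineInteractionRecord` objects.

```python
from collections import namedtuple

# Hypothetical stand-ins for telemetry trace slices and interaction records.
Event = namedtuple('Event', ['name', 'start', 'end'])
Interaction = namedtuple('Interaction', ['label', 'start', 'end'])


def LayoutDurations(layout_events, interactions):
  """Returns durations of layouts whose start falls inside an interaction.

  A layout that starts during an interaction is counted even if it ends
  after the interaction ends; a layout that starts before the interaction
  begins is never counted, since the interaction cannot have caused it.
  """
  durations = []
  for event in layout_events:
    for interaction in interactions:
      if interaction.start <= event.start <= interaction.end:
        durations.append(event.end - event.start)
        break  # Interaction records do not overlap, so stop early.
  return durations
```

With this rule, case (1) above (layout starts inside, ends outside) is counted, and case (2) (layout starts before the interaction) is dropped.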
PTAL
PTAL 2
What is our short term plan for creating 'load' interaction record? https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:39: if interactions[0].label == 'load': This sounds about right. However, to check whether an interaction is load, I think we should rely on something more strictly like type rather than using its label. Note that anyone can define interaction with any labels in the pages, so we can get name collision problem with "load". https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:42: label += '_load' label is supposed to be just interaction record label.
nednguyen@google.com changed reviewers: + eakuefner@chromium.org
https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:39: if interactions[0].label == 'load': On 2015/03/31 22:49:03, nednguyen wrote: > This sounds about right. However, to check whether an interaction is load, I > think we should rely on something more strictly like type rather than using its > label. Note that anyone can define interaction with any labels in the pages, so > we can get name collision problem with "load". Let's punt the load event to another CL. https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:42: label += '_load' On 2015/03/31 22:49:03, nednguyen wrote: > label is supposed to be just interaction record label. It sounds like you're saying that the result label should be exactly equal to the interaction label, but if the result label is exactly equal to the interaction label, then all of the results would have the same label, and that label wouldn't say anything about what the results are ("layout"). Right? So I assume you mean that result labels should include the interaction label (e.g. "ScrollAction") as well as something to indicate the meaning of the result, e.g. "layout". I assume that is recommended because there may be multiple kinds of interactions in a single page, and TBM would call LayoutMetric once for each kind of interaction, and the different results should not all have exactly the same label: neither "layout" nor "ScrollAction", but rather "layout_ScrollAction". Is that correct? Here is the problem with including the interaction record labels in the result labels. 
KeySilkCases uses 8 different interaction record names: ScrollAction SwipeAction Action_TapAction TapAction Wait Action_SwipeAction animation_interaction RunSmoothAllActions So some pages would report their animation layout durations as "layout_ScrollAction", some as "layout_SwipeAction", etc. which would be confusing since they're all conceptually the same thing, i.e. animation layout durations. chromeperf would not be able to present them together in a single graph. If I have to look at 8 different graphs when I should be able to look at one, then I probably won't ever look at any of them. We might as well not have any layout metrics. Should I relabel all of the interactions in KeySilkCases to have the same label? Or teach TBM to call LayoutMetric once with all of the interaction records together? Or is there another way to let chromeperf group all the animation layout durations from all the pages together?
https://codereview.chromium.org/979243003/diff/320001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/320001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:39: label += '_' + interactions[0].label You don't need this. The label is supposed to be interaction.label always. The 'layout' part should be the value's name
+Ethan: I think we should modify the name label to interaction_record_label to avoid the confusion when we don't have a clear plan for the ValueBuilder yet :-( https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:39: if interactions[0].label == 'load': On 2015/04/01 20:22:45, benjhayden_chromium wrote: > On 2015/03/31 22:49:03, nednguyen wrote: > > This sounds about right. However, to check whether an interaction is load, I > > think we should rely on something more strictly like type rather than using > its > > label. Note that anyone can define interaction with any labels in the pages, > so > > we can get name collision problem with "load". > > Let's punt the load event to another CL. I like this. Fixing the TODO is add the Load interaction record may be a sizable patch. https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:42: label += '_load' On 2015/04/01 20:22:45, benjhayden_chromium wrote: > On 2015/03/31 22:49:03, nednguyen wrote: > > label is supposed to be just interaction record label. > > It sounds like you're saying that the result label should be exactly equal to > the interaction label, but if the result label is exactly equal to the > interaction label, then all of the results would have the same label, and that > label wouldn't say anything about what the results are ("layout"). Right? > > So I assume you mean that result labels should include the interaction label > (e.g. "ScrollAction") as well as something to indicate the meaning of the > result, e.g. "layout". 
> > I assume that is recommended because there may be multiple kinds of interactions > in a single page, and TBM would call LayoutMetric once for each kind of > interaction, and the different results should not all have exactly the same > label: neither "layout" nor "ScrollAction", but rather "layout_ScrollAction". Is > that correct? > > Here is the problem with including the interaction record labels in the result > labels. > > KeySilkCases uses 8 different interaction record names: > ScrollAction > SwipeAction > Action_TapAction > TapAction > Wait > Action_SwipeAction > animation_interaction > RunSmoothAllActions > > So some pages would report their animation layout durations as > "layout_ScrollAction", some as "layout_SwipeAction", etc. which would be > confusing since they're all conceptually the same thing, i.e. animation layout > durations. chromeperf would not be able to present them together in a single > graph. If I have to look at 8 different graphs when I should be able to look at > one, then I probably won't ever look at any of them. We might as well not have > any layout metrics. > > Should I relabel all of the interactions in KeySilkCases to have the same label? > Or teach TBM to call LayoutMetric once with all of the interaction records > together? > Or is there another way to let chromeperf group all the animation layout > durations from all the pages together? label is interaction record label, name is for the metric name.
On 2015/04/01 at 20:25:50, nednguyen wrote: > +Ethan: > > I think we should modify the name label to interaction_record_label to avoid the confusion when we don't have a clear plan for the ValueBuilder yet :-( Agree -- in in-person discussion we agreed on tir_label as the name. I think another potential source of confusion here might be that the field in Value is called interaction_record at the moment, where in timeline_based_metric it's called label. tir_label is a good middle ground that we can standardize in both cases. > > https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... > File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): > > https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... > tools/telemetry/telemetry/web_perf/metrics/layout.py:39: if interactions[0].label == 'load': > On 2015/04/01 20:22:45, benjhayden_chromium wrote: > > On 2015/03/31 22:49:03, nednguyen wrote: > > > This sounds about right. However, to check whether an interaction is load, I > > > think we should rely on something more strictly like type rather than using > > its > > > label. Note that anyone can define interaction with any labels in the pages, > > so > > > we can get name collision problem with "load". > > > > Let's punt the load event to another CL. > > I like this. Fixing the TODO is add the Load interaction record may be a sizable patch. > > https://codereview.chromium.org/979243003/diff/280001/tools/telemetry/telemet... > tools/telemetry/telemetry/web_perf/metrics/layout.py:42: label += '_load' > On 2015/04/01 20:22:45, benjhayden_chromium wrote: > > On 2015/03/31 22:49:03, nednguyen wrote: > > > label is supposed to be just interaction record label. 
> > > > It sounds like you're saying that the result label should be exactly equal to > > the interaction label, but if the result label is exactly equal to the > > interaction label, then all of the results would have the same label, and that > > label wouldn't say anything about what the results are ("layout"). Right? > > > > So I assume you mean that result labels should include the interaction label > > (e.g. "ScrollAction") as well as something to indicate the meaning of the > > result, e.g. "layout". > > > > I assume that is recommended because there may be multiple kinds of interactions > > in a single page, and TBM would call LayoutMetric once for each kind of > > interaction, and the different results should not all have exactly the same > > label: neither "layout" nor "ScrollAction", but rather "layout_ScrollAction". Is > > that correct? > > > > Here is the problem with including the interaction record labels in the result > > labels. > > > > KeySilkCases uses 8 different interaction record names: > > ScrollAction > > SwipeAction > > Action_TapAction > > TapAction > > Wait > > Action_SwipeAction > > animation_interaction > > RunSmoothAllActions > > > > So some pages would report their animation layout durations as > > "layout_ScrollAction", some as "layout_SwipeAction", etc. which would be > > confusing since they're all conceptually the same thing, i.e. animation layout > > durations. chromeperf would not be able to present them together in a single > > graph. If I have to look at 8 different graphs when I should be able to look at > > one, then I probably won't ever look at any of them. We might as well not have > > any layout metrics. > > > > Should I relabel all of the interactions in KeySilkCases to have the same label? > > Or teach TBM to call LayoutMetric once with all of the interaction records > > together? > > Or is there another way to let chromeperf group all the animation layout > > durations from all the pages together? 
> > label is interaction record label, name is for the metric name.
https://codereview.chromium.org/979243003/diff/320001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/320001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:39: label += '_' + interactions[0].label On 2015/04/01 at 20:23:18, nednguyen wrote: > You don't need this. The label is supposed to be interaction.label always. > The 'layout' part should be the value's name I am using label as the result value name. Do you want the result value name to be 'layout' or interaction.label?
https://codereview.chromium.org/979243003/diff/320001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/320001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:39: label += '_' + interactions[0].label On 2015/04/01 20:43:47, benjhayden_chromium wrote: > On 2015/04/01 at 20:23:18, nednguyen wrote: > > You don't need this. The label is supposed to be interaction.label always. > > The 'layout' part should be the value's name > > I am using label as the result value name. > > Do you want the result value name to be 'layout' or interaction.label? > It should just be layout. At present, TBM automatically mutates the names of added values to be prefixed with the label of the current interaction record.
https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:36: description = 'List of durations of layouts' "List of durations of layouts. These layout events are created during the interaction records' time ranges, i.e. their start times fall between the start and end wall-times of the interaction records". https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:40: results.current_page, 'layout', 'ms', layouts, description=description)) For now, let's make it: list_of_scalar_values.ListOfScalarValues( page=results.current_page, label=interactions[0].label, name='layout', unit='ms', description=description)
https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:36: description = 'List of durations of layouts' On 2015/04/01 20:51:47, nednguyen wrote: > "List of durations of layouts. These layouts events are created during the > interaction records' time range, i.e: the start time of them are in between the > start & end wall-time of interaction records". I think the new description summarizes the important parts without essentially duplicating the code in English. https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:40: results.current_page, 'layout', 'ms', layouts, description=description)) On 2015/04/01 20:51:47, nednguyen wrote: > For now, let's make it: > list_of_scalar_values.ListOfScalarValues( > page=results.current_page, > label=interactions[0].label, > name='layout', > unit='ms', description=description) I don't see 'label' among the arguments to ListOfScalarValues. I do see 'interaction_record'. 
When I set interaction_record=interactions[0].label, then I see this exception: Traceback (most recent call last): Run at tools/telemetry/telemetry/benchmark.py:202 max_failures=self._max_failures) Run at tools/telemetry/telemetry/user_story/user_story_runner.py:221 expectations, user_story, results, state) _RunUserStoryAndProcessErrorIfNeeded at tools/telemetry/telemetry/user_story/user_story_runner.py:88 state.RunUserStory(results) RunUserStory at tools/telemetry/telemetry/page/shared_page_state.py:254 self._test.RunPage(self._current_page, self._current_tab, results) RunPage at tools/telemetry/telemetry/page/page_test.py:219 self.ValidateAndMeasurePage(page, tab, results) ValidateAndMeasurePage at tools/perf/measurements/thread_times.py:40 self._timeline_controller.smooth_records, results) AddResults at tools/telemetry/telemetry/web_perf/metrics/layout.py:24 interactions, results) _AddResultsInternal at tools/telemetry/telemetry/web_perf/metrics/layout.py:47 interaction_record=interaction_record)) AddValue at tools/telemetry/telemetry/results/page_test_results.py:172 self._ValidateValue(value) _ValidateValue at tools/telemetry/telemetry/results/page_test_results.py:197 assert value.IsMergableWith(representative_value) AssertionError I modified page_test_results.py:197 to print value and representative_value: ListOfScalarValues(http://jsfiddle.net/3yDKh/15/show/, layout, ms, [...], important=True, description=List of durations of layouts that were caused by and start during interactions, interaction_record=RunSmoothAllActions, same_page_merge_policy=CONCATENATE) ListOfScalarValues(http://groupcloned.com/test/plain/list-recycle-transform.html, layout, ms, [...], important=True, description=List of durations of layouts that were caused by and start during interactions, interaction_record=Gesture_ScrollAction, same_page_merge_policy=CONCATENATE) In case that's difficult to read, the reason that the two values cannot be merged is that their interaction records are 
different.
https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/340001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:40: results.current_page, 'layout', 'ms', layouts, description=description)) On 2015/04/01 23:32:58, benjhayden_chromium wrote: > On 2015/04/01 20:51:47, nednguyen wrote: > > For now, let's make it: > > list_of_scalar_values.ListOfScalarValues( > > page=results.current_page, > > label=interactions[0].label, > > name='layout', > > unit='ms', description=description) > > I don't see 'label' among the arguments to ListOfScalarValues. I do see > 'interaction_record'. > > When I set interaction_record=interactions[0].label, then I see this exception: > > Traceback (most recent call last): > Run at tools/telemetry/telemetry/benchmark.py:202 > max_failures=self._max_failures) > Run at tools/telemetry/telemetry/user_story/user_story_runner.py:221 > expectations, user_story, results, state) > _RunUserStoryAndProcessErrorIfNeeded at > tools/telemetry/telemetry/user_story/user_story_runner.py:88 > state.RunUserStory(results) > RunUserStory at tools/telemetry/telemetry/page/shared_page_state.py:254 > self._test.RunPage(self._current_page, self._current_tab, results) > RunPage at tools/telemetry/telemetry/page/page_test.py:219 > self.ValidateAndMeasurePage(page, tab, results) > ValidateAndMeasurePage at tools/perf/measurements/thread_times.py:40 > self._timeline_controller.smooth_records, results) > AddResults at tools/telemetry/telemetry/web_perf/metrics/layout.py:24 > interactions, results) > _AddResultsInternal at tools/telemetry/telemetry/web_perf/metrics/layout.py:47 > interaction_record=interaction_record)) > AddValue at tools/telemetry/telemetry/results/page_test_results.py:172 > self._ValidateValue(value) > _ValidateValue at tools/telemetry/telemetry/results/page_test_results.py:197 > assert 
value.IsMergableWith(representative_value) > AssertionError > > > I modified page_test_results.py:197 to print value and representative_value: > ListOfScalarValues(http://jsfiddle.net/3yDKh/15/show/, layout, ms, [...], > important=True, description=List of durations of layouts that were caused by and > start during interactions, interaction_record=RunSmoothAllActions, > same_page_merge_policy=CONCATENATE) > ListOfScalarValues(http://groupcloned.com/test/plain/list-recycle-transform.html, > layout, ms, [...], important=True, description=List of durations of layouts that > were caused by and start during interactions, > interaction_record=Gesture_ScrollAction, same_page_merge_policy=CONCATENATE) > > In case that's difficult to read, the reason that the two values cannot be > merged is that their interaction records are different. Yeah, we still have some work to do with the merge policy. Don't worry about setting the interaction_record field in this CL. It will still be added to the name correctly by a hack we currently have in place.
I think this is landable if you add a unittest. https://codereview.chromium.org/979243003/diff/360001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/360001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:37: if interactions: Nit: you don't need this. Let's make the description the same for all layout metric values. https://codereview.chromium.org/979243003/diff/360001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py (right): https://codereview.chromium.org/979243003/diff/360001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:28: pass Please fill in the test.
PTAL
lgtm with nit, please wait for Ned. https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py (right): https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:28: self.assertEqual(dict(), RunLayoutMetric([ Nit: style guide prefers assertFalse instead of comparing to empty dict.
https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout.py (right): https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout.py:41: description='List of durations of layouts that were caused by and ' + Nit: Our convention for dealing with long lines is using ('foo' ' bar') https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py (right): https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:12: from telemetry.web_perf.timeline_interaction_record import \ import ( TimelineInteractionRecord) https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:18: def RunLayoutMetric(events, interactions): s/RunLayoutMetric/GetLayoutMetrics https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:29: FakeEvent('FrameView::performLayout', 0, 1), Since we already use FakeEvent('FrameView::performLayout', 0, 1) everywhere, how about making this FakeLayoutEvent(0, 1)? https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:34: ], [ This code is a bit hard to read. Can you format it in the following way: fake_events_1 = [FakeLayoutEvent(0, 1), ...] interactions_1 = [TimelineInteractionRecord('test', 10, 20), ..] self.assertEquals({}, GetLayoutMetric(fake_events_1, interactions_1)) https://codereview.chromium.org/979243003/diff/380001/tools/telemetry/telemet... 
tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:49: self.assertEqual(dict(layout=[2, 4]), RunLayoutMetric([ Can you add documentation explaining how this returns [2, 4]? something like: # FakeLayout(10, 12) is created during Interaction(10, 20), hence the layout duration is 12 - 10 = 2. # FakeLayout(30, 34) is created during Interaction(30, 40), hence the layout duration is 34 - 30 = 4.
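A self-contained sketch of the test shape suggested in these comments. `GetLayoutMetrics` and `FakeLayoutEvent` here are hypothetical helpers mirroring the ones discussed in this review, not the actual telemetry API; the inline comments document how the expected `[2, 4]` result arises, as requested above.

```python
import unittest
from collections import namedtuple

# Hypothetical stand-ins for the test helpers discussed in this review.
FakeLayoutEvent = namedtuple('FakeLayoutEvent', ['start', 'end'])
Interaction = namedtuple('Interaction', ['label', 'start', 'end'])


def GetLayoutMetrics(events, interactions):
  # A layout counts iff its start falls inside some interaction's range.
  durations = [e.end - e.start for e in events
               if any(i.start <= e.start <= i.end for i in interactions)]
  return {'layout': durations} if durations else {}


class LayoutMetricUnitTest(unittest.TestCase):
  def testNoInteractions(self):
    self.assertFalse(GetLayoutMetrics([FakeLayoutEvent(0, 1)], []))

  def testLayoutsDuringInteractions(self):
    # FakeLayoutEvent(10, 12) starts during Interaction(10, 20),
    # hence the layout duration is 12 - 10 = 2.
    # FakeLayoutEvent(30, 34) starts during Interaction(30, 40),
    # hence the layout duration is 34 - 30 = 4.
    # FakeLayoutEvent(0, 1) starts outside every interaction: dropped.
    events = [FakeLayoutEvent(0, 1),
              FakeLayoutEvent(10, 12),
              FakeLayoutEvent(30, 34)]
    interactions = [Interaction('test', 10, 20),
                    Interaction('test', 30, 40)]
    self.assertEqual(dict(layout=[2, 4]),
                     GetLayoutMetrics(events, interactions))
```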
PTAL
https://codereview.chromium.org/979243003/diff/420001/tools/telemetry/telemet... File tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py (right): https://codereview.chromium.org/979243003/diff/420001/tools/telemetry/telemet... tools/telemetry/telemetry/web_perf/metrics/layout_unittest.py:47: self.assertEqual({}, GetLayoutMetrics(events, [])) Again, please use assertFalse here as per https://google-styleguide.googlecode.com/svn/trunk/pyguide.html?showone=True/...
lgtm
The CQ bit was checked by benjhayden@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from eakuefner@chromium.org, nednguyen@google.com Link to the patchset: https://codereview.chromium.org/979243003/#ps440001 (title: ".")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/979243003/440001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by benjhayden@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from eakuefner@chromium.org, nednguyen@google.com Link to the patchset: https://codereview.chromium.org/979243003/#ps460001 (title: ".")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/979243003/460001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
On 2015/04/06 20:13:00, I haz the power (commit-bot) wrote: > Try jobs failed on following builders: > linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, > http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) I think you have a legit failure. Also for the next patches, can you update the patch title with the description so it's easier to know what the patch addresses?
On 2015/04/06 at 20:43:51, nednguyen wrote: > On 2015/04/06 20:13:00, I haz the power (commit-bot) wrote: > > Try jobs failed on following builders: > > linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, > > http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) > > I think you have a legit failure. Also for the next patches, can you update the patch title with the description so it's easier to know what the patch addresses? By the way, since it seems like you might end up touching the code for the failing test, it looks like there's a style issue: in thread_times_unittest.py, instead of 'from telemetry.web_perf.metrics.layout import LayoutMetric', can you write 'from telemetry.web_perf.metrics import layout' and then refer to LayoutMetric as layout.LayoutMetric? Imports should always be module-level in Python source code.
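The import convention mentioned here — import the module, not names inside it — can be sketched with a stdlib module so the example is self-contained (the telemetry modules themselves are not importable outside a Chromium checkout):

```python
# Discouraged: pulls a single name out of the module.
#   from collections import Counter

# Preferred: import the module, refer to names through it.
import collections

counts = collections.Counter('abracadabra')
print(counts.most_common(1))  # [('a', 5)]
```

Referring to `layout.LayoutMetric` instead of a bare `LayoutMetric` keeps the name's origin visible at every use site, which is why the style guide prefers module-level imports.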
On 2015/04/06 at 21:08:33, eakuefner wrote: > On 2015/04/06 at 20:43:51, nednguyen wrote: > > On 2015/04/06 20:13:00, I haz the power (commit-bot) wrote: > > > Try jobs failed on following builders: > > > linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, > > > http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...) > > > > I think you have a legit failure. Also for the next patches, can you update the patch title with the description so it's easier to know what the patch addresses? > > By the way, since it seems like you might end up touching the code for the failing test, it looks like there's a style issue: in thread_times_unittest.py, instead of 'from telemetry.web_perf.metrics.layout import LayoutMetric', can you write 'from telemetry.web_perf.metrics import layout' and then refer to LayoutMetric as layout.LayoutMetric? Imports should always be module-level in Python source code. The only layouts in those pages were load layouts, so now there are no layout results to be tested.
The CQ bit was checked by benjhayden@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from eakuefner@chromium.org, nednguyen@google.com Link to the patchset: https://codereview.chromium.org/979243003/#ps480001 (title: ".")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/979243003/480001
Message was sent while issue was closed.
Committed patchset #20 (id:480001)
Message was sent while issue was closed.
Patchset 20 (id:??) landed as https://crrev.com/a108aa74312c934fadf3f88cfe9674ba7231271d Cr-Commit-Position: refs/heads/master@{#323984} |