Created: 4 years, 8 months ago by tdresser
Modified: 4 years, 5 months ago
CC: chromium-reviews, scheduler-bugs_chromium.org, asvitkine+watch_chromium.org, dtapuska, Kunihiko Sakamoto
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public

Description

Report expected task queueing time via UMA
Reported to:
RendererScheduler.ExpectedTaskQueueingDuration
This is recorded for each 5 second window. This is the expected
queueing duration for a high priority task.
BUG=607151
TEST=QueueingTimeEstimatorTest
Committed: https://crrev.com/06f261ea4b8db484ffd50d96676dd34e203fa626
Cr-Commit-Position: refs/heads/master@{#404185}
Patch Set 1 #
Total comments: 34
Patch Set 2 : Address skyostil@'s comments. #
Total comments: 5
Patch Set 3 : Style nit. #
Patch Set 4 : Super rough alternative approach. #
Total comments: 10
Patch Set 5 : Rebase PS 3 #
Patch Set 6 : panicker@'s patch. #
Patch Set 7 : Combine approaches, add tests. #
Patch Set 8 : Fix perf test #
Total comments: 29
Patch Set 9 : Address Alex's feedback. #
Total comments: 2
Patch Set 10 : Partway through review feedback. #
Patch Set 11 : RenderSchedulerImpl owns all the things. #
Total comments: 16
Patch Set 12 : Address feedback. #
Total comments: 3
Patch Set 13 : Address nits. #
Total comments: 4
Patch Set 14 : Address comments. #
Total comments: 2
Patch Set 15 : State window size in histograms.xml. #
Patch Set 16 : Remove export. Fix windows. #

Messages
Total messages: 99 (30 generated)
tdresser@chromium.org changed reviewers: + skyostil@chromium.org
https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator_unittest.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator_unittest.cc:115: /*TEST_F(QueueingTimeEstimatorTest, Clear) {
I don't think we should clear this estimator on navigation.
https://code.google.com/p/chromium/codesearch#chromium/src/components/schedul...

Does that seem correct to you?
Looks great. Added a few comments.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:32: base::TimeDelta ExpectedQueueingTimeFromTask(base::TimeTicks task_start,
Could you move this into an unnamed namespace?

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:37: DCHECK(window_start < window_end);
DCHECK(task_start <= window_end);
DCHECK(task_end >= window_start);

?

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:41: std::min(task_end, window_end);
DCHECK(start <= end)?

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:43: double probability_of_this_task =
You could use, e.g., InMillisecondsF() to avoid the static_cast.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:48: base::TimeDelta expected_queueing_duration_within_task =
I'm not sure I understand this equation. Shouldn't this just be the total length of the task (minus any part that is before the start of the window) divided by two?

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:53: return base::TimeDelta::FromMicroseconds(
FromMillisecondsD to avoid some truncation?

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:55: expected_queueing_duration_within_task.InMicroseconds());
InMillisecondsF if you decide to make the above change.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:66: time_source_->NowTicks() > window_start_time_ + window_duration_;
We like to minimize the number of calls to Now() since it can be pretty expensive. I suggest caching the snapshotted time at the start of this function since this should be more or less instantaneous.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:76: window_start_time_ += window_duration_;
We only really care about the window we are currently in, right? Could we make this just compute that instead of working through the intermediate windows? Usually this shouldn't be a problem but there are cases where we could be in a very long running task which spans lots of windows (e.g., stopped on a JS breakpoint, which IIRC spins a nested run loop).

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator.h (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.h:25: class QueueingTimeEstimatorClient {
naming nit: This could just be called |Client| since it's an inner class.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.h:26: public:
Please add a virtual destructor.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator_unittest.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator_unittest.cc:1: // Copyright 2015 The Chromium Authors. All rights reserved.
++year

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator_unittest.cc:115: /*TEST_F(QueueingTimeEstimatorTest, Clear) {
On 2016/04/19 17:23:46, tdresser wrote:
> I don't think we should clear this estimator on navigation.
> https://code.google.com/p/chromium/codesearch#chromium/src/components/schedul...
>
> Does that seem correct to you?

Not clearing it should give more accurate results in cases where we have multiple render views in the same renderer and only one of them navigates.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/renderer_scheduler_impl.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/renderer_scheduler_impl.cc:78: AddTaskObserver(&MainThreadOnly().queueing_time_estimator);
Please also remove this in the destructor.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/renderer_scheduler_impl.cc:1261: UMA_HISTOGRAM_TIMES("RendererScheduler.ExpectedTaskQueueingDuration",
Would you mind adding a trace counter[1] for this estimate to see how it behaves over time?

[1] See TRACE_COUNTER elsewhere in this file.
https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:32: base::TimeDelta ExpectedQueueingTimeFromTask(base::TimeTicks task_start,
On 2016/04/20 09:51:27, Sami wrote:
> Could you move this into an unnamed namespace?

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:37: DCHECK(window_start < window_end);
On 2016/04/20 09:51:27, Sami wrote:
> DCHECK(task_start <= window_end);
> DCHECK(task_end >= window_start);
>
> ?

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:41: std::min(task_end, window_end);
On 2016/04/20 09:51:27, Sami wrote:
> DCHECK(start <= end)?

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:43: double probability_of_this_task =
On 2016/04/20 09:51:27, Sami wrote:
> You could use, e.g., InMillisecondsF() to avoid the static_cast.

Won't that cause loss of precision? Dividing both sides by kMicrosecondsPerMillisecond seems unnecessary.

Let me know if you think the loss of precision isn't that important.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:48: base::TimeDelta expected_queueing_duration_within_task =
On 2016/04/20 09:51:26, Sami wrote:
> I'm not sure I understand this equation. Shouldn't this just be the total
> length of the task (minus any part that is before the start of the window)
> divided by two?

Here's an example where that doesn't work.

With a 5 second window, consider a 20 second task starting at the beginning of the window. Total length of task = 20 seconds. Divided by 2 = 10 seconds. But the expected queueing time is more than 10 seconds.

The queueing time at the start of the window is 20 seconds, and at the end of the window is 15 seconds. Thus the expected queueing time is (20 + 15)/2 = 17.5 seconds.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:53: return base::TimeDelta::FromMicroseconds(
On 2016/04/20 09:51:27, Sami wrote:
> FromMillisecondsD to avoid some truncation?

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:55: expected_queueing_duration_within_task.InMicroseconds());
On 2016/04/20 09:51:27, Sami wrote:
> InMillisecondsF if you decide to make the above change.

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:66: time_source_->NowTicks() > window_start_time_ + window_duration_;
On 2016/04/20 09:51:27, Sami wrote:
> We like to minimize the number of calls to Now() since it can be pretty
> expensive. I suggest caching the snapshotted time at the start of this
> function since this should be more or less instantaneous.

Fixed. This is better from a readability perspective too.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:76: window_start_time_ += window_duration_;
On 2016/04/20 09:51:27, Sami wrote:
> We only really care about the window we are currently in, right? Could we
> make this just compute that instead of working through the intermediate
> windows? Usually this shouldn't be a problem but there are cases where we
> could be in a very long running task which spans lots of windows (e.g.,
> stopped on a JS breakpoint, which IIRC spins a nested run loop).

The intermediate windows definitely matter. We need to sample regularly if we want to actually know what the expected time is across multiple windows.

Given that the time window is 5 seconds, I doubt this should be an issue.

Based on your question about the formula, I suspect there's some confusion over exactly what we're measuring. Does this make sense now?

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator.h (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.h:25: class QueueingTimeEstimatorClient {
On 2016/04/20 09:51:27, Sami wrote:
> naming nit: This could just be called |Client| since it's an inner class.

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.h:26: public:
On 2016/04/20 09:51:27, Sami wrote:
> Please add a virtual destructor.

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator_unittest.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator_unittest.cc:1: // Copyright 2015 The Chromium Authors. All rights reserved.
On 2016/04/20 09:51:27, Sami wrote:
> ++year

Done.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator_unittest.cc:115: /*TEST_F(QueueingTimeEstimatorTest, Clear) {
On 2016/04/20 09:51:27, Sami wrote:
> On 2016/04/19 17:23:46, tdresser wrote:
> > I don't think we should clear this estimator on navigation.
> > https://code.google.com/p/chromium/codesearch#chromium/src/components/schedul...
> >
> > Does that seem correct to you?
>
> Not clearing it should give more accurate results in cases where we have
> multiple render views in the same renderer and only one of them navigates.

Acknowledged.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/renderer_scheduler_impl.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/renderer_scheduler_impl.cc:78: AddTaskObserver(&MainThreadOnly().queueing_time_estimator);
On 2016/04/20 09:51:27, Sami wrote:
> Please also remove this in the destructor.

Whoops, thanks!

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/renderer_scheduler_impl.cc:1261: UMA_HISTOGRAM_TIMES("RendererScheduler.ExpectedTaskQueueingDuration",
On 2016/04/20 09:51:27, Sami wrote:
> Would you mind adding a trace counter[1] for this estimate to see how it
> behaves over time?
>
> [1] See TRACE_COUNTER elsewhere in this file.

Done.

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
File components/scheduler/renderer/renderer_scheduler_impl.cc (right):

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
components/scheduler/renderer/renderer_scheduler_impl.cc:1264: TRACE_COUNTER1(TRACE_DISABLED_BY_DEFAULT("renderer.scheduler"),
Is this correct? Where in the trace should I see this?
Thanks, I think this looks excellent. One last thought I have is whether we should make this sample every Nth task instead of every task. The other estimators either sample a specific queue or a queue which already involves time lookups (timers), while this patch is doing a time lookup for every task. Previously base::PendingTask was doing the same for every posted task, and then someone realized this was pretty expensive on Android (IIRC), so that tracking is only done on other platforms now.

Do you think we'd still get accurate enough data if the estimator ran only for every 10th task or so?

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
File components/scheduler/renderer/queueing_time_estimator.cc (right):

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:43: double probability_of_this_task =
On 2016/04/20 15:13:32, tdresser wrote:
> On 2016/04/20 09:51:27, Sami wrote:
> > You could use, e.g., InMillisecondsF() to avoid the static_cast.
>
> Won't that cause loss of precision? Dividing both sides by
> kMicrosecondsPerMillisecond seems unnecessary.
>
> Let me know if you think the loss of precision isn't that important.

Good point, let's keep the code you have.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:48: base::TimeDelta expected_queueing_duration_within_task =
On 2016/04/20 15:13:32, tdresser wrote:
> On 2016/04/20 09:51:26, Sami wrote:
> > I'm not sure I understand this equation. Shouldn't this just be the total
> > length of the task (minus any part that is before the start of the window)
> > divided by two?
>
> Here's an example where that doesn't work.
> With a 5 second window, consider a 20 second task starting at the beginning
> of the window.
> Total length of task = 20 seconds.
> Divided by 2 = 10 seconds.
>
> But the expected queueing time is more than 10 seconds.
>
> The queueing time at the start of the window is 20 seconds, and at the end
> of the window is 15 seconds. Thus the expected queueing time is
> (20 + 15)/2 = 17.5 seconds.

Thanks for the explanation, that made it clear.

https://codereview.chromium.org/1898233002/diff/1/components/scheduler/render...
components/scheduler/renderer/queueing_time_estimator.cc:76: window_start_time_ += window_duration_;
On 2016/04/20 15:13:32, tdresser wrote:
> On 2016/04/20 09:51:27, Sami wrote:
> > We only really care about the window we are currently in, right? Could we
> > make this just compute that instead of working through the intermediate
> > windows? Usually this shouldn't be a problem but there are cases where we
> > could be in a very long running task which spans lots of windows (e.g.,
> > stopped on a JS breakpoint, which IIRC spins a nested run loop).
>
> The intermediate windows definitely matter.
>
> We need to sample regularly if we want to actually know what the expected
> time is across multiple windows.
>
> Given that the time window is 5 seconds, I doubt this should be an issue.
>
> Based on your question about the formula, I suspect there's some confusion
> over exactly what we're measuring. Does this make sense now?

Ah, I didn't realize the client was expected to sample multiple measurements. Reporting intermediate windows makes sense then.

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
File components/scheduler/renderer/queueing_time_estimator.h (right):

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
components/scheduler/renderer/queueing_time_estimator.h:29: virtual ~Client(){};
nit: Remove the trailing ';' so git cl format will format this correctly.

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
File components/scheduler/renderer/renderer_scheduler_impl.cc (right):

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
components/scheduler/renderer/renderer_scheduler_impl.cc:1264: TRACE_COUNTER1(TRACE_DISABLED_BY_DEFAULT("renderer.scheduler"),
On 2016/04/20 15:13:33, tdresser wrote:
> Is this correct? Where in the trace should I see this?

Looks good to me. You should see it as one of the counter tracks above the renderer main thread thread slices. I think the counters show up in random order every time you load a trace so you might have to scan everything to find it. Make sure you also enable the disabled-by-default renderer.scheduler category.

One minor drawback is that the counter values won't show up where the window is on the time axis if the tasks are much longer than a window (should be rare).
I suspect that sampling every N tasks would make this number impossible to interpret.

Suppose we sample 1 in every 10 tasks, and then have a 5 second window with a single 1 second task in it. How do we interpret that?

I can't think of any reasonable way of extrapolating from a regular sampling of the tasks. We might be able to extrapolate by looking at a subset of the windows. Here's a proposal.

1. Start recording for a 5 second window.
2. Wait for the window to be flushed, and report the window queueing time.
3. Wait N tasks, without looking at their timestamps.
4. Check a task's timestamps; if it's been > M seconds since the last flush, go to 1, otherwise, go to 3.

I guess the problem with that is it'll make the device horrible for 5 seconds, and then acceptable for a while...

Can you think of any way we could extrapolate from a regular sampling of the tasks? Could we get a coarse grained time for the tasks?

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
File components/scheduler/renderer/queueing_time_estimator.h (right):

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
components/scheduler/renderer/queueing_time_estimator.h:29: virtual ~Client(){};
On 2016/04/21 10:04:01, Sami wrote:
> nit: Remove the trailing ';' so git cl format will format this correctly.

Done.

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
File components/scheduler/renderer/renderer_scheduler_impl.cc (right):

https://codereview.chromium.org/1898233002/diff/20001/components/scheduler/re...
components/scheduler/renderer/renderer_scheduler_impl.cc:1264: TRACE_COUNTER1(TRACE_DISABLED_BY_DEFAULT("renderer.scheduler"),
On 2016/04/21 10:04:01, Sami wrote:
> On 2016/04/20 15:13:33, tdresser wrote:
> > Is this correct? Where in the trace should I see this?
>
> Looks good to me. You should see it as one of the counter tracks above the
> renderer main thread thread slices. I think the counters show up in random
> order every time you load a trace so you might have to scan everything to
> find it. Make sure you also enable the disabled-by-default
> renderer.scheduler category.
>
> One minor drawback is that the counter values won't show up where the window
> is on the time axis if the tasks are much longer than a window (should be
> rare).

Ah, thanks, I was thinking it was on the same track as the snapshots, which it wasn't. It's working fine.
On 2016/04/21 12:37:28, tdresser wrote:
> I suspect that sampling every N tasks would make this number impossible to
> interpret.
>
> Suppose we sample 1 in every 10 tasks, and then have a 5 second window with
> a single 1 second task in it.
>
> How do we interpret that?
>
> I can't think of any reasonable way of extrapolating from a regular sampling
> of the tasks.
>
> We might be able to extrapolate by looking at a subset of the windows.
>
> Here's a proposal.
>
> 1. Start recording for a 5 second window.
> 2. Wait for the window to be flushed, and report the window queueing time.
> 3. Wait N tasks, without looking at their timestamps.
> 4. Check a task's timestamps, if it's been > M seconds since the last flush,
> go to 1, otherwise, go to 3.
>
> I guess the problem with that is it'll make the device horrible for 5
> seconds, and then acceptable for a while...
>
> Can you think of any way we could extrapolate from a regular sampling of the
> tasks? Could we get a coarse grained time for the tasks?

Yeah, that seems like it might be too slow to react to changes. The only other way I can think of is sampling task durations randomly, building a distribution based on that and picking task durations randomly from the distribution instead of measuring their actual durations. I'm not sure if this would converge fast enough.

The other option is that we prove clock_gettime() is fast enough these days and don't worry about it. I think there are some user space fast paths for it in recent kernels. Maybe you could insert a few time stamp lookups into https://code.google.com/p/chromium/codesearch#chromium/src/components/schedul... and see how it behaves on Android?
On 2016/04/21 14:25:35, Sami wrote:
> Yeah, that seems like it might be too slow to react to changes. The only
> other way I can think of is sampling task durations randomly, building a
> distribution based on that and picking task durations randomly from the
> distribution instead of measuring their actual durations. I'm not sure if
> this would converge fast enough.
>
> The other option is that we prove clock_gettime() is fast enough these days
> and don't worry about it. I think there are some user space fast paths for
> it in recent kernels. Maybe you could insert a few time stamp lookups into
> https://code.google.com/p/chromium/codesearch#chromium/src/components/schedul...
> and see how it behaves on Android?

We'd need it to be fast enough on old devices too, wouldn't we?

Any approach that involves sampling a random set of tasks is going to bias towards short frequent tasks, and against the long, infrequent tasks we care most about.
On 2016/04/21 14:51:05, tdresser wrote:
> We'd need it to be fast enough on old devices too, wouldn't we?

Sure, although Chrome recently dropped support for ICS based devices so old doesn't mean super old anymore :)

> Any approach that involves sampling a random set of tasks is going to bias
> towards short frequent tasks, and against the long, infrequent tasks we care
> most about.

Strictly speaking there's no bias but just more noise -- you need a lot of samples to discover the true distribution. But yeah, the effect would be that we'd initially get an optimistic estimate.
On 2016/04/21 14:56:04, Sami wrote:
> On 2016/04/21 14:51:05, tdresser wrote:
> > We'd need it to be fast enough on old devices too, wouldn't we?
>
> Sure, although Chrome recently dropped support for ICS based devices so old
> doesn't mean super old anymore :)
>
> > Any approach that involves sampling a random set of tasks is going to bias
> > towards short frequent tasks, and against the long, infrequent tasks we
> > care most about.
>
> Strictly speaking there's no bias but just more noise -- you need a lot of
> samples to discover the true distribution. But yeah, the effect would be
> that we'd initially get an optimistic estimate.

Are you sure there's no bias? If we sampled the queueing time at regular time intervals then we wouldn't be biased, but if we sample at regular task intervals, I'm pretty sure we are.

For instance, we require the property that adding 0 length tasks should never alter the expected queueing time (on average, with a large enough number of samples). If we look at every N'th task, then we don't have this property.
On 2016/04/21 16:31:29, tdresser wrote:
> Are you sure there's no bias? If we sampled the queueing time at regular
> time intervals then we wouldn't be biased, but if we sample at regular task
> intervals, I'm pretty sure we are.
>
> For instance, we require the property that adding 0 length tasks should
> never alter the expected queueing time (on average, with a large enough
> number of samples). If we look at every N'th task, then we don't have this
> property.

I think there's a distinction here between "what was the expected queueing time in the last window" vs. "what is the estimated queueing time in the future". 0 length tasks won't affect the former but will affect the latter.

Anyway, this is getting a bit academic. One potential low resolution time source we have is compositor frames. We'd just need a fallback for when we're not rendering. Not sure it's worth the complexity though.
On 2016/04/21 16:54:37, Sami wrote:
> I think there's a distinction here between "what was the expected queueing
> time in the last window" vs. "what is the estimated queueing time in the
> future". 0 length tasks won't affect the former but will affect the latter.
>
> Anyway, this is getting a bit academic. One potential low resolution time
> source we have is compositor frames. We'd just need a fallback for when
> we're not rendering. Not sure it's worth the complexity though.

You're thinking we'd just measure how many frames each task lasted? We're going to care about the 9X'th percentile of this, so only having 16ms precision is probably fine.

We don't have any per vsync signal sent if we aren't rendering?

When you mentioned it might not be worth the complexity, I'm not sure what you were referring to.

I'm not convinced there's any reasonable option that doesn't involve a lower resolution timer (unless we can get away with looking at the duration of every task).
On 2016/04/21 17:22:20, tdresser wrote:
> You're thinking we'd just measure how many frames each task lasted?
> We're going to care about the 9X'th percentile of this, so only having 16ms
> precision is probably fine.
>
> We don't have any per vsync signal sent if we aren't rendering?
>
> When you mentioned it might not be worth the complexity, I'm not sure what
> you were referring to.
>
> I'm not convinced there's any reasonable option that doesn't involve a lower
> resolution timer (unless we can get away with looking at the duration of
> every task).

One complexity is that the frame number is owned by the compositor thread, so some synchronization is needed. Of course there's a frame number on the main thread too, but it won't increase except inside the BeginMainFrame task.
Here's a super rough alternative approach. The basic idea is:

- The MessageLoop already keeps an approximate idea of the current time.
- Observe the compositor thread MessageLoop's notion of the current time, and use it to form an underestimate of the elapsed time per task.
- Make the MessageLoop update its notion of the current time once per pump, regardless of whether or not there's anything in the timer task queue.

One of my other ideas involved repeatedly starting a 5 second timer - this seems strictly less expensive than that.

Does this seem at all reasonable? Should I polish it up?

https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa...
File base/message_loop/message_loop.cc (right):

https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa...
base/message_loop/message_loop.cc:624: recent_time_ = TimeTicks::Now();
This is probably the scariest part of this patch. We could make sure this only ran if there was a recent time observer. This is only going to run once per pump of the message loop, which seems pretty reasonable.

https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re...
File components/scheduler/renderer/coarse_duration_timer.h (right):

https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re...
components/scheduler/renderer/coarse_duration_timer.h:14: // TODO - add locking things.
^

https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re...
File components/scheduler/renderer/renderer_scheduler_impl.cc (right):

https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re...
components/scheduler/renderer/renderer_scheduler_impl.cc:463: }
^
tdresser@chromium.org changed reviewers: + danakj@chromium.org
Hi Dana, I'm definitely not convinced that mucking about with MessageLoop is the correct solution here, but it seems to have some nice properties. The problem I'm trying to solve is that I need to know the approximate current time, but calling TimeTicks::Now() is too expensive. The MessageLoop already has an approximation of the current time which is almost adequate for my needs. Whenever a MessageLoop is pumped and there are outstanding delayed tasks, it updates its notion of the current time. This patch makes MessageLoop update its notion of the current time at minimum once per pump. (It could do this only if there is something observing the MessagePump's notion of the current time). Does this seem reasonable to you? I suspect that this would be useful in lots of places where we want some approximate notion of the current time. If you think this might be reasonable, I'll polish up the patch. Thanks!
tdresser@chromium.org changed reviewers: + thakis@chromium.org - danakj@chromium.org
Dana is out, Nico, what do you think of this idea?
panicker@chromium.org changed reviewers: + panicker@chromium.org
https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... File base/message_loop/message_loop.cc (right): https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... base/message_loop/message_loop.cc:624: recent_time_ = TimeTicks::Now(); On 2016/05/30 19:07:46, tdresser wrote: > This is probably the scariest part of this patch. > > We could make sure this only ran if there was a recent time observer. > > This is only going to run once per pump of the message loop, which seems pretty > reasonable. Looks like this is undoing the optimization added in cr/4247003 for overload case (no concrete perf improvement numbers linked there though), and in addition also calling Now() even when the work queue is empty. However it seems like one could quantify the performance hit with running some benchmarks? https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... base/message_loop/message_loop.cc:627: OnUpdateRecentTime(recent_time_)); To me it seems iffy to call an override-able method here, future developers could add arbitrarily complex / expensive code here. Have you considered making this non-overridable? (In that sense it is scarier than calling Now() - which is known fixed cost)
Description was changed from ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=599609 TEST=QueueingTimeEstimatorTest ========== to ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=599609 TEST=QueueingTimeEstimatorTest ==========
panicker@chromium.org changed reviewers: + thestig@chromium.org
+thestig for comment
https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... File base/message_loop/message_loop.cc (right): https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... base/message_loop/message_loop.cc:624: recent_time_ = TimeTicks::Now(); On 2016/06/07 19:58:02, Shubhie wrote: > On 2016/05/30 19:07:46, tdresser wrote: > > This is probably the scariest part of this patch. > > > > We could make sure this only ran if there was a recent time observer. > > > > This is only going to run once per pump of the message loop, which seems > pretty > > reasonable. > > Looks like this is undoing the optimization added in cr/4247003 for overload > case (no concrete perf improvement numbers linked there though), and in addition > also calling Now() even when the work queue is empty. > However it seems like one could quantify the performance hit with running some > benchmarks? > I'm not very worried about calling now even when the work queue is empty (especially if we switch to only doing it when there's an observer). The alternative is adding a timer that's running most of the time, which would incur the same cost. You're right that this undoes that optimization though (and leaves a bit of essentially dead code). Another option would be to have the MessagePump itself compute the current time once per pump, and expose that widely. https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... base/message_loop/message_loop.cc:627: OnUpdateRecentTime(recent_time_)); On 2016/06/07 19:58:02, Shubhie wrote: > To me it seems iffy to call an override-able method here, future developers > could add arbitrarily complex / expensive code here. > Have you considered making this non-overridable? > > (In that sense it is scarier than calling Now() - which is known fixed cost) It isn't completely clear to me how you'd make this non-overrideable. What would the API look like? 
Another option would be to expose a global'ish getter, which I thought looked a bit uglier.
https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... File base/message_loop/message_loop.cc (right): https://codereview.chromium.org/1898233002/diff/60001/base/message_loop/messa... base/message_loop/message_loop.cc:627: OnUpdateRecentTime(recent_time_)); On 2016/06/07 20:23:15, tdresser wrote: > On 2016/06/07 19:58:02, Shubhie wrote: > > To me it seems iffy to call an override-able method here, future developers > > could add arbitrarily complex / expensive code here. > > Have you considered making this non-overridable? > > > > (In that sense it is scarier than calling Now() - which is known fixed cost) > > It isn't completely clear to me how you'd make this non-overrideable. What would > the API look like? Another option would be to expose a global'ish getter, which > I thought looked a bit uglier. Yeah a global-ish getter could work, although not pretty. As a strawman: RecentTimeObserver could be stateful (atomic or locking needed) and contain recent_time member with setter (called from here) and getter. Then derived classes would observe latest recent_time from base class member.
On 2016/06/07 20:03:41, Shubhie wrote: > +thestig for comment I don't have any background on the Android performance issue. Am I correct in understanding that calling clock_gettime() is slow for some Android devices, and the compositor thread is sufficiently latency sensitive that you don't want to call it. e.g. via a RepeatingTimer or what not. So instead, you are looking to hook into the MessageLoop, which has to call it anyway?
https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re... File components/scheduler/renderer/queueing_time_estimator.cc (right): https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re... components/scheduler/renderer/queueing_time_estimator.cc:17: DCHECK(task_start <= task_end); DCHECK_LE() and friends are preferred, BTW. They will give you more detailed logging when the DCHECKs fail.
Your understanding sounds correct. clock_gettime() is slow on the majority of Android devices, so we can't afford to call it at the beginning / end of every task. I could update our coarse notion of the current time at a fixed interval, but we'd then take the hit of always having a pending delayed task, and we want to avoid waking the thread up to check the current time, due to power constraints. We could avoid this with some careful bookkeeping of when we should have the timer running. Since the MessageLoop needs to check the time once per pump if there's a delayed pending task anyways, it seems like it would make sense to provide a general use coarse timer, updated once per pump of the MessageLoop. https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re... File components/scheduler/renderer/queueing_time_estimator.cc (right): https://codereview.chromium.org/1898233002/diff/60001/components/scheduler/re... components/scheduler/renderer/queueing_time_estimator.cc:17: DCHECK(task_start <= task_end); On 2016/06/08 01:05:27, Lei Zhang (Slow) wrote: > DCHECK_LE() and friends are preferred, BTW. They will give you more detailed > logging when the DCHECKs fail. Thanks, I'll clean this up if the overall approach is deemed reasonable.
On 2016/06/08 14:25:05, tdresser wrote: > Since the MessageLoop needs to check the time once per pump if there's a delayed > pending task anyways, it seems like it would make sense to provide a general use > coarse timer, updated once per pump of the MessageLoop. If DoWork() ends up running a task that makes the MessagePump quit, you may not get to DoDelayedWork(), and then RecentTimeObservers won't get called. How does this case affect things? From my somewhat far away perspective, it feels like you are depending on the implementation details of a MessageLoop. If the implementation details change in MessagePumps/Loops, it can affect what you are trying to do. It feels weird that a MessageLoop also have to act as a cheap source of time. So here's another idea: what if we cache the result of the most recent TimeTicks::Now() on a per-thread basis? Would that be good enough for your needs?
On 2016/06/09 03:26:19, Lei Zhang (Slow) wrote: > So here's another idea: what if we cache the result of the most recent > TimeTicks::Now() on a per-thread basis? Would that be good enough for your > needs? I like this idea. A TLS lookup is practically free on ARM and cheap enough for this purpose on the other platforms.
On 2016/06/09 10:36:26, Sami wrote: > On 2016/06/09 03:26:19, Lei Zhang (Slow) wrote: > > So here's another idea: what if we cache the result of the most recent > > TimeTicks::Now() on a per-thread basis? Would that be good enough for your > > needs? > > I like this idea. A TLS lookup is practically free on ARM and cheap enough for > this purpose on the other platforms. I'm not convinced this is going to work. We need to make sure that we've got a strict underestimate of the duration of each task, so we can't just use the most recent time, unless we have an error bound on the accuracy of the timer. If there haven't been any main thread tasks for 5 minutes, so we haven't had any reason to ask the current time for 5 minutes, and then we have a main thread task, we'll assume it started 5 minutes ago. My previous idea of looking at times coming from the compositor thread also won't work, as the compositor thread could be completely idle while the main thread does work. Reading once per message pump on the main thread doesn't fix this, because we'll likely only get one sample per task, which doesn't let us measure the task's duration...
On 2016/06/09 13:41:09, tdresser wrote: > On 2016/06/09 10:36:26, Sami wrote: > > On 2016/06/09 03:26:19, Lei Zhang (Slow) wrote: > > > So here's another idea: what if we cache the result of the most recent > > > TimeTicks::Now() on a per-thread basis? Would that be good enough for your > > > needs? > > > > I like this idea. A TLS lookup is practically free on ARM and cheap enough for > > this purpose on the other platforms. > > I'm not convinced this is going to work. We need to make sure that we've got a > strict underestimate of the duration of each task, so we can't just use the most > recent time, unless we have an error bound on the accuracy of the timer. If > there haven't been any main thread tasks for 5 minutes, so we haven't had any > reason to ask the current time for 5 minutes, and then we have a main thread > task, we'll assume it started 5 minutes ago. I guess the tracker could invalidate the time value when going idle. > My previous idea of looking at times coming from the compositor thread also > won't work, as the compositor thread could be completely idle while the main > thread does work. > > Reading once per message pump on the main thread doesn't fix this, because we'll > likely only get one sample per task, which doesn't let us measure the tasks > duration... Is there even a way to get more than one sample per task without blowing the budget? One way would be to assume there are no gaps between tasks and use the ending time of the previous task as the start time for the current one. Similarly the estimate could be dropped when going idle. Just to throw one more idea out there, we could find the top N slowest tasks from deep reports and just instrument those. Theory being those tasks are rare enough that timing each one wouldn't hurt.
On 2016/06/09 14:03:16, Sami (slow -- travelling) wrote: > On 2016/06/09 13:41:09, tdresser wrote: > > On 2016/06/09 10:36:26, Sami wrote: > > > On 2016/06/09 03:26:19, Lei Zhang (Slow) wrote: > > > > So here's another idea: what if we cache the result of the most recent > > > > TimeTicks::Now() on a per-thread basis? Would that be good enough for your > > > > needs? > > > > > > I like this idea. A TLS lookup is practically free on ARM and cheap enough > for > > > this purpose on the other platforms. > > > > I'm not convinced this is going to work. We need to make sure that we've got a > > strict underestimate of the duration of each task, so we can't just use the > most > > recent time, unless we have an error bound on the accuracy of the timer. If > > there haven't been any main thread tasks for 5 minutes, so we haven't had any > > reason to ask the current time for 5 minutes, and then we have a main thread > > task, we'll assume it started 5 minutes ago. > > I guess the tracker could invalidate the time value when going idle. > > > My previous idea of looking at times coming from the compositor thread also > > won't work, as the compositor thread could be completely idle while the main > > thread does work. > > > > Reading once per message pump on the main thread doesn't fix this, because > we'll > > likely only get one sample per task, which doesn't let us measure the tasks > > duration... > > Is there even a way to get more than one sample per task without blowing the > budget? One way would be to assume there are no gaps between tasks and use the > ending time of the previous task as the start time for the current one. > Similarly the estimate could be dropped when going idle. > > Just to throw one more idea out there, we could find the top N slowest tasks > from deep reports and just instrument those. Theory being those tasks are rare > enough that timing each one wouldn't hurt. 
Based on catapult benchmark data and discussion we'd like to proceed with patch set 3. https://docs.google.com/spreadsheets/d/10ocKgdNW1D_11CQY5KNfCmRTFptxqtgJikzpJ... FYI the microbenchmark data is here: https://docs.google.com/spreadsheets/d/1xSkk8bMD7yjhqFxMWsklAq6SqnMgmHKyhD34g... Any concerns or objections?
tdresser@chromium.org changed reviewers: - thakis@chromium.org, thestig@chromium.org
Alright, so, back to the initial approach. I'm trying to figure out where I should put the computation of the task durations, and how to avoid computing the time twice in the case where one task ends and another begins at approximately the same time. I can't avoid double computation in the TaskQueue itself, because I need to do this across task queues. I could have the scheduler helper make it easy to ask if there's any pending immediate tasks in any of the task queues it knows about. It doesn't look like any of the existing methods on TaskQueue do what I need. TaskQueueImpl::HasPendingImmediateWork almost does what I need, but it looks like it returns true if there are any pending tasks (which doesn't make much sense). Should I add a new method to the TaskQueue for this? It's also not clear to me where these times should be computed. Because they're only associated with toplevel tasks, it doesn't feel like we should be plumbing them through everywhere. Any thoughts?
skyostil@chromium.org changed reviewers: + alexclarke@chromium.org
+alexclarke@ for thoughts. On 2016/06/22 19:52:39, tdresser wrote: > Alright, so, back to the initial approach. > > I'm trying to figure out where I should put the computation of the task > durations, and how to avoid computing the time twice in the case where one task > ends and another begins at approximately the same time. > > I can't avoid double computation in the TaskQueue itself, because I need to do > this across task queues. > > I could have the scheduler helper make it easy to ask if there's any pending > immediate tasks in any of the task queues it knows about. It doesn't look like > any of the existing methods on TaskQueue do what I need. > TaskQueueImpl::HasPendingImmediateWork almost does what I need, but it looks > like it returns true if there are any pending tasks (which doesn't make much > sense). That's "immediate work we could do right now" as opposed to stuff that is either waiting for a delay or hasn't been transferred from the incoming queue. That seems what you would want for this case, no? > Should I add a new method to the TaskQueue for this? > > It's also not clear to me where these times should be computed. Because they're > only associated with toplevel tasks, it doesn't feel like we should be plumbing > them through everywhere. Any thoughts? I'm wondering if we should add some kind of timing tracker next to TaskQueueManager which would provide the following data: - Estimated queueing on the whole thread. - Per task timings for queues which have it enabled. This class could hook in pretty deeply to TaskQueueManager to avoid double lookups. One question is whether this interface should be a low level one that just provides the necessary data to stats computation in scheduler/renderer/ or whether it should include everything. I could be convinced either way.
> That's "immediate work we could do right now" as opposed to stuff that is either > waiting for a delay or hasn't been transferred from the incoming queue. That > seems what you would want for this case, no? That seems to disagree with the implementation (or I'm misreading it). https://cs.chromium.org/chromium/src/components/scheduler/base/task_queue_imp...

bool TaskQueueImpl::HasPendingImmediateWork() const {
  if (!main_thread_only().delayed_work_queue->Empty() ||
      !main_thread_only().immediate_work_queue->Empty()) {
    return true;
  }
  return NeedsPumping();
}

Doesn't that mean that if anything is in the delayed work queue, we'll say we have pending immediate work? > I'm wondering if we should add some kind of timing tracker next to > TaskQueueManager which would provide the following data: > > - Estimated queueing on the whole thread. > - Per task timings for queues which have it enabled. We only need top level tasks for this use case, so I don't think we want to enable this for all tasks. It might be fine to compute this near the TaskQueueManager. > This class could hook in pretty deeply to TaskQueueManager to avoid double > lookups. One question is whether this interface should be a low level one that > just provides the necessary data to stats computation in scheduler/renderer/ or > whether it should include everything. I could be convinced either way. I'll see what it looks like to hang the queueing time estimator off of the TaskQueueManager (or some intermediary).
On 2016/06/23 15:06:33, tdresser wrote: > > That's "immediate work we could do right now" as opposed to stuff that is > either > > waiting for a delay or hasn't been transferred from the incoming queue. That > > seems what you would want for this case, no? > > That seems to disagree with the implementation (or I'm misreading it). > https://cs.chromium.org/chromium/src/components/scheduler/base/task_queue_imp... > > bool TaskQueueImpl::HasPendingImmediateWork() const { > if (!main_thread_only().delayed_work_queue->Empty() || > !main_thread_only().immediate_work_queue->Empty()) { > return true; > } > > return NeedsPumping(); > } > > Doesn't that mean that if anything is in the delayed work queue, we'll say we > have pending immediate work? Right, there are two kinds of queues: work queues and incoming queues. Anything in the work queue is eligible to run immediately. For delayed work it means that their delay has expired. There are separate work queues for immediate and delayed work so that we can interleave them the same way MessageLoop does. > > I'm wondering if we should add some kind of timing tracker next to > > TaskQueueManager which would provide the following data: > > > > - Estimated queueing on the whole thread. > > - Per task timings for queues which have it enabled. > > We only need top level tasks for this use case, so I don't think we want to > enable this for all tasks. It might be fine to compute this near the > TaskQueueManager. > > > This class could hook in pretty deeply to TaskQueueManager to avoid double > > lookups. One question is whether this interface should be a low level one that > > just provides the necessary data to stats computation in scheduler/renderer/ > or > > whether it should include everything. I could be convinced either way. > > I'll see what it looks like to hang the queueing time estimator off of the > TaskQueueManager (or some intermediary). sgtm.
On 2016/06/23 15:10:40, Sami wrote: > On 2016/06/23 15:06:33, tdresser wrote: > > > That's "immediate work we could do right now" as opposed to stuff that is > > either > > > waiting for a delay or hasn't been transferred from the incoming queue. That > > > seems what you would want for this case, no? > > > > That seems to disagree with the implementation (or I'm misreading it). > > > https://cs.chromium.org/chromium/src/components/scheduler/base/task_queue_imp... > > > > bool TaskQueueImpl::HasPendingImmediateWork() const { > > if (!main_thread_only().delayed_work_queue->Empty() || > > !main_thread_only().immediate_work_queue->Empty()) { > > return true; > > } > > > > return NeedsPumping(); > > } > > > > Doesn't that mean that if anything is in the delayed work queue, we'll say we > > have pending immediate work? > > Right, there are two kinds of queues: work queues and incoming queues. Anything > in the work queue is eligible to run immediately. For delayed work it means that > their delay has expired. There are separate work queues for immediate and > delayed work so that we can interleave them the same way MessageLoop does. Ooooooh. Thanks!
The CQ bit was checked by tdresser@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: android_arm64_dbg_recipe on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/android_arm6...)
Description was changed from ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=599609 TEST=QueueingTimeEstimatorTest ========== to ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=607151 TEST=QueueingTimeEstimatorTest ==========
The CQ bit was checked by tdresser@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
I think this is ready for an actual review. PTAL! This patch builds on top of Shubhie's patch here https://codereview.chromium.org/2113893006/. I've removed the logic specific to long tasks, but kept the high level architecture.
Thanks Tim. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/queueing_time_estimator.cc (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:13: base::TimeDelta ExpectedQueueingTimeFromTask(base::TimeTicks task_start, Please add a comment explaining the maths behind how the estimator works. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:59: bool end_past_window = task_end_time > window_start_time_ + window_duration_; Maybe make end_past_window a private helper function and move start_past_window into the loop since it's only used there. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:61: while (end_past_window) { I hope this is unlikely (given |window_duration_| looks like it's 1s based on components/scheduler/base/task_time_tracker.cc) but what if it ended up being far past the window? Can we make this O(1)? https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:70: current_expected_queueing_time_ = base::TimeDelta(); Seems like we're losing data here. Does this cause significant discontinuities in the estimated queuing time? https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:224: if (do_work_depth_ == 1 && from_main_thread) { Isn't do_work_depth_ == 1 equivalent to !delegate_->IsNested() ? https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... 
File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:43: }; // scheduler nit: DISALLOW_COPY_AND_ASSIGN https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/c... File components/scheduler/child/scheduler_helper.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/c... components/scheduler/child/scheduler_helper.h:117: TaskTimeTracker task_time_tracker_; The patch might be a bit smaller if this was moved into the TQM. The SchedulerHelper is a bit of a vestigial class from earlier incarnations of the Scheduler that Sami and I have occasionally talked about getting rid of. Sami WDYT?
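For context on the "explain the maths" request above, the idea behind such an estimator can be sketched as follows. This is an illustration with hypothetical names and plain doubles rather than base::TimeTicks, not the code under review:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// If a task keeps the thread busy on [task_start, task_end], and a
// hypothetical high-priority task arrives at a uniformly random time
// within the window [window_start, window_end), then:
//  - it arrives while the task is running with probability
//    (busy time inside the window) / (window length), and
//  - when it does, it waits until task_end, so the wait shrinks
//    linearly from (task_end - busy_start) to (task_end - busy_end);
//    the mean of those two endpoints is the expected wait.
double ExpectedQueueingTimeFromTask(double task_start, double task_end,
                                    double window_start, double window_end) {
  double busy_start = std::max(task_start, window_start);
  double busy_end = std::min(task_end, window_end);
  if (busy_end <= busy_start)
    return 0.0;  // The task doesn't overlap this window at all.
  double probability = (busy_end - busy_start) / (window_end - window_start);
  double mean_wait = ((task_end - busy_start) + (task_end - busy_end)) / 2.0;
  return probability * mean_wait;
}
```

A task of duration D lying entirely inside a window of length W contributes D²/(2W), which is why a few long tasks dominate the estimate.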
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_chromium_compile_dbg_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_comp...)
Thanks for the feedback. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/queueing_time_estimator.cc (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:13: base::TimeDelta ExpectedQueueingTimeFromTask(base::TimeTicks task_start, On 2016/07/05 13:47:45, alex clarke wrote: > Please add a comment explaining the maths behind how the estimator works. Done. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:59: bool end_past_window = task_end_time > window_start_time_ + window_duration_; On 2016/07/05 13:47:45, alex clarke wrote: > Maybe make end_past_window a private helper function and move start_past_window > into the loop since it's only used there. Used helper method for both. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:61: while (end_past_window) { On 2016/07/05 13:47:45, alex clarke wrote: > I hope this is unlikely (given |window_duration_| looks like it's 1s based on > components/scheduler/base/task_time_tracker.cc) but what if it ended up being > far past the window? Can we make this O(1)? We can't make this O(1), because we want to report the expected queueing time via UMA for every window, even if it's empty. We might be able to amortize this if we were so inclined, but this should be cheap enough that it isn't an issue. The easy fix here would be to not report empty windows, and just accept that we won't get data for empty windows. This might be fairly reasonable, though it would make the data a bit harder to interpret. I'm much more worried about the case where there's a massive gap between main thread tasks than I am about the case where there's a massively long main thread task. 
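The window-advancing behaviour being discussed (one report per window, including empty ones, which is why the loop can't be O(1)) could be sketched roughly like this. Names and plain doubles are hypothetical, not the patch's code:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Advances the reporting window until |task_end_time| falls inside
// it, emitting one report per completed window. Windows with no
// tasks still get reported (often as zero), matching the "report
// every window, even if empty" choice described above.
void AdvanceWindowsTo(double task_end_time,
                      double* window_start_time,
                      double window_duration,
                      double* current_expected_queueing_time,
                      const std::function<void(double)>& report) {
  while (task_end_time > *window_start_time + window_duration) {
    report(*current_expected_queueing_time);
    *current_expected_queueing_time = 0.0;
    *window_start_time += window_duration;
  }
}
```

The loop cost is proportional to the number of windows skipped, which is why a long gap between tasks costs more iterations than a long task does.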
https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.cc:70: current_expected_queueing_time_ = base::TimeDelta(); On 2016/07/05 13:47:45, alex clarke wrote: > Seems like we're loosing data here. Does this cause significant discontinuities > in the estimated queuing time? Sorry, what data are we losing? We just reported on the current window, so the known queueing time for the next window is 0. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:224: if (do_work_depth_ == 1 && from_main_thread) { On 2016/07/05 13:47:45, alex clarke wrote: > Isn't do_work_depth_ == 1 equivalent to !delegate_->IsNested() ? Awesome, thanks. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:43: }; // scheduler On 2016/07/05 13:47:45, alex clarke wrote: > nit: DISALLOW_COPY_AND_ASSIGN Eeep - and also add "private". Done. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/c... File components/scheduler/child/scheduler_helper.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/c... components/scheduler/child/scheduler_helper.h:117: TaskTimeTracker task_time_tracker_; On 2016/07/05 13:47:45, alex clarke wrote: > The patch might be a bit smaller if this was moved into the TQM. The > SchedulerHelper is a bit of a vestigial class from earlier incarnations of the > Scheduler that Sami and I have occasionally talked about getting rid of. > > Sami WDYT? 
Shubhie said via email: "I put the TaskTimeTracker in the SchedulerHelper so that I can query it from RendererSchedulerImpl as part of the usual plumbing drill for WebWidget (example my previous CL)" I don't think the ownership matters either way, I can move that if you want, but we're still going to need a reference to it from the SchedulerHelper, as far as I know.
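The estimator math and the window-advancing loop discussed in the comments above can be sketched standalone roughly as follows. This is an illustrative sketch only, not the actual patch code: names mirror the discussion (`ExpectedQueueingTimeFromTask`, `current_expected_queueing_time_`, `window_start_time_`) but the types are plain `double` stand-ins for `base::TimeTicks`/`base::TimeDelta`, and `WindowedReporter` is a hypothetical wrapper invented here.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// A task running over [task_start, task_end] contributes to a window's
// expected queueing time as follows: if a high-priority task arrives at a
// uniformly random time t inside the window, and t falls within the running
// task, it must wait (task_end - t). Integrating that wait over the
// in-window portion of the task and dividing by the window duration gives
// the task's contribution.
double ExpectedQueueingTimeFromTask(double task_start,
                                    double task_end,
                                    double window_start,
                                    double window_duration) {
  const double begin = std::max(task_start, window_start);
  const double end = std::min(task_end, window_start + window_duration);
  if (begin >= end)
    return 0.0;  // The task does not overlap the window.
  // Integral of (task_end - t) dt over [begin, end], over window_duration.
  const double longest_wait = task_end - begin;
  const double shortest_wait = task_end - end;
  return (longest_wait * longest_wait - shortest_wait * shortest_wait) /
         (2.0 * window_duration);
}

// Sketch of the window-advancing loop discussed above: every fully elapsed
// window is flushed, so empty windows still produce a report (which is why
// the loop cannot be O(1)), and the accumulator resets to zero afterwards
// because the just-reported contribution is not carried into the next
// window.
class WindowedReporter {
 public:
  WindowedReporter(double window_start, double window_duration)
      : window_start_(window_start), window_duration_(window_duration) {}

  void AddToCurrentWindow(double eqt) {
    current_expected_queueing_time_ += eqt;
  }

  // Flushes a report for every window ending at or before |now|.
  void AdvanceTo(double now, std::vector<double>* reports) {
    while (now > window_start_ + window_duration_) {
      reports->push_back(current_expected_queueing_time_);
      window_start_ += window_duration_;
      current_expected_queueing_time_ = 0.0;
    }
  }

 private:
  double window_start_;
  double window_duration_;
  double current_expected_queueing_time_ = 0.0;
};
```

For example, a task that occupies an entire one-second window yields an expected wait of half a second, and advancing the reporter across empty windows still emits a zero report per window.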
Neat! Left a few comments. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:181: do_work_depth_++; Could we use a base::AutoReset for this? (might not work in the case where the TQM is deleted by a task.) https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:201: if (do_work_depth_ == 1 && from_main_thread) Why the check for |from_main_thread|? That flag says which thread this task was posted from -- all tasks run on the main thread. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_queue_manager.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.h:65: TaskTimeTracker* task_time_tracker); Could you make this optional (e.g., SetTaskTimeTracker)? That would let us avoid doing unneeded measurements (e.g., on worker threads). https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:18: class TickClock; Unused? https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:23: class SCHEDULER_EXPORT TaskTimeTracker : public QueueingTimeEstimator::Client { I'm wondering if this should just be a virtual interface with just ReportTaskTime? That way the RendererScheduler could implement that interface, pipe the data into a QueueingTimeEstimator member and use the result as it wanted. 
https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:41: TaskTimeObserver* task_time_observer_; // Not Owned Did you mean to make these private? This observer also seems unused. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:47: } This brace seems misplaced.
https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:23: class SCHEDULER_EXPORT TaskTimeTracker : public QueueingTimeEstimator::Client { On 2016/07/05 14:40:39, Sami wrote: > I'm wondering if this should just be a virtual interface with just > ReportTaskTime? That way the RendererScheduler could implement that interface, > pipe the data into a QueueingTimeEstimator member and use the result as it > wanted. +1 we'd like to use the timings for other purposes (e.g. throttling). I don't mind if the TQM or the Scheduler owns the TaskTimeTracker (the latter might be nice since we don't need it for workers). You might consider adding a setter method on the TQM (plumbed through the helper). https://codereview.chromium.org/1898233002/diff/160001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/160001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:200: task_start_time = real_time_domain()->Now(); UpdateWorkQueues can also call real_time_domain()->Now(). It might be nice to refactor it to remove the duplication. It's probably cleanest to pass task_start_time into UpdateWorkQueues.
The CQ bit was checked by tdresser@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Partway through comments. Passing LazyNow from TaskQueueManager::DoWork to TimeDomain::UpdateWorkQueues causes TaskQueueManagerTest.TimeDomainMigration and TaskQueueManagerTest.TimeDomainsAreIndependant to fail. I said I'd let Alex take a look at the code I've got currently, which fails those tests. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:181: do_work_depth_++; On 2016/07/05 14:40:38, Sami wrote: > Could we use a base::AutoReset for this? (might not work in the case where the > TQM is deleted by a task.) Alex pointed out this isn't needed. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:18: class TickClock; On 2016/07/05 14:40:39, Sami wrote: > Unused? Done. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:41: TaskTimeObserver* task_time_observer_; // Not Owned On 2016/07/05 14:40:39, Sami wrote: > Did you mean to make these private? This observer also seems unused. Yeah, they're private in the next patch. Removed observer. https://codereview.chromium.org/1898233002/diff/160001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/160001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:200: task_start_time = real_time_domain()->Now(); On 2016/07/05 14:59:35, alex clarke wrote: > UpdateWorkQueues can also call real_time_domain()->Now(). It might be nice to > refactor it to remove the duplication. 
It's probably cleanest to pass > task_start_time in to UpdateWorkQueues. Done.
The CQ bit was checked by tdresser@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:201: if (do_work_depth_ == 1 && from_main_thread) On 2016/07/05 14:40:38, Sami wrote: > Why the check for |from_main_thread|? That flag says which thread this task was > posted from -- all tasks run on the main thread. Done. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_queue_manager.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_queue_manager.h:65: TaskTimeTracker* task_time_tracker); On 2016/07/05 14:40:39, Sami wrote: > Could you make this optional (e.g., SetTaskTimeTracker). That would let use > avoid doing unneeded measurements (e.g., on worker threads). Done. https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:23: class SCHEDULER_EXPORT TaskTimeTracker : public QueueingTimeEstimator::Client { On 2016/07/05 14:59:35, alex clarke wrote: > On 2016/07/05 14:40:39, Sami wrote: > > I'm wondering if this should just be a virtual interface with just > > ReportTaskTime? That way the RendererScheduler could implement that interface, > > pipe the data into a QueueingTimeEstimator member and use the result as it > > wanted. > > +1 we'd like to use the timings for other purposes (e.g. throttling). > > I don't mind if the TQM or the Scheduler owns the TaskTimeTracker (the latter > might be nice since we don't need it for workers). You might consider adding a > setter method on the TQM (plumbed through the helper). Done. 
https://codereview.chromium.org/1898233002/diff/140001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:47: } On 2016/07/05 14:40:39, Sami wrote: > This brace seems misplaced. Done.
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
Looking much better now! https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:233: base::TimeTicks task_end_time = real_time_domain()->Now(); Can you do this? if (!delegate_->IsNested() && task_time_tracker_) { // Only report top level task durations. base::TimeTicks task_end_time = lazy_now.Now(); task_time_tracker_->ReportTaskTime(task_start_time, task_end_time); task_start_time = task_end_time; } https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_queue_manager.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager.h:86: void SetTaskTimeTracker(TaskTimeTracker* task_time_tracker) { Please add a brief comment explaining what happens if a TaskTimeTracker is added. Please also document that it must outlive the TQM (or be removed by calling SetTaskTimeTracker(nullptr)). https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_queue_manager_perftest.cc (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager_perftest.cc:117: TaskTimeTracker task_time_tracker_; Can you add a TODO(alexclarke): parameterize so we can measure with and without this. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:21: class SCHEDULER_EXPORT Observer { Nit: DISALLOW_COPY_AND_ASSIGN(Observer) https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... 
components/scheduler/base/task_time_tracker.h:28: TaskTimeTracker(Observer* observer); nit: explicit https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/test_count_uses_time_source.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/test_count_uses_time_source.h:13: class TestCountUsesTimeSource : public base::TickClock { Neat :)
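The TestCountUsesTimeSource mentioned above — a test clock that counts how many times it is sampled, so a test can verify how often DoWork queries Now() — can be sketched standalone like this. The real class subclasses base::TickClock; `CountingTimeSource` below is a hypothetical stand-in with a plain `double` in place of base::TimeTicks.

```cpp
#include <cassert>

// Hypothetical stand-in for a "count the Now() calls" test clock.
class CountingTimeSource {
 public:
  // Returns a fake monotonic time and records that it was sampled.
  double NowTicks() {
    ++now_calls_count_;
    return static_cast<double>(now_calls_count_);
  }

  int now_calls_count() const { return now_calls_count_; }

 private:
  int now_calls_count_ = 0;
};
```

A test injects this clock, runs some tasks, and then asserts on `now_calls_count()` to catch regressions that add extra time queries per task.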
Looks great. Thanks for doing the LazyNow optimization. One more question about the tracker below. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_queue_manager.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager.h:235: // To reduce locking overhead we track pending calls to DoWork seperately for typo: separately :) https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:19: class SCHEDULER_EXPORT TaskTimeTracker { Is there a reason to keep TaskTimeTracker and TaskTimeTracker::Observer separate? I was thinking we could merge them into a pure virtual TaskTimeObserver that only has ReportTaskTime in it. WDYT?
Thanks for all the feedback! I think Shubhie was thinking that the TaskTimeTracker would be the start of the PCU, but that isn't clear at this point, and until we've got a compelling reason to have TaskTimeTracker be separate from the observer, it makes sense to just have a TaskTimeTracker interface. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_queue_manager.cc (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager.cc:233: base::TimeTicks task_end_time = real_time_domain()->Now(); On 2016/07/06 09:51:39, alex clarke wrote: > Can you do this? > > if (!delegate_->IsNested() && task_time_tracker_) { > // Only report top level task durations. > base::TimeTicks task_end_time = lazy_now.Now(); > task_time_tracker_->ReportTaskTime(task_start_time, task_end_time); > task_start_time = task_end_time > } Done. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_queue_manager.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager.h:86: void SetTaskTimeTracker(TaskTimeTracker* task_time_tracker) { On 2016/07/06 09:51:39, alex clarke wrote: > Please add a brief comment explaining what happens if a TaskTimeTracker is > added. Please also document that it must outlive the TQM (or be removed by > calling SetTaskTimeTracker(nullptr). Done. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager.h:235: // To reduce locking overhead we track pending calls to DoWork seperately for On 2016/07/06 10:08:29, Sami wrote: > typo: separately :) Done. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... 
File components/scheduler/base/task_queue_manager_perftest.cc (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_queue_manager_perftest.cc:117: TaskTimeTracker task_time_tracker_; On 2016/07/06 09:51:39, alex clarke wrote: > Can you add a TODO(alexclarke): parameterize so we can measure with and without > this Done. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:19: class SCHEDULER_EXPORT TaskTimeTracker { On 2016/07/06 10:08:29, Sami wrote: > Is there a reason to keep TaskTimeTracker and TaskTimeTracker::Observer > separate? I was thinking we could merge them into a pure virtual > TaskTimeObserver that only has ReportTaskTime in it. WDYT? Done. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:21: class SCHEDULER_EXPORT Observer { On 2016/07/06 09:51:39, alex clarke wrote: > Nit: DISALLOW_COPY_AND_ASSIGN(Observer) No longer needed. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:28: TaskTimeTracker(Observer* observer); On 2016/07/06 09:51:39, alex clarke wrote: > nit: explicit No longer needed. https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... File components/scheduler/base/test_count_uses_time_source.h (right): https://codereview.chromium.org/1898233002/diff/200001/components/scheduler/b... components/scheduler/base/test_count_uses_time_source.h:13: class TestCountUsesTimeSource : public base::TickClock { On 2016/07/06 09:51:39, alex clarke wrote: > Neat :) Acknowledged.
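The merged design settled on above — a single pure-virtual tracker interface with only ReportTaskTime, which the renderer scheduler can implement to pipe timings into a QueueingTimeEstimator or a throttler — might look roughly like this. This is a sketch, not the actual patch: the real interface uses base::TimeTicks and SCHEDULER_EXPORT, and `AccumulatingTracker` is an invented example implementer.

```cpp
#include <cassert>

using TimeTicks = double;  // Placeholder for base::TimeTicks in this sketch.

class TaskTimeTracker {
 public:
  // Virtual destructor so deleting through a TaskTimeTracker* is safe.
  virtual ~TaskTimeTracker() {}

  // Called for each completed top-level task.
  virtual void ReportTaskTime(TimeTicks start_time, TimeTicks end_time) = 0;
};

// Example implementer, analogous to how a scheduler could consume the
// timings; here it just accumulates total task duration.
class AccumulatingTracker : public TaskTimeTracker {
 public:
  void ReportTaskTime(TimeTicks start_time, TimeTicks end_time) override {
    total_task_time_ += end_time - start_time;
  }

  TimeTicks total_task_time() const { return total_task_time_; }

 private:
  TimeTicks total_task_time_ = 0.0;
};
```

Keeping the interface pure-virtual is what allows the TQM to hold only a raw, optional pointer (set via a SetTaskTimeTracker-style setter) without caring who implements it.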
lgtm https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... File components/scheduler/base/queueing_time_estimator.h (right): https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.h:28: }; nit: DISALLOW_COPY_AND_ASSIGN(Client);
lgtm if Alex is happy. https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... File components/scheduler/base/test_task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... components/scheduler/base/test_task_time_tracker.h:17: base::TimeTicks endTime) override{}; nit: no trailing ';' (this will also fix git cl format's formatting)
tdresser@chromium.org changed reviewers: + isherman@chromium.org, sdefresne@chromium.org
+isherman@ for histograms.xml +sdefresne@ for components/BUILD.gn (test change) https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... File components/scheduler/base/queueing_time_estimator.h (right): https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... components/scheduler/base/queueing_time_estimator.h:28: }; On 2016/07/06 13:51:32, alex clarke wrote: > nit: DISALLOW_COPY_AND_ASSIGN(Client); Done. https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... File components/scheduler/base/test_task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/220001/components/scheduler/b... components/scheduler/base/test_task_time_tracker.h:17: base::TimeTicks endTime) override{}; On 2016/07/06 13:51:33, Sami wrote: > nit: no trailing ';' (this will also fix git cl format's formatting) Done.
components/BUILD.gn lgtm (though I made some comments on the rest of the code that I skimmed through). https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:17: }; I think this class needs a virtual destructor too (otherwise you'll call the wrong destructor if you delete a TaskTimeTracker*). https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/r... File components/scheduler/renderer/idle_time_estimator_unittest.cc (right): https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/r... components/scheduler/renderer/idle_time_estimator_unittest.cc:47: manager_ = base::WrapUnique(new TaskQueueManager( nit: would be better to use base::MakeUnique instead manager_ = base::MakeUnique<TaskQueueManager>( main_task_runner_, "test.scheduler", "test.scheduler", "test.scheduler.debug");
https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/b... File components/scheduler/base/task_time_tracker.h (right): https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/b... components/scheduler/base/task_time_tracker.h:17: }; On 2016/07/06 14:51:32, sdefresne wrote: > I think this class needs a virtual destructor too (otherwise you'll call wrong > destructor if you delete a TaskTimeTracker*). Done. https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/r... File components/scheduler/renderer/idle_time_estimator_unittest.cc (right): https://codereview.chromium.org/1898233002/diff/240001/components/scheduler/r... components/scheduler/renderer/idle_time_estimator_unittest.cc:47: manager_ = base::WrapUnique(new TaskQueueManager( On 2016/07/06 14:51:32, sdefresne wrote: > nit: would be better to use base::MakeUnique instead > > manager_ = base::MakeUnique<TaskQueueManager>( > main_task_runner_, "test.scheduler", "test.scheduler", > "test.scheduler.debug"); Done.
LGTM % request for extra clarification in the histograms.xml description: https://codereview.chromium.org/1898233002/diff/260001/tools/metrics/histogra... File tools/metrics/histograms/histograms.xml (right): https://codereview.chromium.org/1898233002/diff/260001/tools/metrics/histogra... tools/metrics/histograms/histograms.xml:45780: + priority tasks posted to the RendererScheduler. Recorded for each N ms what is N?
https://codereview.chromium.org/1898233002/diff/260001/tools/metrics/histogra... File tools/metrics/histograms/histograms.xml (right): https://codereview.chromium.org/1898233002/diff/260001/tools/metrics/histogra... tools/metrics/histograms/histograms.xml:45780: + priority tasks posted to the RendererScheduler. Recorded for each N ms On 2016/07/06 22:53:34, Ilya Sherman wrote: > what is N? 1000, done.
The CQ bit was checked by tdresser@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from skyostil@chromium.org, alexclarke@chromium.org, sdefresne@chromium.org, isherman@chromium.org Link to the patchset: https://codereview.chromium.org/1898233002/#ps280001 (title: "State window size in histograms.xml.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: win_chromium_compile_dbg_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_comp...)
The CQ bit was checked by tdresser@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: win_clang on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_clang/builds/...)
The CQ bit was checked by tdresser@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from alexclarke@chromium.org, isherman@chromium.org, sdefresne@chromium.org, skyostil@chromium.org Link to the patchset: https://codereview.chromium.org/1898233002/#ps300001 (title: "Remove export. Fix windows.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_android_rel_ng on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/linux_androi...)
The CQ bit was checked by tdresser@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Message was sent while issue was closed.
Description was changed from ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=607151 TEST=QueueingTimeEstimatorTest ========== to ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=607151 TEST=QueueingTimeEstimatorTest ==========
Message was sent while issue was closed.
Committed patchset #16 (id:300001)
Message was sent while issue was closed.
CQ bit was unchecked.
Message was sent while issue was closed.
Description was changed from ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=607151 TEST=QueueingTimeEstimatorTest ========== to ========== Report expected task queueing time via UMA Reported to: RendererScheduler.ExpectedTaskQueueingDuration This is recorded for each 5 second window. This is the expected queueing duration for a high priority task. BUG=607151 TEST=QueueingTimeEstimatorTest Committed: https://crrev.com/06f261ea4b8db484ffd50d96676dd34e203fa626 Cr-Commit-Position: refs/heads/master@{#404185} ==========
Message was sent while issue was closed.
Patchset 16 (id:??) landed as https://crrev.com/06f261ea4b8db484ffd50d96676dd34e203fa626 Cr-Commit-Position: refs/heads/master@{#404185} |