Created: 5 years, 11 months ago by jdduke (slow)
Modified: 5 years, 10 months ago
CC: chromium-reviews, mkwst+moarreviews-renderer_chromium.org, darin-cc_chromium.org, jam, scheduler-bugs_chromium.org, mlamouri+watch-content_chromium.org, mmenke
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public
Description
Throttle resource message requests during user interaction
Resource message requests can be relatively expensive, particularly in
the work they induce on the browser IO thread. Currently, there is no bound
on the rate at which such requests are dispatched from the renderer.
This leads to situations where the browser IO thread is flooded with
requests, potentially causing scroll jank and other undesirable
stalls in the browser pipeline.
Introduce a ResourceMessageThrottler which intercepts and defers a given
resource message stream, depending on the state of the RendererScheduler.
When the RendererScheduler indicates that high priority work is
imminent/likely, requests will be throttled according to a configurable
dispatch rate.
Hook this throttling mechanism up to the ResourceDispatcher, limiting
the number of resource message requests/second during user interaction
to 180 (3 per frame at 60 fps) on Android, and 480 on desktop.
See goo.gl/H42AgQ for more design details.
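For readers without the design doc, a minimal sketch of the throttling decision described above. Class and method names here are illustrative stand-ins, not the landed content/renderer code; only IsHighPriorityWorkAnticipated reflects the interface actually added in this CL.

```cpp
// Illustrative sketch only: defer resource requests while the scheduler
// anticipates high-priority (user-interaction) work.
#include <deque>
#include <memory>
#include <utility>

struct Message {};  // Stand-in for IPC::Message.

class RendererSchedulerQuery {
 public:
  virtual ~RendererSchedulerQuery() = default;
  // True when user interaction makes high-priority work imminent/likely.
  virtual bool IsHighPriorityWorkAnticipated() = 0;
};

class ThrottlerSketch {
 public:
  explicit ThrottlerSketch(RendererSchedulerQuery* scheduler)
      : scheduler_(scheduler) {}

  // Returns true if |msg| was forwarded immediately, false if it was queued
  // for a later, rate-limited flush (e.g. 3 requests per frame at 60 fps).
  bool Send(std::unique_ptr<Message> msg) {
    if (!scheduler_->IsHighPriorityWorkAnticipated() && queued_.empty()) {
      Forward(std::move(msg));
      return true;
    }
    queued_.push_back(std::move(msg));
    ScheduleFlush();  // A periodic flush drains the queue at the capped rate.
    return false;
  }

 private:
  void Forward(std::unique_ptr<Message> msg) { /* hand to the real sender */ }
  void ScheduleFlush() { /* start the flush timer if not already running */ }

  RendererSchedulerQuery* scheduler_;
  std::deque<std::unique_ptr<Message>> queued_;
};
```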
Note: This change originally landed in
https://crrev.com/acfb4199abf841a1577c3968579c43b0232a53b7, but was
reverted due to issues with enforced ThreadChecker validation in
RendererSchedulerImpl. The ThreadChecker validation fix has been split
into a separate patch.
BUG=440037, 402136
Committed: https://crrev.com/94ae1f37922badae0012395018c23342f33d13b7
Cr-Commit-Position: refs/heads/master@{#314767}
Patch Set 1
Patch Set 2 : Tweak
Total comments: 19
Patch Set 3 : Partial code review
Patch Set 4 : Only throttle requests
Patch Set 5 : Cleanup
Total comments: 3
Patch Set 6 : Rebase
Total comments: 6
Patch Set 7 : Code review
Total comments: 15
Patch Set 8 : Code review
Patch Set 9 : Stop being clever, only use the following bits 1) is sync IPC, 2) is request IPC
Total comments: 2
Patch Set 10 : Rebase
Total comments: 16
Patch Set 11 : Code review
Total comments: 2
Patch Set 12 : Flush throttled on sync message
Total comments: 2
Patch Set 13 : Comment fix
Patch Set 14 : Rebase
Patch Set 15 : Pull out ThreadChecker fix

Messages
Total messages: 58 (14 generated)
alexclarke@chromium.org changed reviewers: + alexclarke@chromium.org
https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... File content/renderer/scheduler/throttled_message_sender.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:61: if (!scheduler_->ShouldYieldForHighPriorityWork()) I think you'd get better results if the scheduler exposed it's policy.
picksi@chromium.org changed reviewers: + picksi@chromium.org
https://codereview.chromium.org/847883002/diff/20001/content/renderer/render_... File content/renderer/render_thread_impl.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/render_... content/renderer/render_thread_impl.cc:199: const int kMaxResourceMessageRequestsPerSecondWhenThrottled = 60 * 4; Can you add a comment to explain how you got to this value? https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... File content/renderer/scheduler/throttled_message_sender.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:14: const int kReasonableFlushesPerSecond = 60; Is this 60 related to the 60*4 in kMaxResourceMessageRequestsPerSecondWhenThrottled? Can you add a comment as to how you arrived at this value? https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:57: throttled_messages_.push_back(msg); cunning! https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:85: sent_messages_since_last_flush_ = 0; Can you confirm that there are no threading issues here? i.e. additional messages won't get added while the throttled messages are being sent? https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:96: while (!throttled_messages_.empty() && Suggestion for discussion: Rather than checking for empty() & max_sent_messages every time you could create a function that flushes(n) messages. This could be called here and also in FlushAll [with queue.size()]. It would probably make FlushAll a bit slower (with all the queue manipulation) but would give a single code location for sending flushed messages.
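A sketch of the flushes(n) refactor picksi suggests above (names are assumptions): one helper sends up to n queued messages, shared by the rate-limited flush and FlushAll, so there is a single code path for dispatching deferred messages.

```cpp
#include <cstddef>
#include <deque>
#include <memory>
#include <utility>

// Sends up to |n| messages from the front of |queue| via |send|.
template <typename Message, typename SendFn>
void FlushUpTo(std::deque<std::unique_ptr<Message>>* queue, size_t n,
               SendFn send) {
  while (!queue->empty() && n > 0) {
    send(std::move(queue->front()));
    queue->pop_front();
    --n;
  }
}

// Throttled flush: FlushUpTo(&queue, max_sent_messages_per_flush, sender);
// FlushAll:        FlushUpTo(&queue, queue.size(), sender);
```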
skyostil@chromium.org changed reviewers: + skyostil@chromium.org
Interesting stuff :) https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... File content/renderer/scheduler/throttled_message_sender.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:30: flush_timer_(FROM_HERE, I wonder if we should build some kind of use-idle-time-if-possible-but-fall-back-to-posting-after-a-timeout abstraction to avoid adding one more timer that fires at some random time into the frame? https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:61: if (!scheduler_->ShouldYieldForHighPriorityWork()) On 2015/01/13 10:38:02, alexclarke1 wrote: > I think you'd get better results if the scheduler exposed it's policy. What advantage were you thinking of? This feels like a generic class that shouldn't be tied to any specific policy. https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:100: SendViaProxiedSender(msg); Should we yield here too (after sending at least one message to avoid starving) if the scheduler wants us to?
On 2015/01/13 14:47:35, Sami wrote: > https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... > content/renderer/scheduler/throttled_message_sender.cc:30: > flush_timer_(FROM_HERE, > I wonder if we should build some kind of > use-idle-time-if-possible-but-fall-back-to-posting-after-a-timeout abstraction > to avoid adding one more timer that fires at some random time into the frame? DoThisThingButDontStepOnTheCompositorsToes would also be an acceptable class name for this.
On 2015/01/13 14:49:53, Sami wrote: > On 2015/01/13 14:47:35, Sami wrote: > > > https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... > > content/renderer/scheduler/throttled_message_sender.cc:30: > > flush_timer_(FROM_HERE, > > I wonder if we should build some kind of > > use-idle-time-if-possible-but-fall-back-to-posting-after-a-timeout abstraction > > to avoid adding one more timer that fires at some random time into the frame? > > DoThisThingButDontStepOnTheCompositorsToes would also be an acceptable class > name for this. Alex reminded me that this was actually about avoiding pushing too much stuff to the IO thread and only indirectly about keeping the main thread jank free (by making sure other IPC isn't delayed). Feel free to ignore most of my comments about so far :) I wonder if we could make this tune automatically to the IO thread throughput instead of coming up with magic constants?
rmcilroy@chromium.org changed reviewers: + rmcilroy@chromium.org
https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... File content/renderer/scheduler/throttled_message_sender.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:30: flush_timer_(FROM_HERE, On 2015/01/13 14:47:35, Sami wrote: > I wonder if we should build some kind of > use-idle-time-if-possible-but-fall-back-to-posting-after-a-timeout abstraction > to avoid adding one more timer that fires at some random time into the frame? I like this idea (if not the name... ;). How about GuaranteedIdleTask, which takes a delay by which it should run, and is implemented as a delayed task which races with an idle task, and the first one to get executed cancels the other? One issue with this for this particular situation is that it might cause FlushThrottled to be called much earlier than it's delay, thus causing more messages to be sent than would otherwise while the sender is throttled. https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:61: if (!scheduler_->ShouldYieldForHighPriorityWork()) On 2015/01/13 14:47:35, Sami wrote: > On 2015/01/13 10:38:02, alexclarke1 wrote: > > I think you'd get better results if the scheduler exposed it's policy. > > What advantage were you thinking of? This feels like a generic class that > shouldn't be tied to any specific policy. I agree with Sami here - I think this shouldn't be tied to a specific policy (and if necessary the scheduler should provide a more specific signal, but I think ShouldYieldForHighPriorityWork is fine unless we have strong evidence that something else is better).
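A rough sketch of the GuaranteedIdleTask idea rmcilroy describes (not an existing Chromium API; the posting helpers below are declared stand-ins): the same callback is posted both as an idle task and as a delayed task, and a shared flag lets whichever runs first make the other a no-op.

```cpp
#include <atomic>
#include <functional>
#include <memory>

// Stand-ins for posting to an idle task queue / posting a delayed task.
void PostIdleTask(std::function<void()> task);
void PostDelayedTaskMs(std::function<void()> task, int delay_ms);

// Runs |task| during idle time if possible, but no later than |delay_ms|.
void PostGuaranteedIdleTask(std::function<void()> task, int delay_ms) {
  auto ran = std::make_shared<std::atomic<bool>>(false);
  auto run_once = [ran, task] {
    if (!ran->exchange(true))  // First copy to run wins; the other is a no-op.
      task();
  };
  PostIdleTask(run_once);
  PostDelayedTaskMs(run_once, delay_ms);
}
```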
https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... File content/renderer/scheduler/throttled_message_sender.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:61: if (!scheduler_->ShouldYieldForHighPriorityWork()) On 2015/01/13 15:24:37, rmcilroy wrote: > On 2015/01/13 14:47:35, Sami wrote: > > On 2015/01/13 10:38:02, alexclarke1 wrote: > > > I think you'd get better results if the scheduler exposed it's policy. > > > > What advantage were you thinking of? This feels like a generic class that > > shouldn't be tied to any specific policy. > > I agree with Sami here - I think this shouldn't be tied to a specific policy > (and if necessary the scheduler should provide a more specific signal, but I > think ShouldYieldForHighPriorityWork is fine unless we have strong evidence that > something else is better). As far as I understand it, the intent here is to throttle IO resource loads which can trigger expensive processing on the IO thread (this is the root bug). If we get a sudden glut of them a BeginFrame or Input message can get stuck behind 20+ resource loads leading to janks. I contend that the current scheduler policy is a more useful signal than ShouldYieldForHighPriorityWork, because ShouldYieldForHighPriorityWork will return false most of the time even in compositor mode.
Now I've had a chance to think about this patch, here are my thoughts. I think this is worth trying but I suspect there will still be problems in the following scenarios: 1. There's a large number of pending resource loads from before the user started touching the device. 2. (theoretical) Pending resource loads were sent out suitably staggered but a bunch return at the same time.
Thanks all for the feedback, I suppose I should have put a WIP or NOT FOR REVIEW in the title as this was something I idly whipped up yesterday afternoon :) My other question is whether we should restrict throttling to Android? If not, we should probably increase the rate in some fashion. https://codereview.chromium.org/847883002/diff/20001/content/renderer/render_... File content/renderer/render_thread_impl.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/render_... content/renderer/render_thread_impl.cc:199: const int kMaxResourceMessageRequestsPerSecondWhenThrottled = 60 * 4; On 2015/01/13 11:24:44, picksi1 wrote: > Can you add a comment to explain how you got to this value? Yup, done. https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... File content/renderer/scheduler/throttled_message_sender.cc (right): https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:14: const int kReasonableFlushesPerSecond = 60; On 2015/01/13 11:24:44, picksi1 wrote: > Is this 60 related to the 60*4 in > kMaxResourceMessageRequestsPerSecondWhenThrottled? Can you add a comment as to > how you arrived at this value? It's not really related, but I'll add a comment. https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:30: flush_timer_(FROM_HERE, On 2015/01/13 15:24:37, rmcilroy wrote: > On 2015/01/13 14:47:35, Sami wrote: > > I wonder if we should build some kind of > > use-idle-time-if-possible-but-fall-back-to-posting-after-a-timeout abstraction > > to avoid adding one more timer that fires at some random time into the frame? > > I like this idea (if not the name... ;). How about GuaranteedIdleTask, which > takes a delay by which it should run, and is implemented as a delayed task which > races with an idle task, and the first one to get executed cancels the other? > > One issue with this for this particular situation is that it might cause > FlushThrottled to be called much earlier than it's delay, thus causing more > messages to be sent than would otherwise while the sender is throttled. All solid options, I'm open to whatever you all think best. The randomness is unfortunate, but at least we know that the dispatch time should be fairly cheap (unless it's a sync IPC, in which case there's not much we can do anyhow). https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:61: if (!scheduler_->ShouldYieldForHighPriorityWork()) On 2015/01/13 15:46:55, alexclarke1 wrote: > On 2015/01/13 15:24:37, rmcilroy wrote: > > On 2015/01/13 14:47:35, Sami wrote: > > > On 2015/01/13 10:38:02, alexclarke1 wrote: > > > > I think you'd get better results if the scheduler exposed it's policy. > > > > > > What advantage were you thinking of? This feels like a generic class that > > > shouldn't be tied to any specific policy. > > > > I agree with Sami here - I think this shouldn't be tied to a specific policy > > (and if necessary the scheduler should provide a more specific signal, but I > > think ShouldYieldForHighPriorityWork is fine unless we have strong evidence > that > > something else is better). > > As far as I understand it, the intent here is to throttle IO resource loads > which can trigger expensive > processing on the IO thread (this is the root bug). 
If we get a sudden glut of > them a BeginFrame or Input message can get stuck behind 20+ resource loads > leading to janks. > > I contend that the current scheduler policy is a more useful signal than > ShouldYieldForHighPriorityWork, because ShouldYieldForHighPriorityWork will > return false most of the time even in compositor mode. I agree with Alex here, and was actually going to ask if we could expose an additional bit from the RendererScheduler. I'm not sure we need to expose the current policy, but perhaps something indicating that a high-priority/user action is taking place would be useful. https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:85: sent_messages_since_last_flush_ = 0; On 2015/01/13 11:24:44, picksi1 wrote: > Can you confirm that there are no threading issues here? i.e. additional > messages won't get added while the throttled messages are being sent? Yeah, I've got the |DCHECK(!is_sending_via_proxied)| check in the |Send()| function. I originally had this function swap the queue into a local copy, I can go back to that if you think it an improvement. https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:96: while (!throttled_messages_.empty() && On 2015/01/13 11:24:44, picksi1 wrote: > Suggestion for discussion: Rather than checking for empty() & max_sent_messages > every time you could create a function that flushes(n) messages. This could be > called here and also in FlushAll [with queue.size()]. It would probably make > FlushAll a bit slower (with all the queue manipulation) but would give a single > code location for sending flushed messages. That's a solid option, though it would prevent the "swap to local queue then dispatch" proposal for FlushAll. Any preference? https://codereview.chromium.org/847883002/diff/20001/content/renderer/schedul... content/renderer/scheduler/throttled_message_sender.cc:100: SendViaProxiedSender(msg); On 2015/01/13 14:47:35, Sami wrote: > Should we yield here too (after sending at least one message to avoid starving) > if the scheduler wants us to? Hmm, the |Send| itself should be fairly cheap (max 30-40us/message), but that's an option.
On 2015/01/13 16:08:31, alexclarke1 wrote: > Now I've had a chance to think about this patch, here are my thoughts. I think > this is worth trying but I suspect there will still be problems in the following > scenarios: > > 1. There's a large number of pending resource loads from before the user started > touching the device. Indeed, particularly if they're dispatched right as the user touches down. I'm not sure there's a lot we can do about that scenario outside of always throttling but at variable rates. I guess we could theoretically reduce the risk by executing the logic in the browser IO thread, where it might be informed of the user gesture slightly sooner than the main thread, but that's still racy. > 2. (theoretical) Pending resource loads where sent out suitably staggered but a > bunch return at the same time. Yup, this doesn't address that at all (neither did I intend it to), we'll need a separate solution (assuming the existing scheduler dispatch is insufficient).
Patchset #5 (id:80001) has been deleted
Patchset #5 (id:100001) has been deleted
jdduke@chromium.org changed reviewers: + mmenke@chromium.org
mmenke@: Could you take a look at content/renderer/resource_dispatch_throttler.* (either high or low level)? This is an initial pass at the most recent design we discussed, namely, only throttling new requests, and updating throttled requests when associated messages are received (priority change, cancellation, downloaded file release). https://codereview.chromium.org/847883002/diff/120001/content/renderer/resour... File content/renderer/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/120001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:19: case ResourceHostMsg_RequestResource::ID: { This is a little ugly... An alternative might be having ResourceDispatcher define a simple interface for message dispatch where the request id is supplied along with the message... https://codereview.chromium.org/847883002/diff/120001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:42: ResourceHostMsg_DidChangePriority::Param priority_params; I originally had some logic that just grouped these messages together, but went ahead and did the live update to simplify the queue. https://codereview.chromium.org/847883002/diff/120001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:70: if (!updated_request.download_to_file) I'm not even sure if this is correct... If we have a throttled request and we get this message to release the downloaded file, could we just drop the request entirely?
On 2015/01/15 04:51:54, jdduke wrote: > mmenke@: Could you take a look at content/renderer/resource_dispatch_throttler.* > (either high or low level)? This is an initial pass at the most recent design we > discussed, namely, only throttling new requests, and updating throttled requests > when associated messages are received (priority change, cancellation, downloaded > file release). Happy to take a look, but as a warning: I'm pretty heavily loaded with reviews, so may not get to this until Friday or even Tuesday (Monday being a holiday).
On 2015/01/13 16:08:31, alexclarke1 wrote: > Now I've had a chance to think about this patch, here are my thoughts. I think > this is worth trying but I suspect there will still be problems in the following > scenarios: > > 1. There's a large number of pending resource loads from before the user started > touching the device. > 2. (theoretical) Pending resource loads where sent out suitably staggered but a > bunch return at the same time. Sami and I discussed (2) briefly. One idea is to implement similar throttling logic for resource responses on the main thread. This logic could be plugged into the ResourceSchedulingFilter (https://codereview.chromium.org/648403004/), perhaps using an abstracted kernel of task throttling logic from this patch (one that also takes into account task execution time).
On 2015/01/15 15:41:56, mmenke wrote: > On 2015/01/15 04:51:54, jdduke wrote: > > mmenke@: Could you take a look at > content/renderer/resource_dispatch_throttler.* > > (either high or low level)? This is an initial pass at the most recent design > we > > discussed, namely, only throttling new requests, and updating throttled > requests > > when associated messages are received (priority change, cancellation, > downloaded > > file release). > > Happy to take a look, but as a warning: I'm pretty heavily loaded with reviews, > so may not get to this until Friday or even Tuesday (Monday being a holiday). mmenke@: Should I find a different reviewer?
On 2015/01/22 17:18:36, jdduke wrote: > On 2015/01/15 15:41:56, mmenke wrote: > > On 2015/01/15 04:51:54, jdduke wrote: > > > mmenke@: Could you take a look at > > content/renderer/resource_dispatch_throttler.* > > > (either high or low level)? This is an initial pass at the most recent > design > > we > > > discussed, namely, only throttling new requests, and updating throttled > > requests > > > when associated messages are received (priority change, cancellation, > > downloaded > > > file release). > > > > Happy to take a look, but as a warning: I'm pretty heavily loaded with > reviews, > > so may not get to this until Friday or even Tuesday (Monday being a holiday). > > mmenke@: Should I find a different reviewer? I'm so sorry. If you check my pending review list, *15* have been updated in the past 2 days, so I'm having trouble keeping track. I can get to this later today... Or if you want to find a new reviewer, I'd certainly be happy with tha,t too - I'm pretty heavily loaded, and am just finally getting caught up.
On 2015/01/22 17:22:07, mmenke wrote: > On 2015/01/22 17:18:36, jdduke wrote: > > mmenke@: Should I find a different reviewer? > > I'm so sorry. > > If you check my pending review list, *15* have been updated in the past 2 days, > so I'm having trouble keeping track. I can get to this later today... Or if > you want to find a new reviewer, I'd certainly be happy with tha,t too - I'm > pretty heavily loaded, and am just finally getting caught up. No worries, is there another reviewer you'd recommend for the resource-related code bits?
mmenke@chromium.org changed reviewers: + davidben@chromium.org
davidben: Have time to review this? I accidentally dropped it on the floor due to review buffer overflow.
On 2015/01/22 17:37:40, mmenke wrote: > davidben: Have time to review this? I accidentally dropped it on the floor due > to review buffer overflow. alexclarke@: Would you mind reviewing this in the meantime while waiting for owner review?
lgtm Changes for content/renderer/scheduler/ look good to me, but I'm not currently an owner :) https://codereview.chromium.org/847883002/diff/140001/content/renderer/render... File content/renderer/render_thread_impl.cc (right): https://codereview.chromium.org/847883002/diff/140001/content/renderer/render... content/renderer/render_thread_impl.cc:492: resource_dispatch_throttler_.reset(new ResourceDispatchThrottler( I wonder if we should have a FIXME comment since in an ideal world all operations on the the IO thread would be fast and we wouldn't need to do this. https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler.h (right): https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler.h:60: virtual bool ShouldAnticipateHighPriorityWork() = 0; Functionality seems fine, but I'm not sure about the name. Maybe HighPriorityWorkAnticipated? https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler_impl_unittest.cc (right): https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl_unittest.cc:553: clock_->AdvanceNow(base::TimeDelta::FromMilliseconds(200)); Maybe make this an multiple of RendererSchedulerImpl ::kCompositorPriorityAfterTouchMillis. That way if we ever have to tweak the delay we won't need to edit this test.
https://codereview.chromium.org/847883002/diff/140001/content/renderer/render... File content/renderer/render_thread_impl.cc (right): https://codereview.chromium.org/847883002/diff/140001/content/renderer/render... content/renderer/render_thread_impl.cc:492: resource_dispatch_throttler_.reset(new ResourceDispatchThrottler( On 2015/01/26 17:15:59, alexclarke1 wrote: > I wonder if we should have a FIXME comment since in an ideal world all > operations on the the IO thread would be fast and we wouldn't need to do this. That's a good point, yeah, I'll put a note in ResourceDispatchThrottler that it should eventually go away entirely. https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler.h (right): https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler.h:60: virtual bool ShouldAnticipateHighPriorityWork() = 0; On 2015/01/26 17:16:00, alexclarke1 wrote: > Functionality seems fine, but I'm not sure about the name. Maybe > HighPriorityWorkAnticipated? Yeah, changed to |IsHighPriorityWorkAnticipated| to make it clear it's a query not a notification (as we can't simply make the method const). https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler_impl_unittest.cc (right): https://codereview.chromium.org/847883002/diff/140001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl_unittest.cc:553: clock_->AdvanceNow(base::TimeDelta::FromMilliseconds(200)); On 2015/01/26 17:16:00, alexclarke1 wrote: > Maybe make this an multiple of RendererSchedulerImpl > ::kCompositorPriorityAfterTouchMillis. That way if we ever have to tweak the > delay we won't need to edit this test. Ah, good call, didn't realize that was exposed. Done.
content/renderer/scheduler parts lgtm. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:101: flush_timer_.SetTaskRunner(scheduler->DefaultTaskRunner()); Should we make this use the loading task runner instead? https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:113: case ResourceHostMsg_RequestResource::ID: Could we use IPC_BEGIN_MESSAGE_MAP etc. here? https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:181: // Valid request ids must be monotonically increasing. Do we need to worry about wrapping? https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler.h (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.h:29: class CONTENT_EXPORT ResourceDispatchThrottler : public IPC::Sender { Should this live in content/renderer/scheduler? It seems very schedulery. I'm not opposed to leaving it where it is either. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler_unittest.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler_unittest.cc:54: bool IsHighPriorityWorkAnticipated() override { return anticipate_; } nit: const https://codereview.chromium.org/847883002/diff/160001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler_impl_unittest.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl_unittest.cc:52: static base::TimeDelta compositor_priority_after_touch() { nit: append "_duration" to the name?
Thanks for review! mmenke@: Can you comment on the wrapping issue below? https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:101: flush_timer_.SetTaskRunner(scheduler->DefaultTaskRunner()); On 2015/01/27 14:06:06, Sami wrote: > Should we make this use the loading task runner instead? Done. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:113: case ResourceHostMsg_RequestResource::ID: On 2015/01/27 14:06:06, Sami wrote: > Could we use IPC_BEGIN_MESSAGE_MAP etc. here? Hmm, I don't want to deserialize the whole message so I'd still want to use IPC_MESSAGE_HANDLER_GENERIC. I'm also not sure how common it is to use that macro in a |Send| call, as it has slightly different semantics (handled vs sent), but I can make the switch if you prefer. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:181: // Valid request ids must be monotonically increasing. On 2015/01/27 14:06:06, Sami wrote: > Do we need to worry about wrapping? Yeah, I raised this concern to mmenke@ last week. I too don't feel comfortable with an int here, and wonder if we should go ahead and make the switch to int64. I should note, other parts of the resource pipeline also assume this monotonicity, and it's a very real possibility (albeit very likely quite rare) that we could run into wrapping and the whole system would break. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler.h (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.h:29: class CONTENT_EXPORT ResourceDispatchThrottler : public IPC::Sender { On 2015/01/27 14:06:06, Sami wrote: > Should this live in content/renderer/scheduler? It seems very schedulery. I'm > not opposed to leaving it where it is either. Yeah, I had it there originally, and I'd be happy to move it back. It used to be a lot more generic, i.e., you could throttle any IPC::Message stream. I've since narrowed it down to resource requests, but yeah, putting it in renderer/scheduler is probably best, particularly as we hope this is but a temporary utility class. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler_unittest.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler_unittest.cc:54: bool IsHighPriorityWorkAnticipated() override { return anticipate_; } On 2015/01/27 14:06:06, Sami wrote: > nit: const Hmm, the abstract method isn't const (the impl implementation updates some state :( wonder if we should make that state mutable...). https://codereview.chromium.org/847883002/diff/160001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler_impl_unittest.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl_unittest.cc:52: static base::TimeDelta compositor_priority_after_touch() { On 2015/01/27 14:06:06, Sami wrote: > nit: append "_duration" to the name? Done.
Thanks Jared, lgtm % the bit about int64. Let's give the resource folks a chance to review too. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:113: case ResourceHostMsg_RequestResource::ID: On 2015/01/27 16:59:09, jdduke wrote: > On 2015/01/27 14:06:06, Sami wrote: > > Could we use IPC_BEGIN_MESSAGE_MAP etc. here? > > Hmm, I don't want to deserialize the whole message so I'd still want to use > IPC_MESSAGE_HANDLER_GENERIC. I'm also not sure how common it is to use that > macro in a |Send| call, as it has slightly different semantics (handled vs > sent), but I can make the switch if you prefer. Oh, right, those macros are clearly meant more for the receiving side. The current code looks fine, I was just used to seeing macros for this type of stuff. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler.cc:181: // Valid request ids must be monotonically increasing. On 2015/01/27 16:59:09, jdduke wrote: > On 2015/01/27 14:06:06, Sami wrote: > > Do we need to worry about wrapping? > > Yeah, I raised this concern to mmenke@ last week. I too don't feel comfortable > with an int here, and wonder if we should go ahead and make the switch to int64. > I should note, other parts of the resource pipeline also assume this > monotonicity, and it's a very real possibility (albeit very likely quite rare) > that we could run into wrapping and the whole system would break. Right. It seems simple enough to use an int64 here, which would make at least this part of the pipeline safe. https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... File content/renderer/resource_dispatch_throttler_unittest.cc (right): https://codereview.chromium.org/847883002/diff/160001/content/renderer/resour... content/renderer/resource_dispatch_throttler_unittest.cc:54: bool IsHighPriorityWorkAnticipated() override { return anticipate_; } On 2015/01/27 16:59:09, jdduke wrote: > On 2015/01/27 14:06:06, Sami wrote: > > nit: const > > Hmm, the abstract method isn't const (the impl implementation updates some state > :( wonder if we should make that state mutable...). Oh, right, forgot about that. Let's leave this as is then. Making the state mutable would probable be cleaner but I'd do it in another patch.
Sorry about the delay. A high-level question before I get too deep into the weeds: I'm not sure where is the best place to put this comment --- at the top of resource_dispatcher.cc? --- but I'm a little worried that this breaks the ordering guarantees of our IPC system. It may be fine now, but it's not obvious to someone looking at ResourceDispatcher that messages will get reordered on them. What if we add a new IPC layer to control a request? Then the RDH will see a message for a request that doesn't exist and get confused. Have you considered doing this at a layer /above/ the ResourceDispatcher instead? Say, wrap the WebURLLoader returned to Blink. Then we'll have the C++ type system on our side when WebURLLoader's interface changes.
On 2015/01/27 22:50:46, David Benjamin wrote: > Sorry about the delay. A high-level question before I get too deep into the > weeds: > > I'm not sure where is the best place to put this comment --- at the top of > resource_dispatcher.cc? --- but I'm a little worried that this breaks the > ordering guarantees of our IPC system. It may be fine now, but it's not obvious > to someone looking at ResourceDispatcher that messages will get reordered on > them. What if we add a new IPC layer to control a request? Then the RDH will see > a message for a request that doesn't exist and get confused. > Yeah, this approach is a bit more fragile than I'd like. > Have you considered doing this at a layer /above/ the ResourceDispatcher > instead? Say, wrap the WebURLLoader returned to Blink. Then we'll have the C++ > type system on our side when WebURLLoader's interface changes. I haven't, lacking familiarity with the complete resource pipeline, but I can certainly look into that. Definitely open to ideas/feedback as to how the throttling can best be wired into the pipeline. I'll take a look at the WebURLLoader implementation and see how this might fight in.
On 2015/01/28 00:11:25, jdduke wrote: > On 2015/01/27 22:50:46, David Benjamin wrote: > > Sorry about the delay. A high-level question before I get too deep into the > > weeds: > > > > I'm not sure where is the best place to put this comment --- at the top of > > resource_dispatcher.cc? --- but I'm a little worried that this breaks the > > ordering guarantees of our IPC system. It may be fine now, but it's not > obvious > > to someone looking at ResourceDispatcher that messages will get reordered on > > them. What if we add a new IPC layer to control a request? Then the RDH will > see > > a message for a request that doesn't exist and get confused. > > > > Yeah, this approach is a bit more fragile than I'd like. > > > Have you considered doing this at a layer /above/ the ResourceDispatcher > > instead? Say, wrap the WebURLLoader returned to Blink. Then we'll have the C++ > > type system on our side when WebURLLoader's interface changes. > > I haven't, lacking familiarity with the complete resource pipeline, but I can > certainly look into that. Definitely open to ideas/feedback as to how the > throttling can best be wired into the pipeline. I'll take a look at the > WebURLLoader implementation and see how this might fight in. One other thought here is to go back to more of the original model I had, where we 1) forward sync requests immediately, and 2) when throttling, dump all other messages in a queue, preserving their order (rather than trying to be smart/efficient about operating on the request stream with different message types). This should be fairly agnostic to future changes in the pipeline, and would no longer depend on message state or context.
I went ahead and implemented the minimally managed/invasive approach, wherein we simply queue all received messages when throttling (excepting sync messages). It's much simpler, and I hope sufficiently general. It may not be optimal in the sense that follow-up messages (cancellations/priority changes) are never coalesced with the request. However, that should be the rare(r) case, as throttling should happen in short bursts at very specific times. davidben@, thoughts? https://codereview.chromium.org/847883002/diff/200001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/200001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:56: if (msg->is_sync()) Should this flush any pending message? Sync IPC dispatch is still a bit of a black box to me (i.e., does it pump all message loops in such a way that preceding messages will still maintain proper relative ordering as seen by the browser process?).
mmenke@chromium.org changed reviewers: - mmenke@chromium.org
Thanks! Yeah, I think deferring the ResourceDispatcher as a unit is much safer. We're still losing a bit of an ordering guarantee so I'd somewhat prefer making a wrapper WebURLLoader or ResourceLoaderBridge, but I think that one's not too likely to break. Just a few comments here: https://codereview.chromium.org/847883002/diff/200001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/200001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:56: if (msg->is_sync()) On 2015/01/28 23:13:29, jdduke wrote: > Should this flush any pending message? Sync IPC dispatch is still a bit of a > black box to me (i.e., does it pump all message loops in such a way that > preceding messages will still maintain proper relative ordering as seen by the > browser process?). I think this normally works: CreateFooBar(foobar_id) SynchronousDoSomething(foobar_id) The reordering is that messages coming in may be reordered relative to the sync IPC reply. Also sometimes it spins a nested eventloop (ahh!) which is even more crazy. That said, the only sync IPC (sync XHR) in the ResourceDispatcher is independent of all other IPCs. Doing a flush is probably safe though? If you're doing a sync XHR, all hope of responsiveness has probably just gone down the drain. :-) https://codereview.chromium.org/847883002/diff/220001/content/renderer/render... File content/renderer/render_thread_impl.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/render... content/renderer/render_thread_impl.cc:488: Could you add a comment here, something like: // Note: this reorders messages from the ResourceDispatcher with respect to other subsystems. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler_impl.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl.cc:149: main_thread_checker_.CalledOnValidThread(); DCHECK(....); Ditto for all the other instances of this call apparently. :-) I don't think base::ThreadChecker::CalledOnValidThread aborts on its own. It just returns a boolean. https://code.google.com/p/chromium/codesearch#chromium/src/base/threading/thr... https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl.cc:167: // Note: This function could conceivable be implemented in terms of Nit: conceivable -> conceivably https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:47: for (auto& message: throttled_messages) Nit: Taking a random sample from codesearch, put a space before the : https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:55: thread_checker_.CalledOnValidThread(); DCHECK(....CalledOnValidThread()); https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler_unittest.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... 
content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:68: this, Having the FakeRendererScheduler and the ResourceDispatchThrottler both be multiple-inherited in with the Test is kind of confusing. (Style guide also prohibits non-interface multiple inheritance.) Maybe leave the FakeRendererScheduler separate in a scoped_ptr? That'll probably require pulling the ResourceDispatchThrottler out too since you need to pass it into the constructor. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:110: for (RequestResource(); GetAndResetSentMessageCount(); RequestResource()) { This looks very different from a normal for loop. Why not just: RequestResource(); while (GetAndResetSentMessageCount() > 0) { RequestResource(); } https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:259: EXPECT_EQ(kRequestsPerFlush, GetAndResetSentMessageCount()) << i; You could probably stick a SCOPED_TRACE(i) and that'll tag both EXPECTs with i. Ditto with some of the other cases. (You can even nest SCOPED_TRACEs. They're fancy.)
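The ThreadChecker fix David points out, in sketch form (the real call sites are in renderer_scheduler_impl.cc): CalledOnValidThread() only returns a bool, so it must be wrapped in a DCHECK to enforce anything.

```cpp
#include "base/logging.h"
#include "base/threading/thread_checker.h"

class MainThreadOnlySketch {
 public:
  void DoMainThreadWork() {
    // Was previously a bare call, which silently discarded the result.
    DCHECK(main_thread_checker_.CalledOnValidThread());
    // ... work that must stay on the owning thread ...
  }

 private:
  base::ThreadChecker main_thread_checker_;
};
```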
Thanks for review. https://codereview.chromium.org/847883002/diff/220001/content/renderer/render... File content/renderer/render_thread_impl.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/render... content/renderer/render_thread_impl.cc:488: On 2015/02/03 19:46:47, David Benjamin wrote: > Could you add a comment here, something like: > > // Note: this reorders messages from the ResourceDispatcher with respect to > other subsystems. Done. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... File content/renderer/scheduler/renderer_scheduler_impl.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl.cc:149: main_thread_checker_.CalledOnValidThread(); On 2015/02/03 19:46:47, David Benjamin wrote: > DCHECK(....); > > Ditto for all the other instances of this call apparently. :-) I don't think > base::ThreadChecker::CalledOnValidThread aborts on its own. It just returns a > boolean. > > https://code.google.com/p/chromium/codesearch#chromium/src/base/threading/thr... Hah, good catch. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/renderer_scheduler_impl.cc:167: // Note: This function could conceivable be implemented in terms of On 2015/02/03 19:46:47, David Benjamin wrote: > Nit: conceivable -> conceivably Done. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:47: for (auto& message: throttled_messages) On 2015/02/03 19:46:47, David Benjamin wrote: > Nit: Taking a random sample from codesearch, put a space before the : Done. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:55: thread_checker_.CalledOnValidThread(); On 2015/02/03 19:46:47, David Benjamin wrote: > DCHECK(....CalledOnValidThread()); Done. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler_unittest.cc (right): https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:68: this, On 2015/02/03 19:46:48, David Benjamin wrote: > Having the FakeRendererScheduler and the ResourceDispatchThrottler both be > multiple-inherited in with the Test is kind of confusing. (Style guide also > prohibits non-interface multiple inheritance.) Maybe leave the > FakeRendererScheduler separate in a scoped_ptr? That'll probably require pulling > the ResourceDispatchThrottler out too since you need to pass it into the > constructor. Yeah, I get lazy in testing code sometimes, but that's better. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:110: for (RequestResource(); GetAndResetSentMessageCount(); RequestResource()) { On 2015/02/03 19:46:48, David Benjamin wrote: > This looks very different from a normal for loop. Why not just: > RequestResource(); > while (GetAndResetSentMessageCount() > 0) { > RequestResource(); > } Done. https://codereview.chromium.org/847883002/diff/220001/content/renderer/schedu... 
content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:259: EXPECT_EQ(kRequestsPerFlush, GetAndResetSentMessageCount()) << i; On 2015/02/03 19:46:47, David Benjamin wrote: > You could probably stick a SCOPED_TRACE(i) and that'll tag both EXPECTs with i. > Ditto with some of the other cases. (You can even nest SCOPED_TRACEs. They're > fancy.) Done.
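For reference, the SCOPED_TRACE usage suggested above, in a minimal sketch (test name and expectation are illustrative): the trace tags every failing EXPECT_* inside its scope with the loop iteration, so failures in loops are easy to attribute.

```cpp
#include "testing/gtest/include/gtest/gtest.h"

TEST(ThrottlerSketchTest, TagsFailuresWithIteration) {
  for (int i = 0; i < 3; ++i) {
    SCOPED_TRACE(i);  // Any failure below is reported with this iteration.
    EXPECT_GE(i, 0);
  }
}
```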
Thanks! One last comment that I think slipped through from an older patch set. https://codereview.chromium.org/847883002/diff/240001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/240001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:56: if (msg->is_sync()) Transplanting this comment; Rietveld doesn't do a good job of making comments on older patch sets visible. :-( On 2015/02/03 19:46:47, David Benjamin wrote: > On 2015/01/28 23:13:29, jdduke wrote: > > Should this flush any pending message? Sync IPC dispatch is still a bit of a > > black box to me (i.e., does it pump all message loops in such a way that > > preceding messages will still maintain proper relative ordering as seen by the > > browser process?). > > I think this normally works: > > CreateFooBar(foobar_id) > SynchronousDoSomething(foobar_id) > > The reordering is that messages coming in may be reordered relative to the sync > IPC reply. Also sometimes it spins a nested eventloop (ahh!) which is even more > crazy. > > That said, the only sync IPC (sync XHR) in the ResourceDispatcher is independent > of all other IPCs. Doing a flush is probably safe though? If you're doing a sync > XHR, all hope of responsiveness has probably just gone down the drain. :-) I thiiink what I wrote there is true? There doesn't seem to be any kind of buffering if you call Send() with a non-sync message. There'll definitely be reordering between sync and async in replies from the browser, but I think outgoing messages stay ordered. https://code.google.com/p/chromium/codesearch#chromium/src/ipc/ipc_sync_chann... (Aside and for amusement: sync IPCs are pain. I went back to this comment to see if I found out anything interesting about outgoing sync IPCs last I looked, but it seems the answer is no. https://crbug.com/358516#c9 )
https://codereview.chromium.org/847883002/diff/240001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler.cc (right): https://codereview.chromium.org/847883002/diff/240001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler.cc:56: if (msg->is_sync()) On 2015/02/04 17:10:41, David Benjamin wrote: > Transplanting this comment; Rietveld doesn't do a good job of making comments on > older patch sets visible. :-( > > On 2015/02/03 19:46:47, David Benjamin wrote: > > On 2015/01/28 23:13:29, jdduke wrote: > > > Should this flush any pending message? Sync IPC dispatch is still a bit of a > > > black box to me (i.e., does it pump all message loops in such a way that > > > preceding messages will still maintain proper relative ordering as seen by > the > > > browser process?). > > > > I think this normally works: > > > > CreateFooBar(foobar_id) > > SynchronousDoSomething(foobar_id) > > > > The reordering is that messages coming in may be reordered relative to the > sync > > IPC reply. Also sometimes it spins a nested eventloop (ahh!) which is even > more > > crazy. > > > > That said, the only sync IPC (sync XHR) in the ResourceDispatcher is > independent > > of all other IPCs. Doing a flush is probably safe though? If you're doing a > sync > > XHR, all hope of responsiveness has probably just gone down the drain. :-) > > I thiiink what I wrote there is true? There doesn't seem to be any kind of > buffering if you call Send() with a non-sync message. There'll definitely be > reordering between sync and async in replies from the browser, but I think > outgoing messages stay ordered. > > https://code.google.com/p/chromium/codesearch#chromium/src/ipc/ipc_sync_chann... > > (Aside and for amusement: sync IPCs are pain. I went back to this comment to see > if I found out anything interesting about outgoing sync IPCs last I looked, but > it seems the answer is no. https://crbug.com/358516#c9 ) Oops, yeah I totally missed that comment. I changed it to do a full flush per your suggestion.
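In sketch form, the rule adopted here (patch set 12; names below are assumptions): a synchronous IPC is never deferred, but previously throttled messages are flushed first so the browser still observes them in the order they were issued.

```cpp
#include <deque>
#include <functional>
#include <memory>
#include <utility>

struct MessageStub {};  // Stand-in for IPC::Message.
using ForwardFn = std::function<void(std::unique_ptr<MessageStub>)>;

void SendSyncMessage(std::deque<std::unique_ptr<MessageStub>>* queued,
                     std::unique_ptr<MessageStub> sync_msg,
                     const ForwardFn& forward) {
  while (!queued->empty()) {       // Full flush before the sync send.
    forward(std::move(queued->front()));
    queued->pop_front();
  }
  forward(std::move(sync_msg));    // Then forward the sync IPC untouched.
}
```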
lgtm! https://codereview.chromium.org/847883002/diff/260001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler_unittest.cc (right): https://codereview.chromium.org/847883002/diff/260001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:262: // Unthrottled message types should flush any previously throttled messages. Nit: Unthrottled message types -> Synchronous messages? "Unthrottled message types" seems it could refer to either sync IPCs (the only messages that are never throttled) or RequestResource (the only messages we're actually interested in throttling).
New patchsets have been uploaded after l-g-t-m from davidben@chromium.org
Thanks for review! https://codereview.chromium.org/847883002/diff/260001/content/renderer/schedu... File content/renderer/scheduler/resource_dispatch_throttler_unittest.cc (right): https://codereview.chromium.org/847883002/diff/260001/content/renderer/schedu... content/renderer/scheduler/resource_dispatch_throttler_unittest.cc:262: // Unthrottled message types should flush any previously throttled messages. On 2015/02/04 22:57:52, David Benjamin wrote: > Nit: Unthrottled message types -> Synchronous messages? > > "Unthrottled message types" seems it could refer to either sync IPCs (the only > messages that are never throttled) or RequestResource (the only messages we're > actually interested in throttling). Good catch, definitely should have been Synchronous.
The CQ bit was checked by jdduke@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/847883002/280001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: mac_chromium_rel_ng on tryserver.chromium.mac (http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...)
The CQ bit was checked by jdduke@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/847883002/300001
Message was sent while issue was closed.
Committed patchset #14 (id:300001)
Message was sent while issue was closed.
Patchset 14 (id:??) landed as https://crrev.com/acfb4199abf841a1577c3968579c43b0232a53b7 Cr-Commit-Position: refs/heads/master@{#314739}
Message was sent while issue was closed.
A revert of this CL (patchset #14 id:300001) has been created in https://codereview.chromium.org/897223002/ by dcheng@chromium.org. The reason for reverting is: The new DCHECKs for CalledOnValidThread are breaking tests everywhere: https://build.chromium.org/p/chromium.webkit/builders/Linux%20ChromiumOS%20Te....
Message was sent while issue was closed.
On 2015/02/05 04:39:16, dcheng wrote: > A revert of this CL (patchset #14 id:300001) has been created in > https://codereview.chromium.org/897223002/ by mailto:dcheng@chromium.org. > > The reason for reverting is: The new DCHECKs for CalledOnValidThread are > breaking tests everywhere: > https://build.chromium.org/p/chromium.webkit/builders/Linux%20ChromiumOS%20Te.... Hmm, yeah I probably should have made that fix in a separate CL...
The CQ bit was checked by jdduke@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/847883002/320001
Message was sent while issue was closed.
Committed patchset #15 (id:320001)
Message was sent while issue was closed.
Patchset 15 (id:??) landed as https://crrev.com/94ae1f37922badae0012395018c23342f33d13b7 Cr-Commit-Position: refs/heads/master@{#314767} |