Description
Ensure ResourceScheduler starts requests even if the tab is closed.
Async revalidation requests are low priority, but they must be started
eventually even if the tab is closed.
Add unit tests to ensure that ResourceScheduler starts all requests
eventually, even if the client is deleted.
Also, modify ResourceScheduler to start any pending requests when the
client is deleted; this satisfies the requirement in the simplest way (a
sketch of this shape follows the description).
BUG=501805
TEST=content_unittests
Committed: https://crrev.com/1ce1e09b37ca7c2bfd25cd70106d2c4eac48b52a
Cr-Commit-Position: refs/heads/master@{#338058}
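The change itself is small; a minimal sketch of its shape follows, where
StartAllPendingRequests and the surrounding member names are assumptions
for illustration rather than the actual Chromium code:

    // Hedged sketch: when a client (tab) goes away, start whatever was
    // still pending for it instead of dropping it, so low-priority work
    // such as async revalidation runs eventually.
    void ResourceScheduler::OnClientDeleted(int child_id, int route_id) {
      ClientMap::iterator it =
          client_map_.find(MakeClientId(child_id, route_id));
      if (it == client_map_.end())
        return;
      Client* client = it->second;
      client->StartAllPendingRequests();  // Hypothetical helper.
      delete client;
      client_map_.erase(it);
    }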
Patch Set 1
Total comments: 4
Patch Set 2: Rebase.
Messages
Total messages: 21 (7 generated)
ricea@chromium.org changed reviewers: + mmenke@chromium.org
https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
File content/browser/loader/resource_scheduler.cc (right):

https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
content/browser/loader/resource_scheduler.cc:936: // OnClientDeleted() returns.
Wait... So we start a bunch of requests, and then we delete a bunch of them?
That seems like a bad idea. If they make it to the socket pools, that results
in either deleting a bunch of idle sockets or starting a bunch of connection
attempts before we cancel them.
https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
File content/browser/loader/resource_scheduler.cc (right):

https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
content/browser/loader/resource_scheduler.cc:936: // OnClientDeleted() returns.
On 2015/06/18 14:49:26, mmenke wrote:
> Wait... So we start a bunch of requests, and then we delete a bunch of them?
> That seems like a bad idea. If they make it to the socket pools, that
> results in either deleting a bunch of idle sockets or starting a bunch of
> connection attempts before we cancel them.

Even without this CL, up to 10 requests will be started and then immediately
torn down, so it's not a huge regression in most cases.

I have a couple of other possible approaches in mind. My preferred one is to
make |unowned_requests_| a type of Client. That would apply the throttling
logic consistently, so closing a tab wouldn't cause any new requests to
start, but they would be started later as higher-priority requests finish.
This may cause some visible behaviour changes: currently these requests are
always started immediately, whereas with scheduling applied they might have
to wait for other requests to finish.

The main reason I didn't try this approach is that the common code should
really be factored out of Client first; for example, it is meaningless for
|unowned_requests_| to have a ClientId. The code changes would end up being
quite extensive.

A simpler approach would be to post a task to the message loop and start any
requests that had not been deleted by the time the task ran.
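For reference, the "post a task" alternative might look roughly like this; a
minimal sketch only, where StartUnownedRequests and the weak-pointer plumbing
are assumptions for illustration, not actual ResourceScheduler code:

    // Hedged sketch: defer starting the orphaned requests by one
    // message-loop turn. StartUnownedRequests (hypothetical) would skip
    // any request deleted before the task runs, and the WeakPtr covers
    // scheduler teardown.
    base::MessageLoop::current()->PostTask(
        FROM_HERE,
        base::Bind(&ResourceScheduler::StartUnownedRequests,
                   weak_ptr_factory_.GetWeakPtr()));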
https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
File content/browser/loader/resource_scheduler.cc (right):

https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
content/browser/loader/resource_scheduler.cc:936: // OnClientDeleted() returns.
On 2015/06/19 13:06:53, Adam Rice wrote:
> Even without this CL, up to 10 requests will be started and then
> immediately torn down, so it's not a huge regression in most cases.

You mean in the destructor, when we call UpdateThrottleState(), since
should_throttle() is currently always false? Or do you mean something else?

> A simpler approach would be to post a task to the message loop and start
> any requests that had not been deleted by the time the task ran.

We could certainly do that, though it doesn't seem lovely, either. Let me
spend a little time thinking about whether there's an easy way around this;
I'll get back to you later today.

One other option would be to have the RDH cancel pending requests before it
cancels started ones, and only then call OnClientDeleted(), but that also
seems a bit hinky. Caveats: I'm not sure the RDH currently has access to that
information, and there was an idea at one point to have the scheduler pause
requests before first network access, which this approach would break unless
we exposed yet more weird info from the ResourceLoader.
On 2015/06/19 15:10:16, mmenke wrote:
> Let me spend a little time thinking about whether there's an easy way
> around this; I'll get back to you later today.

Sorry, forgot about this. I'll get back to you today.
This LGTM. I want to see if we can switch the order of CancelRequestsForRoute and scheduler_->OnClientDeleted in the RDH, but that doesn't belong in this CL.
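Concretely, the reordering being discussed might look like this; the exact
shape of the RDH code here is an assumption, not a quote of the real
ResourceDispatcherHostImpl:

    // Hedged sketch: cancel the route's own requests first, so that
    // OnClientDeleted() only starts the survivors (e.g. detached async
    // revalidations) instead of starting requests that are about to be
    // cancelled anyway.
    void ResourceDispatcherHostImpl::OnRenderViewHostDeleted(int child_id,
                                                             int route_id) {
      CancelRequestsForRoute(child_id, route_id);
      scheduler_->OnClientDeleted(child_id, route_id);
    }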
The CQ bit was checked by ricea@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1192673002/1
https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
File content/browser/loader/resource_scheduler.cc (right):

https://codereview.chromium.org/1192673002/diff/1/content/browser/loader/reso...
content/browser/loader/resource_scheduler.cc:936: // OnClientDeleted() returns.
On 2015/06/19 15:10:16, mmenke wrote:
> You mean in the destructor, when we call UpdateThrottleState(), since
> should_throttle() is currently always false? Or do you mean something else?

Sorry, I was confused because throttling is enabled by the test fixture in
the unit tests. Without throttling, the state goes from UNTHROTTLED ->
UNTHROTTLED in the destructor and no jobs are started. With throttling
enabled, the state goes from THROTTLED -> PAUSED in the destructor and
pending jobs are started (because PAUSED is not implemented). So this change
does in fact cause a regression.

> One other option would be to have the RDH cancel pending requests before it
> cancels started ones, and only then call OnClientDeleted(), but that also
> seems a bit hinky.

Another option: switch the order of the two lines in
ResourceDispatcherHostImpl::OnRenderViewHostDeleted().
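For context, the kind of unit test this CL adds might look roughly like this;
the fixture and helper names (ResourceSchedulerTest, TestRequest, NewRequest)
are assumptions based on the discussion, not the actual
resource_scheduler_unittest.cc:

    // Hedged sketch: with throttling enabled by the fixture, a
    // low-priority request must still be started once its client is
    // deleted.
    TEST_F(ResourceSchedulerTest, RequestStartsAfterClientDeleted) {
      scheduler()->OnClientCreated(kChildId, kRouteId, false, false);
      scoped_ptr<TestRequest> request(
          NewRequest("http://example.com/async-revalidate", net::IDLE));
      EXPECT_FALSE(request->started());  // Throttled while the client lives.
      scheduler()->OnClientDeleted(kChildId, kRouteId);
      EXPECT_TRUE(request->started());   // Started eventually, tab or no tab.
    }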
The CQ bit was unchecked by commit-bot@chromium.org
Exceeded global retry quota
The CQ bit was checked by ricea@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1192673002/1
The CQ bit was unchecked by commit-bot@chromium.org
Exceeded global retry quota
The CQ bit was checked by ricea@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from mmenke@chromium.org.
Link to the patchset: https://codereview.chromium.org/1192673002/#ps20001 (title: "Rebase.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1192673002/20001
Message was sent while issue was closed.
Committed patchset #2 (id:20001)
Message was sent while issue was closed.
Patchset 2 (id:??) landed as https://crrev.com/1ce1e09b37ca7c2bfd25cd70106d2c4eac48b52a
Cr-Commit-Position: refs/heads/master@{#338058}