Created: 6 years, 2 months ago by Sami
Modified: 6 years, 1 month ago
Reviewers: jochen (gone - plz use gerrit), jar (doing other things), cpu_(ooo_6.6-7.5), picksi1, petrcermak, danakj, rmcilroy, alexclarke, alex clarke (OOO till 29th)
CC: chromium-reviews, danakj, darin (slow to review), darin-cc_chromium.org, jam, mkwst+moarreviews-renderer_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Project: chromium
Visibility: Public
Description: content: Add task queue manager
This patch introduces a task queue manager class which provides two
main pieces of functionality:
1. An arbitrary number of incoming task queues.
2. A selector interface which allows an outside agent to choose the next
task queue to be serviced.
This allows the selector implementation to dynamically prioritize
between pending tasks in the different queues. Actual task running is
accomplished by posting a task to a separate task runner (usually the
main MessageLoop) so that the task queues can coexist with other work
being done on the same thread.
This class will be used by the Blink scheduler.
Design doc: https://docs.google.com/a/chromium.org/document/d/16f_RIhZa47uEK_OdtTgzWdRU0RFMTQWMpEWyWXIpXUo/edit#heading=h.srz53flt1rrp
BUG=391005
Committed: https://crrev.com/1161fca345efbace93bfda2a874f7dde3f3e5381
Cr-Commit-Position: refs/heads/master@{#300870}
Patch Set 1 #
Patch Set 2 : Add scheduling. #
Patch Set 3 : First stab at blink integration. #
Total comments: 12
Patch Set 4 : Refactor based on discussions. #
Patch Set 5 : API updates. #
Patch Set 6 : Move TaskQueueManager to base. #
Patch Set 7 : Strip out everything else than TaskQueueManager. Added tests. #
Total comments: 25
Patch Set 8 : Review comments. #
Total comments: 2
Patch Set 9 : More review comments. #
Patch Set 10 : Disable task runners after shutdown. #
Patch Set 11 : Back to content. #
Total comments: 12
Patch Set 12 : Rebased. #
Patch Set 13 : Jochen's comments. #
Total comments: 23
Patch Set 14 : Rebased. No more virtual + override. #
Patch Set 15 : Carlos's comments. #
Total comments: 2
Messages
Total messages: 60 (11 generated)
skyostil@chromium.org changed reviewers: + alexclarke@chromium.org, petrcermak@chromium.org, picksi@chromium.org, rmcilroy@chromium.org
Here's what a Chromium-side layered task queue manager could look like. The Blink integration still needs more work -- shared timers aren't hooked up yet -- but you can see the general shape.
It's also still worth thinking about whether it would be simpler to do all of this on one side of the boundary (entirely in Blink or entirely in content).
alexclarke@google.com changed reviewers: + alexclarke@google.com
https://codereview.chromium.org/637303003/diff/120001/content/renderer/render...
File content/renderer/renderer_blink_platform_impl.h (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/render...
content/renderer/renderer_blink_platform_impl.h:62: virtual void callOnMainThread(void (*func)(void*), void* context);
Personally I'm OK with a naked function pointer, but I'd expect somebody to ask why you're not using a callback.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.cc:118: // FIXME: Can this ever happen?
Right now I suspect no, but if you were to post DoWork as a delayed task (to facilitate idle work) and then as a regular task (because the compositor finished early) it might happen.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_manager.h (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.h:23: enum QueueId {
Needs a comment explaining that it should be kept in sync with the enum in the scheduler.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_scheduler.h (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_scheduler.h:18: virtual void SetWorkQueues(
We need a way for the scheduler to query the sequence number for the head task of each queue and to determine if it's empty. This feels like it's exposing too much information - although perhaps it's up to the code that implements WebTaskQueue / WebTaskQueueProvider to do the right thing?

For reference, here's my current implementation of those classes:

class BLINK_PLATFORM_EXPORT WebTaskQueue {
 public:
  virtual size_t numTasksQueued() const = 0;
  virtual int headTaskSequenceNum() const = 0;

 protected:
  virtual ~WebTaskQueue() {}
};

class BLINK_PLATFORM_EXPORT WebTaskQueueProvider {
 public:
  virtual size_t taskQueueCount() const = 0;
  virtual const WebTaskQueue* WebTaskQueueAt(size_t task_queue_index) const = 0;

 protected:
  virtual ~WebTaskQueueProvider() {}
};

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_scheduler.h:23: virtual void DidQueueTask(TaskQueueManager::QueueId queue) = 0;
Could we get a function to turn these notifications on/off per queue? As far as I can tell, only information about the compositor queue needs to be pushed to the scheduler, and it's better for the scheduler to pull information about the other queues as needed.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_scheduler.h:30: virtual bool ShouldReloadWorkQueue(TaskQueueManager::QueueId queue) = 0;
I'm surprised the scheduler needs to be aware of this. Under what circumstances would it not reload the queue?
https://codereview.chromium.org/637303003/diff/120001/content/renderer/render...
File content/renderer/renderer_blink_platform_impl.h (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/render...
content/renderer/renderer_blink_platform_impl.h:62: virtual void callOnMainThread(void (*func)(void*), void* context);
On 2014/10/13 10:06:22, alexclarke wrote:
> Personally I'm OK with a naked function pointer, but I'd expect somebody to ask
> why you're not using a callback.

This is an existing function that's called from Blink, so we can't easily use anything fancier.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.cc:118: // FIXME: Can this ever happen?
On 2014/10/13 10:06:22, alexclarke wrote:
> Right now I suspect no, but if you were to post DoWork as a delayed task (to
> facilitate idle work) and then as a regular task (because the compositor
> finished early) it might happen.

Right, let's figure out how idle tasks fit into all of this...

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_manager.h (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.h:23: enum QueueId {
On 2014/10/13 10:06:23, alexclarke wrote:
> Needs a comment explaining it should be kept in sync with the enum in the
> scheduler

Right, I think I'll move this to RendererBlinkPlatformImpl and keep this class agnostic about the meaning of each queue.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_scheduler.h (right):

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_scheduler.h:18: virtual void SetWorkQueues(
On 2014/10/13 10:06:23, alexclarke wrote:
> We need a way for the scheduler to query the sequence number for the head task
> of each queue and to determine if it's empty. This feels like it's exposing too
> much information - although perhaps it's up to the code that implements
> WebTaskQueue / WebTaskQueueProvider to do the right thing?
>
> For reference, here's my current implementation of those classes:
>
> class BLINK_PLATFORM_EXPORT WebTaskQueue {
>  public:
>   virtual size_t numTasksQueued() const = 0;
>   virtual int headTaskSequenceNum() const = 0;
>
>  protected:
>   virtual ~WebTaskQueue() {}
> };
>
> class BLINK_PLATFORM_EXPORT WebTaskQueueProvider {
>  public:
>   virtual size_t taskQueueCount() const = 0;
>   virtual const WebTaskQueue* WebTaskQueueAt(size_t task_queue_index) const = 0;
>
>  protected:
>   virtual ~WebTaskQueueProvider() {}
> };

Looks like we can get away with just exposing the task queues as they are in the new layout.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_scheduler.h:23: virtual void DidQueueTask(TaskQueueManager::QueueId queue) = 0;
On 2014/10/13 10:06:23, alexclarke wrote:
> Could we get a function to turn these notifications on/off per queue? As far as
> I can tell, only information about the compositor queue needs to be pushed to the
> scheduler, and it's better for the scheduler to pull information about the other
> queues as needed.

Notifications are gone now.

https://codereview.chromium.org/637303003/diff/120001/content/renderer/schedu...
content/renderer/scheduler/task_queue_scheduler.h:30: virtual bool ShouldReloadWorkQueue(TaskQueueManager::QueueId queue) = 0;
On 2014/10/13 10:06:23, alexclarke wrote:
> I'm surprised the scheduler needs to be aware of this. Under what circumstances
> would it not reload the queue?

Also gone.
skyostil@chromium.org changed reviewers: + darin@chromium.org
Hey Darin, since it sounded like you were interested in all of this, would you be willing to review this part? Let me know if you're busy and I'll find another reviewer. Design doc: https://docs.google.com/a/chromium.org/document/d/16f_RIhZa47uEK_OdtTgzWdRU0R...
https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:43: }
Should auto_pump be called pump_when_empty?

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:66: scheduler_->RegisterWorkQueues(work_queues);
Are we exposing our inner workings (i.e. the work_queue) to the scheduler by doing this? Would it be better (ironically!) to pass an InternalTaskQueue to the scheduler, and when a task is posted via the Post function (presumably in this file?) it would (internally) use the work_queue?

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:82: InternalTaskQueue* queue = queues_[queue_index];
Getting a queue from an index is repeated a lot; should we turn it into a function that can (a) be future-proofed in case we change the implementation, and (b) allow us to gather the DCHECK_LTs into the same place?

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:106: queue->incoming_queue.push(pending_task);
Is it possible for the incoming queue to be empty but for there still to be an instance of DoWork queued on the main thread? Do we need to have the equivalent of a 'task runner posted' bool that is set true in ScheduleWork and false in DoWork?

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:118: }
Would you consider having two different functions here: TurnAutoPumpOn and TurnAutoPumpOff? It's that last 'if' that makes me squirm a little!

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:134: }
Is ScheduleWork the right name? Should it be more like PostTaskQueueManagerExecution... or something?
https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.h (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:51: void SetAutoPump(size_t queue_index, bool auto_pump);
I suspect this method will only ever get called once; should this flag be set via the constructor?

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:60: bool PollQueue(size_t queue_index);
How about IsQueueEmpty()?

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:122: const size_t index;
The only reason for this appears to be because of the C++11 style ranged for loop. As much as I like C++11 style loops, I think it would be better to get rid of this :)
https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:84: return !queue->incoming_queue.empty();
Shouldn't this be "return queue->auto_pump && !queue->incoming_queue.empty();"? How else will the TaskQueueScheduler know that it cannot schedule from the corresponding queue?

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.h (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:56: void PumpQueue(size_t queue_index);
Shouldn't this method reload tasks whenever the incoming queue is non-empty (independent of the emptiness of the work queue)? Otherwise, if some idle tasks didn't finish in the previous idle period, newly scheduled idle tasks will be ignored in the current one.
https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:84: return !queue->incoming_queue.empty();
On 2014/10/15 10:05:40, petrcermak wrote:
> Shouldn't this be "return queue->auto_pump && !queue->incoming_queue.empty();"?
> How else will the TaskQueueScheduler know that it cannot schedule from the
> corresponding queue?

The TaskQueueScheduler is responsible for setting the state of auto_pump. The problem with your suggestion is that the TaskQueueScheduler would no longer be able to know when the idle queue is empty.
https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:84: return !queue->incoming_queue.empty();
On 2014/10/15 10:09:14, alexclarke wrote:
> On 2014/10/15 10:05:40, petrcermak wrote:
> > Shouldn't this be "return queue->auto_pump && !queue->incoming_queue.empty();"?
> > How else will the TaskQueueScheduler know that it cannot schedule from the
> > corresponding queue?
>
> The TaskQueueScheduler is responsible for setting the state of auto_pump.
> The problem with your suggestion is that the TaskQueueScheduler would no longer
> be able to know when the idle queue is empty.

Acknowledged.
Thanks for all the comments!

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:43: }
On 2014/10/15 09:07:38, picksi1 wrote:
> Should auto_pump be called pump_when_empty?

No, because it also involves how DoWork is scheduled. It's not just about work queue management, which is why we had to think long and hard about its name :)

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:66: scheduler_->RegisterWorkQueues(work_queues);
On 2014/10/15 09:07:38, picksi1 wrote:
> Are we exposing our inner workings (i.e. the work_queue) to the scheduler by
> doing this? Would it be better (ironically!) to pass an InternalTaskQueue to the
> scheduler, and when a task is posted via the Post function (presumably in this
> file?) it would (internally) use the work_queue?

I'm not sure I follow the last half of your suggestion, but as to the first part, the selector needs some way to know what work it can select from at the moment it's called. I'm not sure it needs to know about the work queues specifically, just the same information. I guess we could pretend that they are "pending" task queues or something, but I'm not sure that abstraction buys us much. We already need to be a little explicit about the inner workings so that pumping makes sense.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:82: InternalTaskQueue* queue = queues_[queue_index];
On 2014/10/15 09:07:38, picksi1 wrote:
> Getting a queue from an index is repeated a lot; should we turn it into a
> function that can (a) be future-proofed in case we change the implementation,
> and (b) allow us to gather the DCHECK_LTs into the same place?

Good idea, done.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:106: queue->incoming_queue.push(pending_task);
On 2014/10/15 09:07:38, picksi1 wrote:
> Is it possible for the incoming queue to be empty but for there still to be an
> instance of DoWork queued on the main thread? Do we need to have the equivalent
> of a 'task runner posted' bool that is set true in ScheduleWork and false in
> DoWork?

Yes, that's possible. This is written to be slightly eager about posting more DoWorks than are strictly necessary, to decrease our chances of getting pushed to the back of the MessageLoop queue by unrelated things. Because DoWork only ever runs one task at most, I think this should be safe w.r.t. starvation. A bool like that would work but would need atomic reads and writes, so I'd prefer to avoid it if we can, especially since we're already taking a lock here.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:118: }
On 2014/10/15 09:07:38, picksi1 wrote:
> Would you consider having two different functions here: TurnAutoPumpOn and
> TurnAutoPumpOff? It's that last 'if' that makes me squirm a little!

Hmm, having two different functions makes this a little more messy to use from the client's side. However, let me see how that would look on the implementation side; I think it would let us avoid some of the locks.

Edit: Okay, I tried this and I still prefer to have all the logic in one place. Having everything spread out in one-line functions makes it really difficult to see what is going on. I've still reworked this a bit to make it clearer, though.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:134: }
On 2014/10/15 09:07:38, picksi1 wrote:
> Is ScheduleWork the right name? Should it be more like
> PostTaskQueueManagerExecution... or something?

ScheduleWork/DoWork seems like a reasonable pairing to me, no? Posting is just an implementation detail.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.h (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:51: void SetAutoPump(size_t queue_index, bool auto_pump);
On 2014/10/15 09:48:59, alexclarke wrote:
> I suspect this method will only ever get called once; should this flag be
> set via the constructor?

I'm thinking it might be useful to keep it toggleable, say, for a background policy where idle work can happen whenever. It's not a huge overhead to keep it that way.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:56: void PumpQueue(size_t queue_index);
On 2014/10/15 10:05:40, petrcermak wrote:
> Shouldn't this method reload tasks whenever the incoming queue is non-empty
> (independent of the emptiness of the work queue)? Otherwise, if some idle tasks
> didn't finish in the previous idle period, newly scheduled idle tasks will be
> ignored in the current one.

I think you're right. I had it this way because the reload is usually done with a swap, but it doesn't really make sense to leak that implementation detail to the caller. Fixed and added a test.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:60: bool PollQueue(size_t queue_index);
On 2014/10/15 09:48:59, alexclarke wrote:
> How about IsQueueEmpty()?

I prefer "poll" since it conveys that this operation has a cost and is not a simple getter. I think I'll add a note about that.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.h:122: const size_t index;
On 2014/10/15 09:48:59, alexclarke wrote:
> The only reason for this appears to be because of the C++11 style ranged for
> loop. As much as I like C++11 style loops, I think it would be better to get rid
> of this :)

Heh, well spotted :) Removed.
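The incoming-queue/work-queue split and the manual pumping semantics discussed above can be sketched roughly as follows. This is a hypothetical, simplified standalone illustration (the real class lives in content/renderer/scheduler/ and posts DoWork to a task runner); note PumpQueue appends rather than swaps, so tasks left over in the work queue from a previous pump are not lost, which is the bug petrcermak pointed out:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <mutex>
#include <utility>

using Task = std::function<void()>;

// Sketch of a single two-level queue: tasks are posted (possibly from
// another thread) to incoming_queue_ under a lock, and PumpQueue moves
// them to work_queue_, which is only touched on the main thread.
class TwoLevelQueue {
 public:
  void PostTask(Task task) {
    std::lock_guard<std::mutex> lock(lock_);
    incoming_queue_.push_back(std::move(task));
  }

  // Reloads the work queue from the incoming queue. Appends instead of
  // swapping so previously pumped but not-yet-run tasks survive.
  void PumpQueue() {
    std::lock_guard<std::mutex> lock(lock_);
    while (!incoming_queue_.empty()) {
      work_queue_.push_back(std::move(incoming_queue_.front()));
      incoming_queue_.pop_front();
    }
  }

  // Runs at most one task from the work queue (main thread only).
  bool RunOneTask() {
    if (work_queue_.empty())
      return false;
    Task task = std::move(work_queue_.front());
    work_queue_.pop_front();
    task();
    return true;
  }

 private:
  std::mutex lock_;
  std::deque<Task> incoming_queue_;  // Guarded by lock_.
  std::deque<Task> work_queue_;      // Main thread only.
};
```

With auto-pump disabled, posted tasks stay invisible to the selector until PumpQueue is called, which is what makes the manual-pump idle queue possible.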
Nice, I like this overall. A couple of comments.

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:134: }
On 2014/10/15 12:05:24, Sami wrote:
> On 2014/10/15 09:07:38, picksi1 wrote:
> > Is ScheduleWork the right name? Should it be more like
> > PostTaskQueueManagerExecution... or something?
>
> ScheduleWork/DoWork seems like a reasonable pairing to me, no? Posting is just
> an implementation detail.

I somewhat agree with Simon that it's difficult to work out from the name whether this schedules work here or posts it for scheduling. I also think "schedule" shouldn't be used, since this function doesn't actually make any scheduling decisions. I would prefer PostDoWorkOnMainRunner or similar.

https://codereview.chromium.org/637303003/diff/450001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/450001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:142: bool TaskQueueManager::ReloadWorkQueues() {
I was going to suggest renaming this function to ReloadAutoPumpedWorkQueues, since it only does reloads for auto-pumped queues, but it also returns true if there is work in any of the queues (including manually pumped queues), which makes it a bit confusing. How about UpdateWorkQueues, which is a bit less specific about whether it reloads the queues or not?
https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/430001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:134: }
On 2014/10/15 13:02:45, rmcilroy wrote:
> On 2014/10/15 12:05:24, Sami wrote:
> > On 2014/10/15 09:07:38, picksi1 wrote:
> > > Is ScheduleWork the right name? Should it be more like
> > > PostTaskQueueManagerExecution... or something?
> >
> > ScheduleWork/DoWork seems like a reasonable pairing to me, no? Posting is just
> > an implementation detail.
>
> I somewhat agree with Simon that it's difficult to work out from the name
> whether this schedules work here or posts it for scheduling. I also think
> "schedule" shouldn't be used, since this function doesn't actually make any
> scheduling decisions. I would prefer PostDoWorkOnMainRunner or similar.

Ah, I was hoping it was now more clear since we have a selector instead of a scheduler. Happy to concede defeat and go with PostDoWorkOnMainRunner :) Done.

https://codereview.chromium.org/637303003/diff/450001/base/task/task_queue_ma...
File base/task/task_queue_manager.cc (right):

https://codereview.chromium.org/637303003/diff/450001/base/task/task_queue_ma...
base/task/task_queue_manager.cc:142: bool TaskQueueManager::ReloadWorkQueues() {
On 2014/10/15 13:02:45, rmcilroy wrote:
> I was going to suggest renaming this function to ReloadAutoPumpedWorkQueues,
> since it only does reloads for auto-pumped queues, but it also returns true if
> there is work in any of the queues (including manually pumped queues), which
> makes it a bit confusing. How about UpdateWorkQueues, which is a bit less
> specific about whether it reloads the queues or not?

UpdateWorkQueues sgtm. I've now documented what the function actually does.
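For clarity, the semantics agreed on above (renamed UpdateWorkQueues: reload only auto-pumped queues, but report whether any queue has runnable work) could look roughly like this. This is a hypothetical, heavily simplified sketch, not the code under review; QueueSketch and the int placeholder tasks are invented for illustration:

```cpp
#include <cassert>
#include <deque>
#include <vector>

// Simplified stand-in for InternalTaskQueue: an incoming queue, a work
// queue, and an auto_pump flag controlling automatic reloads. Tasks are
// represented by plain ints here for brevity.
struct QueueSketch {
  std::deque<int> incoming_queue;
  std::deque<int> work_queue;
  bool auto_pump = true;
};

// Reloads the work queues of auto-pumped queues from their incoming
// queues, then returns true if any queue has work ready to run.
// Manually pumped queues are only reloaded via an explicit PumpQueue.
bool UpdateWorkQueues(std::vector<QueueSketch>& queues) {
  bool has_work = false;
  for (QueueSketch& queue : queues) {
    if (queue.auto_pump && queue.work_queue.empty()) {
      queue.work_queue.swap(queue.incoming_queue);
    }
    has_work |= !queue.work_queue.empty();
  }
  return has_work;
}
```

The swap-only-when-empty reload shown here is the cheap path; as discussed earlier in the thread, an explicit PumpQueue would need to append rather than swap so leftover tasks are not dropped.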
skyostil@chromium.org changed reviewers: + danakj@chromium.org - darin@chromium.org
I'm going out on a limb and assuming Darin is oversubscribed. Dana, wanna have a look instead?
Ping? I'm scaring off all reviewers with this patch. Not a great sign.
On 2014/10/16 17:30:00, Sami wrote:
> Ping? I'm scaring off all reviewers with this patch. Not a great sign.

Can I ask why this should live in base/?
On 2014/10/16 17:31:04, danakj wrote:
> Can I ask why this should live in base/?

Two reasons:

1. While the main client lives in content/, we're anticipating that the mojo blink embedder will also want to use it for its blink platform implementation.

2. We felt that the functionality is fairly generic, and base seemed like the natural place for it.

I don't feel too strongly about this, so it could live in content/ too for now. WDYT?
Let's move it to base if it makes sense in the future. We should avoid adding to base without a strong case like many places that will use it.

You could also put it in webkit/ perhaps, tho maybe that's all moving to content/renderer/ anyway, I think. Tfarina probably knows more about that.

On Oct 17, 2014 5:58 AM, <skyostil@chromium.org> wrote:
> On 2014/10/16 17:31:04, danakj wrote:
> > Can I ask why this should live in base/?
>
> Two reasons:
>
> 1. While the main client lives in content/, we're anticipating that the
> mojo blink embedder will also want to use it for its blink platform
> implementation.
>
> 2. We felt that the functionality is fairly generic and base seemed like the
> natural place for it.
>
> I don't feel too strongly about this so it could live in content/ too for now.
> WDYT?
>
> https://codereview.chromium.org/637303003/

To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
skyostil@chromium.org changed reviewers: + jochen@chromium.org - danakj@chromium.org
On 2014/10/17 15:26:36, danakj wrote:
> Let's move it to base if it makes sense in the future. We should avoid
> adding to base without a strong case like many places that will use it.
>
> You could also put it in webkit/ perhaps, tho maybe that's all moving to
> content/renderer/ anyway, I think. Tfarina probably knows more about that.

Right, it's not clear to me what happens to webkit/ once we have the one repository to rule them all, so content/ seems like a good place for the time being. Thanks for the advice!

Jochen, can I persuade you to have a look?
jochen@chromium.org changed reviewers: + danakj@chromium.org
adding danakj back in.

I don't think it makes sense to move this to content/ now, and then move it back to base/ in the next CL. There are two embedders of blink in chromium, content and mojo, and we'll need this in both.

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_manager.h (right):

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.h:70: class TaskRunner : public base::SingleThreadTaskRunner {
do TaskRunner and InternalTaskQueue need to live in the header, or is it enough to forward declare them?

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.h:90: };
disallow copy & assign

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.h:91: friend class TaskRunner;
internal classes are auto-friends, no?

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager.h:126: struct InternalTaskQueue {
this should be a class (and disallow copying)

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
File content/renderer/scheduler/task_queue_manager_unittest.cc (right):

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager_unittest.cc:42: };
nit: disallow copy & assign

https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu...
content/renderer/scheduler/task_queue_manager_unittest.cc:45: public:
nit: not needed
also, I'd prefer if somebody with more background on our message loops would review this. happy to rubberstamp then
On Mon, Oct 20, 2014 at 9:35 AM, <jochen@chromium.org> wrote:
> adding danakj back in.
>
> I don't think it makes sense to move this to content/ now, and then move it back
> to base/ in the next CL.

There is no place other than base that content/ and mojo/ can both use that would make sense for this to live?

> There are two embedders of blink in chromium, content and mojo, and we'll need
> this in both.
>
> https://codereview.chromium.org/637303003/diff/510001/content/renderer/scheduler/task_queue_manager.h
> File content/renderer/scheduler/task_queue_manager.h (right):
>
> content/renderer/scheduler/task_queue_manager.h:70: class TaskRunner : public base::SingleThreadTaskRunner {
> do TaskRunner and InternalTaskQueue need to live in the header, or is it
> enough to forward declare them
>
> content/renderer/scheduler/task_queue_manager.h:90: };
> disallow copy & assign
>
> content/renderer/scheduler/task_queue_manager.h:91: friend class TaskRunner;
> internal classes are auto-friends, no?
>
> content/renderer/scheduler/task_queue_manager.h:126: struct InternalTaskQueue {
> this should be a class (and disallow copying)
>
> https://codereview.chromium.org/637303003/diff/510001/content/renderer/scheduler/task_queue_manager_unittest.cc
> File content/renderer/scheduler/task_queue_manager_unittest.cc (right):
>
> content/renderer/scheduler/task_queue_manager_unittest.cc:42: };
> nit. disallow copy & assign
>
> content/renderer/scheduler/task_queue_manager_unittest.cc:45: public:
> nit. not needed
>
> https://codereview.chromium.org/637303003/
we could create such a place, but it's essentially a fancy thread pool, so I thought base/ is actually a good place for this.
On Mon, Oct 20, 2014 at 10:48 AM, <jochen@chromium.org> wrote:
> we could create such a place, but it's essentially a fancy thread pool, so I
> thought base/ is actually a good place for this.

I think we should do our best to keep things out of base/ until there's a solid argument for widespread use. This seems pretty specific to a single use case at the moment, so it'd be better homed somewhere else.

(+1 to someone familiar with message loop code reviewing also)
skyostil@chromium.org changed reviewers: + cpu@chromium.org
On 2014/10/20 at 14:51:49, danakj wrote: > On Mon, Oct 20, 2014 at 10:48 AM, <jochen@chromium.org> wrote: > > > we could create such a place, but it's essentially a fancy thread pool, so > > I > > thought base/ is actually a good place for this. > > > > I think we should do our best to keep things out of base/ until there's a > solid argument for widespread use. This seems pretty specific to a single > use case at the moment, so it'd be better homed somewhere else. isn't using it in mojo/ and content/ widespread? > > (+1 to someone familiar with message loop code reviewing also) > > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org. >
On Mon, Oct 20, 2014 at 10:57 AM, <jochen@chromium.org> wrote: > On 2014/10/20 at 14:51:49, danakj wrote: >> On Mon, Oct 20, 2014 at 10:48 AM, <jochen@chromium.org> wrote: >> > > > we could create such a place, but it's essentially a fancy thread pool, >> so >> > I >> > thought base/ is actually a good place for this. >> > >> > > I think we should do our best to keep things out of base/ until there's a >> solid argument for widespread use. This seems pretty specific to a single >> use case at the moment, so it'd be better homed somewhere else. >> > > isn't using it in mojo/ and content/ widespread? I don't think so? It's 2 callsites.. by that argument everything in cc/ should be under base/.
no, because cc/ is one component. mojo and content are two different projects
Thanks for taking a look, Jochen. I'm happy to put this wherever the code review hive mind decrees :) My super-weak preference is to start it off in content/ and then move it to base/ once it has stabilized a bit. On 2014/10/20 14:51:49, danakj wrote: > On Mon, Oct 20, 2014 at 10:48 AM, <mailto:jochen@chromium.org> wrote: > > (+1 to someone familiar with message loop code reviewing also) Carlos, could you be that special someone? We're hoping to get someone with message loop experience to take a look at the new queuing mechanism we're adding here. I assume you've seen the design doc in the earlier thread we had.
On Mon, Oct 20, 2014 at 11:00 AM, <jochen@chromium.org> wrote: > no, because cc/ is one component. mojo and content are two different > projects > What part of mojo would use this? Can you point me to a directory?
it will be used in //mojo/services/html_viewer I'd recommend jar@ for message loop
skyostil@chromium.org changed reviewers: + jar@chromium.org
On 2014/10/20 15:06:07, jochen wrote: > it will be used in //mojo/services/html_viewer > > I'd recommend jar@ for message loop Okay, adding jar@ too.
https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.h (right): https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.h:70: class TaskRunner : public base::SingleThreadTaskRunner { On 2014/10/20 13:35:31, jochen wrote: > do TaskRunner and InternalTaskQueue need to live in the header, or is it enough > to forward declare them Good point, looks like we can just forward declare. https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.h:90: }; On 2014/10/20 13:35:31, jochen wrote: > disallow copy & assign Done. https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.h:91: friend class TaskRunner; On 2014/10/20 13:35:31, jochen wrote: > internal classes are auto-friends, no? Oops, I keep forgetting they changed that after TR1. Had to keep this here since I moved the classes to the implementation side. https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.h:126: struct InternalTaskQueue { On 2014/10/20 13:35:31, jochen wrote: > this should be a class (and disallow copying) Done. https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager_unittest.cc (right): https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager_unittest.cc:42: }; On 2014/10/20 13:35:31, jochen wrote: > nit. disallow copy & assign Done. https://codereview.chromium.org/637303003/diff/510001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager_unittest.cc:45: public: On 2014/10/20 13:35:31, jochen wrote: > nit. not needed Done.
Overall I think this is acceptable. location: I argue we should bake it in content, moving from content to wherever should be easy. Sami, do you want to support both modes right off the bat? I mean autopump and manual-pump? I assume one is going to get used and one is going to be a nice-to-have. If so, we should keep just one.
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:32: base::WeakPtr<TaskQueueManager> task_queue_manager_; why is this weak? https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:69: } It would seem at first blush that the only value that TaskRunner adds to the thing is the queue_index? that is, it feels that we could not have the taskrunner at all and just have the TQM expose PostNonNestableDelayedTask et al, forcing the client to remember the queue number? I am not totally set against it, but it seems a lot of work for just a number, especially if the clients are going to be modified: touch_task_runner->PostTask(...) vs task_runner->PostTask(TOUCH_TASK, ...)
On 2014/10/20 19:35:29, cpu wrote: > Overall I think this is acceptable. Thanks for having a look! > location: I argue we should bake it in content, moving from content to wherever > should be easy. That works for me. > Sami, do you want to support both modes right of the bat? I mean autopump and > manual-pump? > > I assume one is going to get used and one is going to be a nice-to-have. Is so > we should leave one. The plan is to use both modes right from the start: manual pumping for idle tasks and automatic pumping for everything else. Ross is working on the patch to do that here: https://codereview.chromium.org/664963002/ I guess you could argue that auto-pumping is really a convenience, but we think it leaves more room to later optimize the internal pumping mechanism (e.g., bulk-pump all queues with a single lock instead of N locks). https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:32: base::WeakPtr<TaskQueueManager> task_queue_manager_; On 2014/10/20 20:00:10, cpu wrote: > why is this weak? It's needed to avoid posting new tasks after the scheduler has shut down. SingleThreadTaskRunner is refcounted, so we can't guarantee that the queue manager outlives it. https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:69: } On 2014/10/20 20:00:10, cpu wrote: > It would seem at first blush that the only value that TaskRunner adds to the > thing is the queue_index ? that is, it feels that we could not have the > taskrunner at all and just have the TQM expose PostNonNestableDelayedTask et al > forcing the client to remember the queue number? 
> > I am not totally set against it, but seems a lot of work for just a number, > specially if the clients are going to be modified: > > touch_task_runner->PostTask(...) vs task_runner->PostTask(TOUCH_TASK, ...) > > The reason we went for the TaskRunner was that all the clients that need to submit tasks to the manager (e.g., cc, the input event filter) are already parameterized using a SingleThreadTaskRunner. This way we can just hook them up to the appropriate queue and the knowledge about individual queues doesn't need to be baked in everywhere.
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:32: base::WeakPtr<TaskQueueManager> task_queue_manager_; On 2014/10/21 10:32:54, Sami wrote: > On 2014/10/20 20:00:10, cpu wrote: > > why is this weak? > > It's needed to avoid posting new tasks after the scheduler has shut down. > SingleThreadTaskRunner is refcounted, so we can't guarantee that the queue > manager outlives it. Should the QM take a reference instead? https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:69: } On 2014/10/21 10:32:54, Sami wrote: > On 2014/10/20 20:00:10, cpu wrote: > > It would seem at first blush that the only value that TaskRunner adds to the > > thing is the queue_index ? that is, it feels that we could not have the > > taskrunner at all and just have the TQM expose PostNonNestableDelayedTask et > al > > forcing the client to remember the queue number? > > > > I am not totally set against it, but seems a lot of work for just a number, > > specially if the clients are going to be modified: > > > > touch_task_runner->PostTask(...) vs task_runner->PostTask(TOUCH_TASK, ...) > > > > > > The reason we went for the TaskRunner was that all the clients that need to > submit tasks to the manager (e.g., cc, the input event filter) are already > parameterized using a SingleThreadTaskRunner. This way we can just hook them up > to the appropriate queue and the knowledge about individual queues doesn't need > to be baked in everywhere. Acknowledged. https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:72: public: this should be a struct. https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... 
content/renderer/scheduler/task_queue_manager.cc:106: for (size_t i = 0; i < queues_.size(); i++) use range for, that has been approved, right? https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:124: bool TaskQueueManager::PollQueue(size_t queue_index) { what is the use case for poll? wouldn't we want to reload anyhow? https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:235: if (delay > base::TimeDelta()) { why is the logic here different from the logic in PostNonNestableDelayedTask ?
Thanks for the review! https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:32: base::WeakPtr<TaskQueueManager> task_queue_manager_; On 2014/10/21 19:46:07, cpu wrote: > On 2014/10/21 10:32:54, Sami wrote: > > On 2014/10/20 20:00:10, cpu wrote: > > > why is this weak? > > > > It's needed to avoid posting new tasks after the scheduler has shut down. > > SingleThreadTaskRunner is refcounted, so we can't guarantee that the queue > > manager outlives it. > > Should the QM take a reference instead? I'm not sure I got that -- QM already holds a reference to the task runner (although it strictly doesn't need one). The basic problem is that TaskRunner is refcounted (through inheritance) and QM isn't. This means QM may get deleted before all the references to the task runners have been cleared, which is what the weak pointer guards against. We don't think QM should become refcounted because it is supposed to have a single owner and a very clear life cycle. Having the weak pointer here is the simplest way we could think of to achieve this. (The other ideas we had were just weak pointers in disguise.) Does that make sense? https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:72: public: On 2014/10/21 19:46:07, cpu wrote: > this should be a struct. I turned it into a class on Jochen's suggestion, but a struct seems fine too. Done. https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:106: for (size_t i = 0; i < queues_.size(); i++) On 2014/10/21 19:46:07, cpu wrote: > use range for, that has been approved, right? Ah, that's true. Thanks, done. 
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:124: bool TaskQueueManager::PollQueue(size_t queue_index) { On 2014/10/21 19:46:07, cpu wrote: > what is the use case for poll? wouldn't we want to reload anyhow? The use case is checking whether new high priority tasks are pending either in the work or incoming queues so that the foreground task can choose to yield back to the message loop. We do this, for example, with the HTML parser so it knows when to give room for other work. You're right that we'll end up reloading anyway on the next iteration of the message loop. We thought about this and still prefer to expose the operation as a poll (now called IsQueueEmpty), because that's what the client actually wants to do instead of reloading anything. https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:235: if (delay > base::TimeDelta()) { On 2014/10/21 19:46:07, cpu wrote: > why is the logic here different from the logic in PostNonNestableDelayedTask ? We want to defer any non-nestable work directly to the main message loop because it is rare enough (as far as I know) that how it is scheduled doesn't make a huge difference, and doing anything else would complicate the task queue manager significantly because we'd need to add a separate deferred work queue for it.
lgtm on my side.
lgtm
The CQ bit was checked by skyostil@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/637303003/590001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: mac_chromium_rel on tryserver.chromium.mac (http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel/...)
The CQ bit was checked by skyostil@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/637303003/590001
Message was sent while issue was closed.
Committed patchset #15 (id:590001)
Message was sent while issue was closed.
Patchset 15 (id:??) landed as https://crrev.com/1161fca345efbace93bfda2a874f7dde3f3e5381 Cr-Commit-Position: refs/heads/master@{#300870}
Message was sent while issue was closed.
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:148: PostDoWorkOnMainRunner(); In general, it is a bad idea to do much work while a lock is held, and more often than not, an error to call out to an arbitrary function while holding a lock. Is there a reason why you need the lock held when calling this function? https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:166: queue->work_queue.push(queue->incoming_queue.front()); I'm not sure if I'm understanding this full context... but... The act of pumping one-task-at-a-time gives up some super valuable results of the current message loop / message pump system. The current system: a) Gets faster and faster at servicing a queue the further the queue falls behind (the larger the incoming queue). b) Provides asymptotically zero-locking-cost to the thread running the tasks as the queue gets further and further behind (as the number of unserviced tasks grows). The above system level features were critical for precluding a large class of denial of service attacks (I was really proud of resolving them). Can you comment on why the above is not an issue for you? ...or... Could you consider other mechanisms for processing the external queue?? For instance, if you're not willing to do the swap (loading the whole external queue into a thread-specific queue each time the thread specific queue empties), would you consider (for example) loading half of the external queue (rounded up) each time you try to pump? Is there a reason why the above is bad?
Message was sent while issue was closed.
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:148: PostDoWorkOnMainRunner(); On 2014/10/28 22:33:41, jar wrote: > In general, it is a bad idea to do much work while a lock is held, and more > often than not, an error to call out to an arbitrary function while holding a > lock. > > Is there a reason why you need the lock held when calling this function. You're right, PostDoWorkOnMainRunner() doesn't need any protection from the lock. I'll write a CL to move it outside the lock scope. https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:166: queue->work_queue.push(queue->incoming_queue.front()); On 2014/10/28 22:33:41, jar wrote: > I'm not sure if I'm understanding this full context... but... > > The act of pumping one-task-at-a-time gives up some super valuable results of > the current message loop / message pump system. > > The current system: > a) Gets faster and faster at servicing a queue the further the queue falls > behind (the larger the incoming queue). > b) Provides asymptotically zero-locking-cost to the thread running the tasks as > the queue gets further and further behind (as the number of unserviced tasks > grows). > > The above system level features were critical for precluding a large class of > denial of service attacks (I was really proud of resolving them). > > Can you comment on why the above is not an issue for you? > > ...or... > > Could you consider other mechanisms for processing the external queue?? 
For > instance, if you're not willing to do the swap (loading the whole external queue > into a thread-specific queue each time the thread specific queue empties), would > would consider (for example) loading half of the external queue (rounded up) > each time you try to pump? > > Is there a reason why the above is bad? First, thanks for shedding some light on the reasoning behind the message loop's swapping logic. It's easy to see how the mechanism works by reading the code, but the *why* is always more interesting. In our implementation we actually do full queue swaps most of the time (see ReloadWorkQueue). We call this mode automatic pumping, and it is the default for all but one of the queues. The exception is the idle task queue, which we want to pump exactly once a frame. This lets us synchronize idle task execution with graphics frame rendering. In other words, PumpQueueLocked will get called once per frame instead of once per posted task, which makes the locking cost constant. That said, there is a higher level issue here too: because we need to coexist with the base message loop, we're only letting ourselves run exactly one task every time our scheduling function (DoWork) runs. This isn't optimal because it increases the number of queue swaps done in the message loop. I think we could experiment with running N tasks (where N is a small constant) per iteration to reduce this overhead, but for now N = 1 seems like a safer way to get started.
Message was sent while issue was closed.
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:148: PostDoWorkOnMainRunner(); On 2014/10/29 11:36:19, Sami wrote: > On 2014/10/28 22:33:41, jar wrote: > > In general, it is a bad idea to do much work while a lock is held, and more > > often than not, an error to call out to an arbitrary function while holding a > > lock. > > > > Is there a reason why you need the lock held when calling this function. > > You're right, PostDoWorkOnMainRunner() doesn't need any protection from the > lock. I'll write a CL to move it outside the lock scope. On further reflection, we do need to hold the lock because it prevents the TaskQueueManager from getting deleted from under us as we're posting the task. This code has been refactored since this patch to make the locking dependencies more explicit.
Message was sent while issue was closed.
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:148: PostDoWorkOnMainRunner(); On 2014/10/29 12:49:08, Sami wrote: > On 2014/10/29 11:36:19, Sami wrote: > > On 2014/10/28 22:33:41, jar wrote: > > > In general, it is a bad idea to do much work while a lock is held, and more > > > often than not, an error to call out to an arbitrary function while holding > a > > > lock. > > > > > > Is there a reason why you need the lock held when calling this function. > > > > You're right, PostDoWorkOnMainRunner() doesn't need any protection from the > > lock. I'll write a CL to move it outside the lock scope. > > On further reflection, we do need to hold the lock because it prevents the > TaskQueueManager from getting deleted from under us as we're posting the task. > This code has been refactored since this patch to make the locking dependencies > more explicit. Can you point to how the lock acquisition precludes destruction? If that were the only thing blocking destruction... then couldn't destruction happen between line 145 and 146?? ...and wouldn't that cause problems on line 146 when trying to lock? https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:166: queue->work_queue.push(queue->incoming_queue.front()); On 2014/10/29 11:36:19, Sami wrote: > On 2014/10/28 22:33:41, jar wrote: > > I'm not sure if I'm understanding this full context... but... > > > > The act of pumping one-task-at-a-time gives up some super valuable results of > > the current message loop / message pump system. > > > > The current system: > > a) Gets faster and faster at servicing a queue the further the queue falls > > behind (the larger the incoming queue). 
> > b) Provides asymptotically zero-locking-cost to the thread running the tasks > as > > the queue gets further and further behind (as the number of unserviced tasks > > grows). > > > > The above system level features were critical for precluding a large class of > > denial of service attacks (I was really proud of resolving them). > > > > Can you comment on why the above is not an issue for you? > > > > ...or... > > > > Could you consider other mechanisms for processing the external queue?? For > > instance, if you're not willing to do the swap (loading the whole external > queue > > into a thread-specific queue each time the thread specific queue empties), > would > > would consider (for example) loading half of the external queue (rounded up) > > each time you try to pump? > > > > Is there a reason why the above is bad? > > First, thanks for shedding some light on the reasoning behind the message loop's > swapping logic. It's easy to see how the mechanism works by reading the code, > but the *why* is always more interesting. > > In our implementation we actually do full queue swaps most of the time (see > ReloadWorkQueue). We call this mode automatic pumping, and it is the default for > all but one of the queues. The exception is the idle task queue, which we want > to pump exactly once a frame. This lets us synchronize idle task execution with > graphics frame rendering. In other words, PumpQueueLocked will get called once > per frame instead of once per posted task, which makes the locking cost > constant. > > That said, there is a higher level issue here too: because we need to coexist > with the base message loop, we're only letting ourselves run exactly one task > every time our scheduling function (DoWork) runs. This isn't optimal because it > increases the number of queue swaps done in the message loop. 
I think we could > experiment with running N tasks (where N is a small constant) per iteration to > reduce this overhead, but for now N = 1 seems like a safer way to get started. Acknowledged. https://codereview.chromium.org/637303003/diff/590001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.h (right): https://codereview.chromium.org/637303003/diff/590001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.h:105: internal::TaskQueue* Queue(size_t queue_index) const; I found this declaration, and its use, but didn't see its implementation. Where is it? (It doesn't even show in the unified diff for the patch.)
Message was sent while issue was closed.
https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.cc (right): https://codereview.chromium.org/637303003/diff/550001/content/renderer/schedu... content/renderer/scheduler/task_queue_manager.cc:148: PostDoWorkOnMainRunner(); On 2014/10/29 21:14:07, jar wrote: > On 2014/10/29 12:49:08, Sami wrote: > > On 2014/10/29 11:36:19, Sami wrote: > > > On 2014/10/28 22:33:41, jar wrote: > > > > In general, it is a bad idea to do much work while a lock is held, and > more > > > > often than not, an error to call out to an arbitrary function while > holding > > a > > > > lock. > > > > > > > > Is there a reason why you need the lock held when calling this function. > > > > > > You're right, PostDoWorkOnMainRunner() doesn't need any protection from the > > > lock. I'll write a CL to move it outside the lock scope. > > > > On further reflection, we do need to hold the lock because it prevents the > > TaskQueueManager from getting deleted from under us as we're posting the task. > > This code has been refactored since this patch to make the locking > dependencies > > more explicit. > > Can you point to how the lock acquisition precludes destruction? > > If that were the only thing blocking destruction... then couldn't destruction > happen between line 145 and 146?? ...and wouldn't that cause problems on line > 146 when trying to lock? The locking wasn't quite right in this version of the patch. See here: https://code.google.com/p/chromium/codesearch#chromium/src/content/renderer/s... Each queue is notified about TQM going away in its destructor. This notification is protected by the same lock that is held while PostDoWorkOnMainRunner() is called. https://codereview.chromium.org/637303003/diff/590001/content/renderer/schedu... File content/renderer/scheduler/task_queue_manager.h (right): https://codereview.chromium.org/637303003/diff/590001/content/renderer/schedu... 
content/renderer/scheduler/task_queue_manager.h:105: internal::TaskQueue* Queue(size_t queue_index) const; On 2014/10/29 21:14:07, jar wrote: > I found this declaration, and its use, but didn't see its implementation. Where > is it? (I doesn't even show in the unified diff for the patch). It's at line 113 of task_queue_manager.cc in this patch set. |