Chromium Code Reviews

Created: 4 years, 10 months ago by ericrk
Modified: 4 years, 9 months ago
CC: blink-worker-reviews_chromium.org, chromium-reviews, darin-cc_chromium.org, jam, kinuko+watch, mkwst+moarreviews-renderer_chromium.org, mlamouri+watch-content_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description

Refactor signaling in RWP
A previous patch had moved RasterWorkerPool to make heavy use of
broadcast in order to simplify logic a bit. It turns out that broadcast
is not ideal in performance-critical situations, as it increases lock
contention (see
https://code.google.com/p/chromium/codesearch#chromium/src/base/synchronization/condition_variable.h&l=34).
This change removes all uses of broadcast other than in Shutdown, where
performance is less of a concern and we actually need all threads to
wake up.

To achieve this, we need to refactor RWP to use:
- N foreground threads
- 1 background thread
rather than N threads which may each run foreground or background tasks.

To ensure that we don't overload the system, this change introduces a
limiting scheme where the background-priority thread can only run tasks
if no other threads are doing so.
BUG=
CQ_INCLUDE_TRYBOTS=tryserver.blink:linux_blink_rel
Committed: https://crrev.com/f064193fe71b8b055d060adb10fbc4e3cca8c604
Cr-Commit-Position: refs/heads/master@{#377379}
Patch Set 1
Patch Set 2
Total comments: 2
Patch Set 3: refactor
Patch Set 4
Total comments: 24
Patch Set 5: feedback
Patch Set 6: rebase
Patch Set 7: fix build
Total comments: 12
Patch Set 8: feedback
Total comments: 20
Patch Set 9: feedback
Patch Set 10: Fix detection of already running tasks when category changes
Patch Set 11: Fix refcounting issue.
Total comments: 2
Patch Set 12: Avoid refcounting in schedule
Total comments: 2
Patch Set 13: x++ > ++x
Total comments: 1
Messages
Total messages: 55 (24 generated)
ericrk@chromium.org changed reviewers: + reveman@chromium.org, sievers@chromium.org
Hi David,

Turns out that it's a bit tricky to get rid of broadcast without splitting out separate thread pools per category. I ended up coding something up which does both of these things - more of a WIP for now, but I wanted to send it over and see what you thought of the overall approach. Thanks!
Some suggestions below which I'm not 100% sure make sense.

https://codereview.chromium.org/1666283002/diff/20001/content/renderer/raster...
File content/renderer/raster_worker_pool.h (right):

https://codereview.chromium.org/1666283002/diff/20001/content/renderer/raster...
content/renderer/raster_worker_pool.h:163: // Counters which track the number of running tasks for each category.
Can we use the existing run count in work_queue instead of duplicating the state here?

https://codereview.chromium.org/1666283002/diff/20001/content/renderer/raster...
content/renderer/raster_worker_pool.h:168: int target_running_task_count_;
I find this a bit awkward. Now that we're using the "run count" to determine what to run, could we just have "num_threads" foreground-priority threads that can all run nonconcurrent and normal foreground tasks, where a nonconcurrent task may only run if zero nonconcurrent tasks are currently running? In this case only two CVs would be needed, one for background-tasks-can-run and one for foreground-tasks-can-run. When any foreground thread wakes up, it would run a nonconcurrent task if possible, otherwise a normal foreground task.
Thanks! Took a stab at your suggestions. Let me know how this looks.
Looks great. Some nits, questions and minor suggestions.

https://codereview.chromium.org/1666283002/diff/60001/cc/raster/task_graph_wo...
File cc/raster/task_graph_work_queue.h (right):

https://codereview.chromium.org/1666283002/diff/60001/cc/raster/task_graph_wo...
cc/raster/task_graph_work_queue.h:148: uint32_t NumRunningTasks() const {
Do we need both of these functions, or is the one below sufficient?

https://codereview.chromium.org/1666283002/diff/60001/cc/raster/task_graph_wo...
cc/raster/task_graph_work_queue.h:189: std::map<uint16_t, uint32_t> running_task_count_;
I'd prefer if we minimized state and looped over all namespaces to compute this. We only have one namespace in the most common case. It also seems consistent with TaskNamespace::ready_to_run_tasks to also have ::running_tasks be a map. Wdyt?

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
File content/renderer/raster_worker_pool.cc (right):

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.cc:39: base::ConditionVariable* const work_ready_cv_;
nit: ready_to_run_tasks_cv_ for consistency

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.cc:135: // start |num_threads| threads for foreground work, including nonconcurrent
nit: s/start/Start/

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.cc:377: bool RasterWorkerPool::ShouldRunTaskForCategory(cc::TaskCategory category) {
nit: lock_.AssertAcquired() and add "WithLockAcquired" suffix to function name

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.cc:379: // enforcing running task limits.
Is this special case worthwhile? It seems like it makes sense to keep the same restrictions during shutdown, to avoid corner cases and potentially hard-to-diagnose bugs.

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.cc:402: void RasterWorkerPool::SignalTaskReadyConditionVariables() {
nit: add "WithLockAcquired" suffix to function name

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.cc:403: lock_.AssertAcquired();
nit: blank line after this to separate it from the implementation of the function

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.cc:408: // Only run background tasks if there are no forground tasks left.
hm, it looks like this logic is both here and inside the ShouldRunTaskForCategory function. Assuming we need ShouldRunTaskForCategory, can all this logic, such as only running background tasks when no foreground tasks are running, be constrained to that function? i.e. can we remove the "else" and move this comment to ShouldRunTaskForCategory?

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
File content/renderer/raster_worker_pool.h (right):

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.h:107: void SignalTaskReadyConditionVariables();
nit: SignalHasReadyToRunTasks(). I think that's more consistent with the naming of other existing variables and functions, and "ConditionVariables" in the name doesn't seem necessary - it's more of an implementation detail.

https://codereview.chromium.org/1666283002/diff/60001/content/renderer/raster...
content/renderer/raster_worker_pool.h:140: int target_running_task_count_;
Can we remove target_running_task_count_ now?
Patchset #7 (id:120001) has been deleted
https://chromiumcodereview.appspot.com/1666283002/diff/60001/cc/raster/task_g...
File cc/raster/task_graph_work_queue.h (right):

https://chromiumcodereview.appspot.com/1666283002/diff/60001/cc/raster/task_g...
cc/raster/task_graph_work_queue.h:148: uint32_t NumRunningTasks() const {
On 2016/02/08 19:10:21, reveman wrote:
> Do we need both of these functions or is the one below sufficient?

Not really - we can drop this one for now.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/cc/raster/task_g...
cc/raster/task_graph_work_queue.h:189: std::map<uint16_t, uint32_t> running_task_count_;
On 2016/02/08 19:10:21, reveman wrote:
> I'd prefer if we minimized state and looped over all namespaces to compute this.
> We only have one namespace in the most common case. It also seems consistent
> with TaskNamespace::ready_to_run_tasks to also have ::running_tasks be a map.
> Wdyt?

Sure - I went back and forth on this; can make this change.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
File content/renderer/raster_worker_pool.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:39: base::ConditionVariable* const work_ready_cv_;
On 2016/02/08 19:10:21, reveman wrote:
> nit: ready_to_run_tasks_cv_ for consistency

Done.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:135: // start |num_threads| threads for foreground work, including nonconcurrent
On 2016/02/08 19:10:21, reveman wrote:
> nit: s/start/Start/

Done.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:377: bool RasterWorkerPool::ShouldRunTaskForCategory(cc::TaskCategory category) {
On 2016/02/08 19:10:21, reveman wrote:
> nit: lock_.AssertAcquired() and add "WithLockAcquired" suffix to function name

Done.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:379: // enforcing running task limits.
On 2016/02/08 19:10:21, reveman wrote:
> Is this special case worthwhile? Seems like it makes sense to keep the same
> restrictions during shutdown to avoid corner cases and potentially hard to
> diagnose bugs.

Makes sense - will remove this.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:402: void RasterWorkerPool::SignalTaskReadyConditionVariables() {
On 2016/02/08 19:10:21, reveman wrote:
> nit: add "WithLockAcquired" suffix to function name

Done.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:403: lock_.AssertAcquired();
On 2016/02/08 19:10:21, reveman wrote:
> nit: blank line after this to separate it from the implementation of the
> function

Done.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:408: // Only run background tasks if there are no forground tasks left.
On 2016/02/08 19:10:21, reveman wrote:
> hm, it looks like this logic is both here and inside the
> ShouldRunTaskForCategory function. Assuming we need ShouldRunTaskForCategory,
> can all this logic such as only running background tasks when no foreground
> tasks are running be constrained to that function? ie. can we remove the "else"
> and move this comment to ShouldRunTaskForCategory.

We can do this - it results in a little extra work being done, but it does clean up the logic.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
File content/renderer/raster_worker_pool.h (right):

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.h:107: void SignalTaskReadyConditionVariables();
On 2016/02/08 19:10:21, reveman wrote:
> nit: SignalHasReadyToRunTasks(). I think that's more consistent with the naming
> of other existing variables and functions and ConditionVariables in the name
> doesn't seem necessary and more of an implementation detail

Done.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.h:140: int target_running_task_count_;
On 2016/02/08 19:10:21, reveman wrote:
> Can we remove target_running_task_count_ now?

I guess we don't strictly need it. It gets us the ability to limit the foreground + background tasks to a fixed number - if a background task is already running when new work comes in, we won't spin up as many foreground tasks until that background task finishes, keeping us within the intended limit.

I think it's OK to temporarily exceed the limit in this case, especially as it simplifies the logic a bit - removing this for now. WDYT?
You can ignore the SignalHasReadyToRunTasks nits if you like. This looks good once I understand why the signal after work_queue_.CompleteTask has been removed.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
File content/renderer/raster_worker_pool.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.cc:408: // Only run background tasks if there are no forground tasks left.
On 2016/02/10 at 18:44:49, ericrk wrote:
> We can do this - results in a little extra work being done, but does clean up the logic.

Sorry - I was assuming that a separate ShouldRunTaskForCategory was necessary, but it looks like that is not the case, so maybe the best option is to merge all this into this function to avoid extra work and make it as clean as possible. The latest patch is fine though, so if you prefer that then no need to change it, except maybe to remove the category argument. See my comment on the latest patch.

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
File content/renderer/raster_worker_pool.h (right):

https://chromiumcodereview.appspot.com/1666283002/diff/60001/content/renderer...
content/renderer/raster_worker_pool.h:140: int target_running_task_count_;
On 2016/02/10 at 18:44:49, ericrk wrote:
> I guess we don't strictly need this. It gets us the ability to limit the foreground + background tasks to a fixed number - if a background task is already running when new work comes in, we won't spin up as many foreground tasks until that background task finishes, keeping us in the intended limit.
> I think it's OK to temporarily exceed the limits in this case, especially as it simplifies the logic a bit - removing this for now.
> WDYT?

I think it's best to limit, as much as possible, how background work can affect how efficiently we run foreground work. Not having this limit improves that.

https://chromiumcodereview.appspot.com/1666283002/diff/140001/cc/raster/task_...
File cc/raster/task_graph_work_queue.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/140001/cc/raster/task_...
cc/raster/task_graph_work_queue.cc:222: // If |running_tasks_for_category| is empty, remove it from |running_tasks|.
Is this necessary? Doesn't it cause some heap allocations that it would be better to avoid?

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
File content/renderer/raster_worker_pool.cc (left):

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:349: has_ready_to_run_tasks_cv_.Broadcast();
Why is this no longer needed? Doesn't CompleteTask potentially result in more tasks being available to run?

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
File content/renderer/raster_worker_pool.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:233: SignalHasReadyToRunTasksWithLockAcquired();
Is this necessary if we keep the signal after work_queue_.CompleteTask? See my other comment for this to make sense.

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:375: bool RasterWorkerPool::ShouldRunTaskForCategoryWithLockAcquired(
Do you think merging this into SignalHasReadyToRunTasksWithLockAcquired would make for the most efficient and easiest-to-read code? If you'd like to use helper functions instead of having all the code in SignalHasReadyToRunTasksWithLockAcquired(), then can we remove the |category| parameter, which seems to be known at compile time, and instead have two separate helper functions? i.e. ShouldRunForegroundTask + ShouldRunBackgroundTask. Maybe even move these helpers to the anonymous namespace.

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:402: return work_queue_.HasReadyToRunTasksForCategory(category);
Maybe early out and avoid all the work above if this is not true?
Thanks for the feedback - tried to explain the reasoning behind the signaling pattern a bit more.

https://chromiumcodereview.appspot.com/1666283002/diff/140001/cc/raster/task_...
File cc/raster/task_graph_work_queue.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/140001/cc/raster/task_...
cc/raster/task_graph_work_queue.cc:222: // If |running_tasks_for_category| is empty, remove it from |running_tasks|.
On 2016/02/10 20:49:32, reveman wrote:
> Is this necessary? Doesn't it cause some heap allocations that it would be
> better to avoid?

I guess it's a trade-off - we either do this here, or do more iteration later to ensure all categories are empty. Switching this, as I guess the heap allocation is worse than a quick read-only scan of the list.

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
File content/renderer/raster_worker_pool.cc (left):

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:349: has_ready_to_run_tasks_cv_.Broadcast();
On 2016/02/10 20:49:32, reveman wrote:
> Why is this no longer needed? Doesn't CompleteTask potentially result on more
> tasks being available to run?

See my comments above.

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
File content/renderer/raster_worker_pool.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:233: SignalHasReadyToRunTasksWithLockAcquired();
On 2016/02/10 20:49:32, reveman wrote:
> Is this necessary if we keep the signal after work_queue_.CompleteTask? See my
> other comment for this to make sense.

So, if we signal after completing a task, signaling here isn't necessary. However, if we signal after completing a task we can end up with duplicated signaling.

We can potentially signal at three points:
1. After dequeueing a task, but before running it.
2. After completing a task.
3. When there are no more tasks to run for the current category (this line).

If we signal at location (1) only, we won't wake up the background thread when foreground work finishes.

If we signal at location (2) only, we will wake up foreground threads more slowly than is ideal (we wait until after a task is done to wake up an additional thread).

If we signal at location (3) only, we only ever wake up one thread at a time.

To get correct behavior, we can signal at (1) and (3), or at (1) and (2).

If we do (1) and (3) - the current approach in this code - we wake up threads quickly by signaling at (1), and we correctly wake up background threads when foreground work finishes (3). There is no duplicate signaling.

If we do (1) and (2), we wake up threads quickly (1), and we wake up the background thread (2), but we also frequently double-signal. If multiple foreground tasks run in a row, we may end up signaling at location (1) even after we have just signaled at location (2), waking up two threads instead of the desired one.

Does this make sense?

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:375: bool RasterWorkerPool::ShouldRunTaskForCategoryWithLockAcquired(
On 2016/02/10 20:49:32, reveman wrote:
> Do you think merging this into SignalHasReadyToRunTasksWithLockAcquired would
> make for the most efficient and easiest to read code? If you'd like to use
> helper functions instead of having all code in
> SignalHasReadyToRunTasksWithLockAcquired() then can we remove the |category|
> parameter which seem to be known at compile time and instead have two separate
> helper functions? ie. ShouldRunForegroundTask + ShouldRunBackgroundTask. Maybe
> even move these helpers to anonymous namespace.

I meant to use this code in RunTasksWithLockAcquired - it was a bug that I wasn't (background tasks would not stop running) - that's why I factored it this way. It should make more sense now.

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:402: return work_queue_.HasReadyToRunTasksForCategory(category);
On 2016/02/10 20:49:32, reveman wrote:
> Maybe early out and avoid all the work above if this is not true?

Done.
lgtm % silly naming nits and a question

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
File content/renderer/raster_worker_pool.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:233: SignalHasReadyToRunTasksWithLockAcquired();
On 2016/02/10 at 22:30:04, ericrk wrote:
> [explanation of the three possible signaling points, quoted in full above]

Yes, the current code makes sense. Thanks for explaining!

https://chromiumcodereview.appspot.com/1666283002/diff/140001/content/rendere...
content/renderer/raster_worker_pool.cc:375: bool RasterWorkerPool::ShouldRunTaskForCategoryWithLockAcquired(
On 2016/02/10 at 22:30:04, ericrk wrote:
> Meant to use this code in RunTasksWithLockAcquired - it was a bug that I wasn't
> (background tasks would not stop running) - this is why I factored this way.
> Should make more sense now.

Got it. Looks good now.

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
File content/renderer/raster_worker_pool.cc (right):

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.cc:28: base::ConditionVariable* ready_to_run_tasks_cv)
nit: s/ready_to_run_tasks_cv/has_ready_to_run_tasks_cv/

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.cc:39: base::ConditionVariable* const ready_to_run_tasks_cv_;
nit: s/ready_to_run_tasks_cv_/has_ready_to_run_tasks_cv_/

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.cc:150: {
nit: is this scope needed?

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.cc:226: base::ConditionVariable* ready_to_run_tasks_cv) {
nit: s/ready_to_run_tasks_cv/has_ready_to_run_tasks_cv/

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.cc:353: SignalHasReadyToRunTasksWithLockAcquired();
Just to be sure: this can't result in us starting to run another NONCONCURRENT task even if we're just about to run one here, right?

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
File content/renderer/raster_worker_pool.h (right):

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.h:52: base::ConditionVariable* work_ready_cv);
nit: s/work_ready_cv/has_ready_to_run_tasks_cv/

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.h:106: // Helper function which signals the CV for the next task to run.
nit: "...signals the worker threads if tasks are ready to run"? Or maybe just s/CV/CVs/.

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.h:112: bool ShouldRunTaskForCategoryWithLockAcquired(cc::TaskCategory category);
nit: move these functions so the order and location matches the .cc file

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.h:137: base::ConditionVariable foreground_has_ready_to_run_tasks_cv_;
nit: s/foreground_has_ready_to_run_tasks_cv_/has_ready_to_run_foreground_tasks_cv_/

https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere...
content/renderer/raster_worker_pool.h:138: base::ConditionVariable background_has_ready_to_run_tasks_cv_;
nit: s/background_has_ready_to_run_tasks_cv_/has_ready_to_run_background_tasks_cv_/
Sievers, can you take a look for the content/ changes? Thanks! https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... File content/renderer/raster_worker_pool.cc (right): https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.cc:28: base::ConditionVariable* ready_to_run_tasks_cv) On 2016/02/10 23:25:09, reveman wrote: > nit: s/ready_to_run_tasks_cv/has_ready_to_run_tasks_cv/ Done. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.cc:39: base::ConditionVariable* const ready_to_run_tasks_cv_; On 2016/02/10 23:25:08, reveman wrote: > nit: s/ready_to_run_tasks_cv_/has_ready_to_run_tasks_cv_/ Done. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.cc:150: { On 2016/02/10 23:25:08, reveman wrote: > nit: is this scope needed? nope https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.cc:226: base::ConditionVariable* ready_to_run_tasks_cv) { On 2016/02/10 23:25:08, reveman wrote: > nit: s/ready_to_run_tasks_cv/has_ready_to_run_tasks_cv/ Done. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.cc:353: SignalHasReadyToRunTasksWithLockAcquired(); On 2016/02/10 23:25:08, reveman wrote: > Just to be sure. This can't result in us starting to run another NONCONCURRENT > task even if we're just about to run one here, right? Nope - GetNextTaskToRun pushes the task onto running_tasks, which is respected in the ShouldRunTasksForCategory... calls. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... File content/renderer/raster_worker_pool.h (right): https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... 
content/renderer/raster_worker_pool.h:52: base::ConditionVariable* work_ready_cv); On 2016/02/10 23:25:09, reveman wrote: > nit: s/work_ready_cv/has_ready_to_run_tasks_cv/ Done. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.h:106: // Helper function which signals the CV for the next task to run. On 2016/02/10 23:25:09, reveman wrote: > nit: "...signals the worker threads if tasks are ready to run"? or maybe just > s/CV/CVs/ Done. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.h:112: bool ShouldRunTaskForCategoryWithLockAcquired(cc::TaskCategory category); On 2016/02/10 23:25:09, reveman wrote: > nit: move these functions so the order and location matches the .cc file Done. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.h:137: base::ConditionVariable foreground_has_ready_to_run_tasks_cv_; On 2016/02/10 23:25:09, reveman wrote: > nit: > s/foreground_has_ready_to_run_tasks_cv_/has_ready_to_run_foreground_tasks_cv_/ Done. https://chromiumcodereview.appspot.com/1666283002/diff/160001/content/rendere... content/renderer/raster_worker_pool.h:138: base::ConditionVariable background_has_ready_to_run_tasks_cv_; On 2016/02/10 23:25:09, reveman wrote: > nit: > s/background_has_ready_to_run_tasks_cv_/has_ready_to_run_background_tasks_cv_/ Done.
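A minimal sketch of the gating described in the CL description (hypothetical signature; the real ShouldRunTaskForCategoryWithLockAcquired also consults the work queue and running-task bookkeeping): the background category may only run when no other thread is running anything.

```cpp
#include <cstddef>

// Illustrative task categories; the real enum lives in cc::TaskCategory.
enum class TaskCategory { kForeground, kBackground };

// Foreground tasks run whenever a worker is free; the background
// thread is throttled to run only when the pool is otherwise idle.
bool ShouldRunTaskForCategory(TaskCategory category,
                              size_t running_foreground_tasks,
                              size_t running_background_tasks) {
  if (category == TaskCategory::kBackground) {
    return running_foreground_tasks == 0 && running_background_tasks == 0;
  }
  return true;
}
```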
ericrk@chromium.org changed reviewers: + avi.rohit@gmail.com
+avi for content
lgtm
The CQ bit was checked by ericrk@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from reveman@chromium.org Link to the patchset: https://codereview.chromium.org/1666283002/#ps180001 (title: "feedback")
CQ is trying the patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1666283002/180001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1666283002/180001
Message was sent while issue was closed.
Description was changed.
Message was sent while issue was closed.
Committed patchset #9 (id:180001)
Message was sent while issue was closed.
A revert of this CL (patchset #9 id:180001) has been created in https://codereview.chromium.org/1690023005/ by skyostil@chromium.org. The reason for reverting is: Broke the thread_times.tough_scrolling_cases test (verified with a local revert): https://build.chromium.org/p/chromium.perf/builders/Linux%20Perf%20%281%29/bu....
Message was sent while issue was closed.
On 2016/02/12 17:12:44, Sami wrote: > A revert of this CL (patchset #9 id:180001) has been created in > https://codereview.chromium.org/1690023005/ by skyostil@chromium.org. > > The reason for reverting is: Broke the thread_times.tough_scrolling_cases test > (verified with a local revert): > > https://build.chromium.org/p/chromium.perf/builders/Linux%20Perf%20%281%29/bu.... Note that this also caused a whole bunch of crashes - see https://code.google.com/p/chromium/issues/detail?id=586447 for details.
Message was sent while issue was closed.
The issue which caused crashes was the following: when we go to schedule a task, we need to ensure that it is not already running - we do this by checking whether the task is present in running_tasks. With my change to split running_tasks up per category, I modified the logic to look for a running task with the same category as the one about to be scheduled. That isn't thorough enough, because the task being scheduled may have changed categories. I've updated TaskGraphWorkQueue to check all running tasks, not just those in the category of the newly scheduled task. I also reverted to using a flat list for running tasks, as the checks were getting incredibly verbose with the doubly-nested list of lists. Given that this list is capped at the number of running threads, I don't expect iterating it to be costly compared to a hash map lookup. Let me know how this looks.
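The fix described above can be sketched as follows (illustrative types, not the actual TaskGraphWorkQueue code): membership in the flat running-tasks list is checked by task identity rather than by category bucket, so a task that was re-scheduled under a new category is still detected as already running.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Stand-in for cc::Task; only identity matters for this check.
struct Task {
  int id;
};

// A running task paired with the category it was running under.
using CategorizedTask = std::pair<uint16_t, Task*>;

// Scan the whole flat list - NOT just the bucket for the task's new
// category - so a category change between schedules cannot hide a
// task that is still running.
bool IsRunning(const std::vector<CategorizedTask>& running_tasks,
               const Task* task) {
  return std::any_of(
      running_tasks.begin(), running_tasks.end(),
      [task](const CategorizedTask& t) { return t.second == task; });
}
```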
lgtm
The CQ bit was checked by ericrk@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from sievers@chromium.org Link to the patchset: https://codereview.chromium.org/1666283002/#ps200001 (title: "Fix detection of already running tasks when category changes")
CQ is trying the patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1666283002/200001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1666283002/200001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_chromium_rel_ng on tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
Message was sent while issue was closed.
Description was changed to append:

Committed: https://crrev.com/997a61d92c0cd68a9fb336adfba3abd8849215c6
Cr-Commit-Position: refs/heads/master@{#375069}
Message was sent while issue was closed.
Patchset 10 (id:??) landed as https://crrev.com/997a61d92c0cd68a9fb336adfba3abd8849215c6 Cr-Commit-Position: refs/heads/master@{#375069}
Message was sent while issue was closed.
Description was changed to remove the lines for the reverted commit:

Committed: https://crrev.com/997a61d92c0cd68a9fb336adfba3abd8849215c6
Cr-Commit-Position: refs/heads/master@{#375069}
The CQ bit was checked by ericrk@chromium.org
CQ is trying the patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1666283002/200001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1666283002/200001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: mac_chromium_rel_ng on tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...)
Patchset #11 (id:220001) has been deleted
Turns out the trybot said this was committed, but there was actually a failure and no commit happened :/ - not sure what caused this... Anyway, the failure was due to me missing the fact that PrioritizedTask::Vector does not refcount tasks, while Task::Vector does... Given that we already refcount tasks in all cases but the "ready_to_run" state, I just added refcounting to PrioritizedTask::Vector, which lets us use the more appropriate one as needed without a behavior difference. Could have also just added a new type (std::pair<uint16_t, scoped_refptr<Task>>), but this seemed like overkill. PTAL, thanks!
https://codereview.chromium.org/1666283002/diff/240001/cc/raster/task_graph_w... File cc/raster/task_graph_work_queue.h (right): https://codereview.chromium.org/1666283002/diff/240001/cc/raster/task_graph_w... cc/raster/task_graph_work_queue.h:40: scoped_refptr<Task> task; hm, not sure about this. I'm worried this will make ScheduleTasks much more expensive as increments/decrements of these thread-safe ref-counted tasks is very expensive compared to everything else done at ScheduleTasks time. Can we avoid this change?
good point - updated with a new type to avoid this... PTAL. https://codereview.chromium.org/1666283002/diff/240001/cc/raster/task_graph_w... File cc/raster/task_graph_work_queue.h (right): https://codereview.chromium.org/1666283002/diff/240001/cc/raster/task_graph_w... cc/raster/task_graph_work_queue.h:40: scoped_refptr<Task> task; On 2016/02/24 18:53:00, reveman wrote: > hm, not sure about this. I'm worried this will make ScheduleTasks much more > expensive as increments/decrements of these thread-safe ref-counted tasks is > very expensive compared to everything else done at ScheduleTasks time. Can we > avoid this change? Introduced a new type, CategorizedTask, which refcounts (as Task::Vector did), but is only used for running_tasks. This avoids unnecessary refcounting but adds a new type.
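A rough illustration of the trade-off, using std::shared_ptr as a stand-in for Chromium's thread-safe scoped_refptr (all names here are illustrative): the bulk ScheduleTasks path keeps raw pointers and incurs no atomic refcount traffic, while only the small running-tasks list - capped at the number of worker threads - takes references to keep a running task alive.

```cpp
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

// Stand-in for cc::Task; std::shared_ptr plays the role of
// scoped_refptr<Task> here.
struct Task {
  int id = 0;
};

// Used in bulk during ScheduleTasks: raw pointer, so no refcount
// increments/decrements on the hot scheduling path.
struct PrioritizedTask {
  uint16_t priority;
  Task* task;
};

// Used only for the running-tasks list: takes a reference so the task
// stays alive while it runs. The list is small (one entry per thread).
using CategorizedTask = std::pair<uint16_t, std::shared_ptr<Task>>;
```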
lgtm https://codereview.chromium.org/1666283002/diff/260001/cc/raster/task_graph_w... File cc/raster/task_graph_work_queue.h (right): https://codereview.chromium.org/1666283002/diff/260001/cc/raster/task_graph_w... cc/raster/task_graph_work_queue.h:156: count++; nit: ++count is typically preferred in chromium code
The CQ bit was checked by ericrk@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from sievers@chromium.org, reveman@chromium.org Link to the patchset: https://codereview.chromium.org/1666283002/#ps280001 (title: "x++ > ++x")
CQ is trying the patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/1666283002/280001 View timeline at https://chromium-cq-status.appspot.com/patch-timeline/1666283002/280001
https://codereview.chromium.org/1666283002/diff/260001/cc/raster/task_graph_w... File cc/raster/task_graph_work_queue.h (right): https://codereview.chromium.org/1666283002/diff/260001/cc/raster/task_graph_w... cc/raster/task_graph_work_queue.h:156: count++; On 2016/02/24 19:34:29, reveman wrote: > nit: ++count is typically preferred in chromium code k
Message was sent while issue was closed.
Description was changed.
Message was sent while issue was closed.
Committed patchset #13 (id:280001)
Message was sent while issue was closed.
Description was changed to append:

Committed: https://crrev.com/f064193fe71b8b055d060adb10fbc4e3cca8c604
Cr-Commit-Position: refs/heads/master@{#377379}
Message was sent while issue was closed.
Patchset 13 (id:??) landed as https://crrev.com/f064193fe71b8b055d060adb10fbc4e3cca8c604 Cr-Commit-Position: refs/heads/master@{#377379}
Message was sent while issue was closed.
ericwilligers@chromium.org changed reviewers: + ericwilligers@chromium.org
Message was sent while issue was closed.
https://codereview.chromium.org/1666283002/diff/280001/content/renderer/raste... File content/renderer/raster_worker_pool.cc (right): https://codereview.chromium.org/1666283002/diff/280001/content/renderer/raste... content/renderer/raster_worker_pool.cc:31: categories_(categories), Perhaps categories_(std::move(categories)),
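The suggested pattern, in a self-contained sketch (Pool is a stand-in class, not the actual RasterWorkerPool): when a constructor takes a container by value, moving it into the member avoids a second copy whenever the caller passes an rvalue.

```cpp
#include <string>
#include <utility>
#include <vector>

class Pool {
 public:
  // Take by value, then move into the member: at most one copy for
  // lvalue callers, zero copies for rvalue callers.
  explicit Pool(std::vector<std::string> categories)
      : categories_(std::move(categories)) {}

  const std::vector<std::string>& categories() const { return categories_; }

 private:
  std::vector<std::string> categories_;
};
```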
