Chromium Code Reviews

Issue 2257533006: Free worker context resources on idle. (Closed)

Created:
4 years, 4 months ago by ericrk
Modified:
4 years, 3 months ago
Reviewers:
danakj
CC:
chromium-reviews, mlamouri+watch-content@chromium.org, cc-bugs@chromium.org, jam, darin-cc@chromium.org, piman+watch@chromium.org
Base URL:
https://chromium.googlesource.com/chromium/src.git@fix-cleanup2
Target Ref:
refs/pending/heads/master
Project:
chromium
Visibility:
Public.

Description

Free worker context resources on idle.

Allows an idle callback to be set on the worker context, and gives clients of the context a way to notify the context that it is busy or idle. RenderThreadImpl now sets a callback which cleans up Skia and command buffer resources once the context has been idle for more than one second.

BUG=624630
CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_precise_blink_rel
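For illustration only (not part of the change itself): a minimal standalone sketch of the busy/idle pattern described above, in plain C++ with made-up names; the real plumbing goes through ContextSupport and RenderThreadImpl.

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// Stand-in for the worker context: clients report busy/idle transitions and a
// cleanup callback runs once the context has been idle for about a second.
class IdleNotifier {
 public:
  explicit IdleNotifier(std::function<void()> on_idle)
      : on_idle_(std::move(on_idle)), watcher_([this] { Watch(); }) {}

  ~IdleNotifier() {
    {
      std::lock_guard<std::mutex> lock(mu_);
      shutting_down_ = true;
    }
    cv_.notify_all();
    watcher_.join();
  }

  void ClientBecameBusy() {
    std::lock_guard<std::mutex> lock(mu_);
    ++busy_clients_;
  }

  void ClientBecameNotBusy() {
    {
      std::lock_guard<std::mutex> lock(mu_);
      --busy_clients_;
      last_idle_ = Clock::now();
      idle_handled_ = false;
    }
    cv_.notify_all();
  }

 private:
  using Clock = std::chrono::steady_clock;

  void Watch() {
    std::unique_lock<std::mutex> lock(mu_);
    while (!shutting_down_) {
      if (busy_clients_ == 0 && !idle_handled_) {
        Clock::time_point deadline = last_idle_ + std::chrono::seconds(1);
        if (Clock::now() >= deadline) {
          idle_handled_ = true;
          on_idle_();  // e.g. free Skia and command buffer resources.
        } else {
          cv_.wait_until(lock, deadline);
        }
      } else {
        cv_.wait(lock);  // Nothing pending; sleep until the next transition.
      }
    }
  }

  std::function<void()> on_idle_;
  std::mutex mu_;
  std::condition_variable cv_;
  int busy_clients_ = 0;
  Clock::time_point last_idle_ = Clock::now();
  bool idle_handled_ = true;  // Nothing to clean up until someone was busy.
  bool shutting_down_ = false;
  std::thread watcher_;  // Declared last so it starts after the other members.
};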

Patch Set 1 #

Patch Set 2 : rebase #

Patch Set 3 : rebase #

Patch Set 4 : cleanup #

Patch Set 5 : provide handle #

Patch Set 6 : fixes #

Total comments: 11

Patch Set 7 : rebase #

Stats: +193 lines, -18 lines

M cc/BUILD.gn                                                  +1 line,  -0 lines
M cc/output/context_provider.h                                 +5 lines, -11 lines
M cc/test/test_context_support.h                               +4 lines,  -0 lines
M cc/test/test_context_support.cc                             +21 lines,  -3 lines
M content/renderer/render_thread_impl.cc                      +26 lines,  -0 lines
M gpu/command_buffer/client/context_support.h                  +7 lines,  -0 lines
M gpu/command_buffer/client/gles2_implementation.h            +16 lines,  -0 lines
M gpu/command_buffer/client/gles2_implementation.cc           +65 lines,  -3 lines
M gpu/command_buffer/client/gles2_implementation_unittest.cc  +48 lines,  -1 line

Messages

Total messages: 12 (9 generated)
ericrk
Here's the "idle" counterpart to the previous "visble" cl. A bit more complex as we ...
4 years, 4 months ago (2016-08-19 20:46:28 UTC) #5
danakj
https://codereview.chromium.org/2257533006/diff/90001/cc/output/context_provider.cc File cc/output/context_provider.cc (right): https://codereview.chromium.org/2257533006/diff/90001/cc/output/context_provider.cc#newcode15 cc/output/context_provider.cc:15: context_provider_->DetachFromThread(); You don't need to do this before ClientBecameBusy? ...
4 years, 4 months ago (2016-08-23 01:26:25 UTC) #9
ericrk
4 years, 3 months ago (2016-08-24 18:32:21 UTC) #10
Thanks for the feedback! A few questions:

https://codereview.chromium.org/2257533006/diff/90001/cc/output/context_provi...
File cc/output/context_provider.cc (right):

https://codereview.chromium.org/2257533006/diff/90001/cc/output/context_provi...
cc/output/context_provider.cc:15: context_provider_->DetachFromThread();
On 2016/08/23 01:26:24, danakj wrote:
> You don't need to do this before ClientBecameBusy?

The detach/attach is only necessary before access to the GL context itself. For
helper functions like the busy handler, this shouldn't matter (as long as we
have the lock acquired), but I'm happy to move it after if that would keep
things more understandable. WDYT?
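To make the ordering concrete, here's roughly what I mean (a generic sketch with made-up types, not the actual code in this CL):

#include <mutex>

// Made-up stand-in for the shared worker context provider.
struct WorkerContext {
  std::mutex lock;
  void ClientBecameBusy() {}     // Lock-guarded bookkeeping only, no GL.
  void ClientBecameNotBusy() {}  // Lets the idle-cleanup timer start.
  void DetachFromThread() {}     // Thread-checker handoff, needed before GL.
};

// Scoped helper: the busy notification can safely run before the detach,
// because it never touches GL and the lock is already held; the detach only
// matters once we actually start issuing GL work.
class ScopedContextUse {
 public:
  explicit ScopedContextUse(WorkerContext* context)
      : context_(context), hold_(context->lock) {
    context_->ClientBecameBusy();   // Fine here: we already hold the lock.
    context_->DetachFromThread();   // Only required before real GL access.
  }
  ~ScopedContextUse() { context_->ClientBecameNotBusy(); }

 private:
  WorkerContext* context_;
  std::lock_guard<std::mutex> hold_;
};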

https://codereview.chromium.org/2257533006/diff/90001/content/renderer/render...
File content/renderer/render_thread_impl.cc (right):

https://codereview.chromium.org/2257533006/diff/90001/content/renderer/render...
content/renderer/render_thread_impl.cc:2156: base::ThreadTaskRunnerHandle::Get());
On 2016/08/23 01:26:24, danakj wrote:
> what about just making it use the task runner the context is bound to? thats
> how the callbacks on ContextProvider work

The current callback system always funnels through the Context/GpuControl.
Depending on the implementation, the Context/GpuControl may not have a task
runner (and may run callbacks directly), although typically it does have one.

It seems like we could:
 - Expose the task runner on GpuControl, and allow it to return null (rough
sketch below).
 - Add a "ScheduleTask" and "ScheduleDelayedTask" to GpuControl, but at that
point we might as well just return the task runner.
 - Move the idle callback to GpuControl/Context, although this would require
some new logic in each implementation of GpuControl/Context (similar to our
other callbacks, I guess?).
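For the first option, I'm imagining something along these lines (generic sketch; GetCallbackTaskRunner is hypothetical, not an existing GpuControl method):

#include <functional>

// Stand-in for base::SingleThreadTaskRunner.
struct TaskRunner {
  virtual ~TaskRunner() = default;
  virtual void PostTask(std::function<void()> task) = 0;
};

// Stand-in for GpuControl with a possibly-null task runner accessor.
struct GpuControlLike {
  virtual ~GpuControlLike() = default;
  // Returns null when the implementation runs callbacks directly.
  virtual TaskRunner* GetCallbackTaskRunner() { return nullptr; }
};

void RunOrPost(GpuControlLike* gpu_control, std::function<void()> callback) {
  if (TaskRunner* runner = gpu_control->GetCallbackTaskRunner())
    runner->PostTask(std::move(callback));  // Bound thread is available.
  else
    callback();                             // No task runner: run in place.
}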

https://codereview.chromium.org/2257533006/diff/90001/gpu/command_buffer/clie...
File gpu/command_buffer/client/gles2_implementation.cc (right):

https://codereview.chromium.org/2257533006/diff/90001/gpu/command_buffer/clie...
gpu/command_buffer/client/gles2_implementation.cc:5926: // We can only modify |pending_idle_callback_| from the provided callback
On 2016/08/23 01:26:25, danakj wrote:
> Its kinda awk if you end up running the cancel right after the callback
> because of posting order.
> 
> What if you used a generation id like "scheduled idle 5" and "cancelled idle
> 5" now you don't run 5. but if scheduled idle != cancelled idle then you let
> the posted cleanup run? or something like that?

It's actually a little bit tricky to avoid races like this. The problem has to
do with safely accessing the state related to the callback. 

Assuming this is a locking context, at the point when we call
ClientBecameBusy/ClientBecameNotBusy, the caller already holds the context
lock. However, at the point when the delayed task runs, the lock is no longer
held. This means we can't safely access any mutable state that might be in use
by another thread.

There are two ways to work around this:
 1) Ensure that any mutable state which affects the callback is *only* modified
from a single thread. This is what we currently do: the cancelable callback is
the "mutable state", and it is only scheduled/canceled/run on the callback
thread.
 2) Take an additional lock before modifying any callback-related mutable state.

As mentioned, the code I initially wrote uses option (1). We could move to a
callback generation id, but modification of this ID would have to happen with
the same restrictions, using either (1) or (2).

If we use a generation id with option (1), we still have the same race where a
cancel can happen right after the callback is run, as we have to post generation
updates to the callback thread.

If we use a generation id with option (2), we need a new lock which is held
around modifications of the generation id, and this is still a bit racy.
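For reference, a generation id with option (2) would look roughly like this (generic sketch, names made up); the extra lock is the part I'd rather avoid:

#include <cstdint>
#include <mutex>

// Option (2): a dedicated lock guards the callback-related mutable state, so
// it can be touched both from the busy/idle notifications (which hold the
// context lock) and from the posted cleanup task (which does not).
class IdleCallbackGenerations {
 public:
  uint32_t Schedule() {
    std::lock_guard<std::mutex> guard(callback_lock_);
    return ++scheduled_;
  }
  void Cancel() {
    std::lock_guard<std::mutex> guard(callback_lock_);
    cancelled_ = scheduled_;
  }
  bool ShouldRun(uint32_t generation) {  // Called from the posted task.
    std::lock_guard<std::mutex> guard(callback_lock_);
    return generation > cancelled_;
  }

 private:
  std::mutex callback_lock_;  // Separate from the context lock.
  uint32_t scheduled_ = 0;
  uint32_t cancelled_ = 0;
};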

The only way I can think of to minimize this race without doing (1) or (2) is to
add an additional "ShouldRunIdleCallback(uint32_t idle_callback_id)" on
ContextSupport. The external callback code could then call this after acquiring
the context lock, and early out if it returns false. This would ensure that the
generation id is only checked/modified while the context lock is held. However,
this is pretty roundabout and puts an additional checking burden on the
callback.
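Concretely, that would put the check on the callback side, something like this (ShouldRunIdleCallback is hypothetical, and these are generic stand-in types):

#include <cstdint>
#include <mutex>

// Stand-in for ContextSupport with the hypothetical ShouldRunIdleCallback.
struct ContextSupportLike {
  std::mutex context_lock;
  uint32_t scheduled_idle_generation = 0;
  uint32_t cancelled_idle_generation = 0;

  // All three must be called with context_lock held.
  uint32_t ScheduleIdleCallback() { return ++scheduled_idle_generation; }
  void CancelIdleCallback() { cancelled_idle_generation = scheduled_idle_generation; }
  bool ShouldRunIdleCallback(uint32_t idle_callback_id) const {
    return idle_callback_id > cancelled_idle_generation;
  }
};

// What the external (posted) callback code would have to do: re-acquire the
// context lock, ask whether its generation is still current, and early out if
// it was cancelled while the task was in flight.
void MaybeRunIdleCleanup(ContextSupportLike* support, uint32_t idle_callback_id) {
  std::lock_guard<std::mutex> hold(support->context_lock);
  if (!support->ShouldRunIdleCallback(idle_callback_id))
    return;
  // ... free Skia / command buffer resources under the lock ...
}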

WDYT?

https://codereview.chromium.org/2257533006/diff/90001/gpu/command_buffer/clie...
File gpu/command_buffer/client/gles2_implementation.h (right):

https://codereview.chromium.org/2257533006/diff/90001/gpu/command_buffer/clie...
gpu/command_buffer/client/gles2_implementation.h:837: std::unique_ptr<base::CancelableClosure> pending_idle_callback_;
On 2016/08/23 01:26:25, danakj wrote:
> why unique_ptr it?

You must create a CancelableClosure on the same thread you call cancel on, so
this lets us delay creation until we are on the idle_callback_task_runner_.
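Very roughly, the shape is (generic stand-ins, not base::CancelableClosure itself):

#include <functional>
#include <memory>

// Stand-in for base::CancelableClosure: must be created, cancelled and run on
// the same thread.
class CancelableTask {
 public:
  explicit CancelableTask(std::function<void()> task) : task_(std::move(task)) {}
  void Cancel() { task_ = nullptr; }
  void Run() { if (task_) task_(); }
 private:
  std::function<void()> task_;
};

class IdleCallbackOwner {
 public:
  // Runs on the callback thread, so the cancelable is constructed lazily
  // there; that is why the member is a unique_ptr rather than a plain member.
  void ScheduleOnCallbackThread(std::function<void()> cleanup) {
    pending_ = std::make_unique<CancelableTask>(std::move(cleanup));
  }
  void CancelOnCallbackThread() {
    if (pending_) pending_->Cancel();
  }
 private:
  std::unique_ptr<CancelableTask> pending_;  // Only touched on that thread.
};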

This can be reworked if we go with a generation_id_; see my comments on
gles2_implementation.cc.
