Description

GpuVSyncProvider with unit test
This introduces a class for waiting for GPU VSync signals on a background thread.
For now the class isn't hooked up anywhere, but eventually it will be hooked up
to either GpuCommandBufferStub or PassThroughImageTransportSurface and will
replace the current VSyncProvider-based mechanism on Windows.
I verified the functionality of this code as part of the prototype
(where this class is called VSyncThread):
https://codereview.chromium.org/2555173003/
BUG=467617
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Review-Url: https://codereview.chromium.org/2596123002
Cr-Commit-Position: refs/heads/master@{#442386}
Committed: https://chromium.googlesource.com/chromium/src/+/a4002d66074c748ac4630dd9dd5414aa4ce06ec4
Patch Set 1
Patch Set 2: Fixed build failure on posix.
Total comments: 20
Patch Set 3: Addressed CR feedback
Patch Set 4: Self review.
Patch Set 5: Fixed win_clang build error
Total comments: 3
Patch Set 6: Optimized opening/closing adapter
Total comments: 2
Patch Set 7: Added comment for GpuVSyncWorker destructor
Messages
Total messages: 49 (32 generated)
Description was changed from ========== GpuVSyncProvider with unit test BUG= ========== to ========== GpuVSyncProvider with unit test BUG= CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ==========
The CQ bit was checked by stanisc@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: chromeos_daisy_chromium_compile_only_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromeos_daisy_...)
The CQ bit was checked by stanisc@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Description was changed from ========== GpuVSyncProvider with unit test BUG= CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ========== to ========== GpuVSyncProvider with unit test This introduces a class for waiting for GPU VSync signals on background thread. For now this class isn't hooked anywhere but eventually it is going to be hooked to either GpuCommandBufferStub or PassThroughImageTransportSurface and replace the current VSyncProvider based mechanism on Windows. I verified functionality of this code as a part of the prototype (where this class is called VSyncThread): https://codereview.chromium.org/2555173003/ BUG= CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ==========
Description was changed from ========== GpuVSyncProvider with unit test This introduces a class for waiting for GPU VSync signals on background thread. For now this class isn't hooked anywhere but eventually it is going to be hooked to either GpuCommandBufferStub or PassThroughImageTransportSurface and replace the current VSyncProvider based mechanism on Windows. I verified functionality of this code as a part of the prototype (where this class is called VSyncThread): https://codereview.chromium.org/2555173003/ BUG= CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ========== to ========== GpuVSyncProvider with unit test This introduces a class for waiting for GPU VSync signals on background thread. For now this class isn't hooked anywhere but eventually it is going to be hooked to either GpuCommandBufferStub or PassThroughImageTransportSurface and replace the current VSyncProvider based mechanism on Windows. I verified functionality of this code as a part of the prototype (where this class is called VSyncThread): https://codereview.chromium.org/2555173003/ BUG=467617 CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ==========
stanisc@chromium.org changed reviewers: + brucedawson@chromium.org, jbauman@chromium.org
PTAL! https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider.h (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:18: class GPU_EXPORT GpuVSyncProvider : public base::Thread { Not sure about the class name because it is similar to VSyncProvider, which is related but provides a slightly different kind of data. Feel free to suggest a better name. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:42: // This can be set of main thread only and accessed from both threads. Need to replace of -> on. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:174: TRACE_EVENT0("gpu", "GpuVSyncProvider::SendVSync"); Need to change this to SendVSyncUpdate
A few thoughts. Swing by if you want details. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider.h (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:18: class GPU_EXPORT GpuVSyncProvider : public base::Thread { On 2016/12/22 21:35:08, stanisc wrote: > Not sure about the class name because it is similar to VSyncProvider which is > related but provides slightly different kind of data. Feel free to suggest a > better name. Well I like the name... https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:43: volatile bool enabled_ = false; Shouldn't need volatile. volatile does tend to 'help' but it actually has no defined meaning in multi-threaded code so it should usually be avoided. In general you should prefer locks. Any data that is referenced on multiple threads should be protected by a lock even if it "seems" safe. Otherwise you are hitting C++ undefined behavior and it's very difficult to reason about what guarantees you have. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:45: // This can be set on background thread only. Can it be read on other threads? If so then it needs to be behind a lock. If it can only be referenced on the background thread then change the comment to say that, to make it clear that no lock is needed. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider_unittest_win.cc (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_unittest_win.cc:36: volatile LONG vsync_count_ = 0; Volatile isn't needed here. What makes this safe to access from multiple threads is the use of Interlocked operations. 
The non-interlocked accesses to this variable are theoretically dangerous. Safer thing to do is to use a lock to protect vsync_count_. Yes, it feels like overkill, but it is the only way to avoid a host of subtle and undefined behavior. TL;DR - use a lock to protect all accesses to vsync_count_ and get rid of volatile and Interlocked*. Safer, and more portable. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:92: void GpuVSyncProvider::WaitForVSyncOnThread() { Do we really want this to do the whole init/de-init dance for every vblank? I would expect that we would set up all of the state and then reuse it. Is there a need (monitor changes perhaps) for redoing all of the work? Should at least be able to cache the function pointers. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:97: // TODO: implement this I don't understand this comment.
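Bruce's lock-based suggestion for the test's counter can be sketched with standard C++ primitives; this is a minimal sketch, with std::mutex standing in for Chromium's base::Lock, and the class name is illustrative rather than taken from the patch:

```cpp
#include <mutex>

// Sketch of the lock-protected counter: every access, read or write,
// takes the same lock, so neither volatile nor Interlocked* operations
// are needed, and the code stays portable.
class VSyncCounter {
 public:
  void Increment() {
    std::lock_guard<std::mutex> hold(lock_);
    ++count_;
  }

  int GetCount() const {
    std::lock_guard<std::mutex> hold(lock_);
    return count_;
  }

 private:
  mutable std::mutex lock_;
  int count_ = 0;  // Guarded by |lock_|.
};
```

Because the lock is held only for a single increment or read, contention between the vsync thread and the test thread is negligible, which is the point made above about the overhead being trivial.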
https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider.h (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:43: volatile bool enabled_ = false; On 2016/12/22 21:49:16, brucedawson wrote: > Shouldn't need volatile. volatile does tend to 'help' but it actually has no > defined meaning in multi-threaded code so it should usually be avoided. In > general you should prefer locks. Any data that is referenced on multiple threads > should be protected by a lock even if it "seems" safe. Otherwise you are hitting > C++ undefined behavior and it's very difficult to reason about what guarantees > you have. I had a lock initially but was concerned it could result in priority inversion, although the earlier version of the code was more complex with more cross-thread interaction. This is the pattern where enabled is flipped only on the main thread but checked on the background thread. It doesn't need to protect any data and is used just to control execution. Perhaps I should replace it with Interlocked functions. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:45: // This can be set on background thread only. On 2016/12/22 21:49:16, brucedawson wrote: > Can it be read on other threads? If so then it needs to be behind a lock. > > If it can only be referenced on the background thread then change the comment to > say that, to make it clear that no lock is needed. No, this is accessed on the background thread only. I'll change the comment to reflect that. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... 
gpu/ipc/service/gpu_vsync_provider_win.cc:92: void GpuVSyncProvider::WaitForVSyncOnThread() { On 2016/12/22 21:49:16, brucedawson wrote: > Do we really want this to do the whole init/de-init dance for every vblank? > > I would expect that we would set up all of the state and then reuse it. Is there > a need (monitor changes perhaps) for redoing all of the work? > > Should at least be able to cache the function pointers. One reason I've done it this way is that I wanted to avoid leaking these implementation details into the header file. These function pointers are Windows-specific and the header file is used on all platforms. It seems that getting these function pointers every time isn't a big deal - I profiled this code and its CPU cost was fairly negligible. I agree that I could reuse at least the function pointers. What I could do is move the background worker implementation into a separate class defined locally in this file. I think that is a good idea. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:97: // TODO: implement this On 2016/12/22 21:49:16, brucedawson wrote: > I don't understand this comment. Forgot to remove this one. Will remove it in the next change.
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider.h (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:43: volatile bool enabled_ = false; A few possibilities... 1) Use a lock, see if it works. Because you are locking very briefly and with very little contention the overhead should be trivial. 2) Look in base\atomicops.h, and use its types and functions. There are some comments there that discourage use of these functions but those comments apply equally to using Interlocked* directly. Option #2 is not unreasonable, but you don't need to mark your variables as volatile. Just always use the functions to read/write the data. Acquire_Load or NoBarrier_Load, for instance, instead of a raw read. Mixing normal reads/writes of a variable with atomic reads/writes is when things get ill-defined.
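Option #2 can be sketched with std::atomic, the standard-library successor to base/atomicops; this is an illustrative sketch rather than the patch's actual code, and the class and method names are made up:

```cpp
#include <atomic>

// The flag itself is not volatile; instead *every* access goes through
// an atomic load or store, never a raw read or write, which avoids the
// ill-defined mixing of normal and atomic accesses described above.
class VSyncEnabledFlag {
 public:
  // Called on the main thread to start or stop vsync production.
  void SetEnabled(bool enabled) {
    enabled_.store(enabled, std::memory_order_release);
  }

  // Checked on the background thread before each wait.
  bool IsEnabled() const {
    return enabled_.load(std::memory_order_acquire);
  }

 private:
  std::atomic<bool> enabled_{false};
};
```

The release store paired with the acquire load plays the role of base/atomicops' Release_Store/Acquire_Load pair.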
The CQ bit was checked by stanisc@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Addressed Bruce's feedback. Please take a look. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider.h (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:42: // This can be set of main thread only and accessed from both threads. On 2016/12/22 21:35:08, stanisc wrote: > Need to replace of -> on. Done. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:43: volatile bool enabled_ = false; On 2016/12/23 00:37:30, brucedawson wrote: > A few possibilities... > > 1) Use a lock, see if it works. Because you are locking very briefly and with > very little contention the overhead should be trivial. > > 2) Look in base\atomicops.h, and use its types and functions. There are some > comments their that discourage use of these functions but those comments apply > equally to using Interlocked* directly. > > Option #2 is not unreasonable, but you don't need to mark your variables as > volatile. Just always use the functions to read/write the data. Acquire_Load or > NoBarrier_Load, for instance, instead of a raw read. Mixing normal reads/writes > of a variable with atomics reads/writes is when things get ill-defined. Done. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider.h:45: // This can be set on background thread only. On 2016/12/22 22:59:31, stanisc wrote: > On 2016/12/22 21:49:16, brucedawson wrote: > > Can it be read on other threads? If so then it needs to be behind a lock. > > > > If it can only be referenced on the background thread then change the comment > to > > say that, to make it clear that no lock is needed. > > No, this is accessed on background thread only. I'll change the comment to > reflect that. Done. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... 
File gpu/ipc/service/gpu_vsync_provider_unittest_win.cc (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_unittest_win.cc:36: volatile LONG vsync_count_ = 0; On 2016/12/22 21:49:16, brucedawson wrote: > Volatile isn't needed here. What makes this safe to access from multiple threads > is the use of Interlocked operations. The non-interlocked accesses to this > variable are theoretically dangerous. > > Safer thing to do is to use a lock to protect vsync_count_. Yes, it feels like > overkill, but it is the only way to avoid a host of subtle and undefined > behavior. > > TL;DR - use a lock to protect all accesses to vsync_count_ and get rid of > volatile and Interlocked*. Safer, and more portable. Done. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:92: void GpuVSyncProvider::WaitForVSyncOnThread() { On 2016/12/22 21:49:16, brucedawson wrote: > Do we really want this to do the whole init/de-init dance for every vblank? > > I would expect that we would set up all of the state and then reuse it. Is there > a need (monitor changes perhaps) for redoing all of the work? > > Should at least be able to cache the function pointers. OK. Cached the function pointers, but kept opening/closing the adapter every time. I think it's not worth the extra complexity considering this is called only 60 times per second. It doesn't look like these open/close calls are expensive. https://codereview.chromium.org/2596123002/diff/20001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:97: // TODO: implement this On 2016/12/22 22:59:31, stanisc wrote: > On 2016/12/22 21:49:16, brucedawson wrote: > > I don't understand this comment. > > Forgot to remove this one. Will remove it in the next change. Done.
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_clang on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_clang/builds/...)
The CQ bit was checked by stanisc@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
lgtm https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:152: NTSTATUS result = open_adapter_from_hdc_ptr_(&open_adapter_data); I guess we'll see how the performance of this looks. OpenAdapter definitely has to do some driver work in the kernel but at least it's not as expensive as creating a device.
https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:152: NTSTATUS result = open_adapter_from_hdc_ptr_(&open_adapter_data); On 2017/01/06 03:25:50, jbauman wrote: > I guess we'll see how the performance of this looks. OpenAdapter definitely has > to do some driver work in the kernel but at least it's not as expensive as > creating a device. I did some tracing and this didn't look like a noticeable cost. I guess if this becomes a problem I could add logic where monitor_info.szDevice is compared to the previous one and the adapter is opened/closed only when the device name changes.
lgtm with one request https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... gpu/ipc/service/gpu_vsync_provider_win.cc:152: NTSTATUS result = open_adapter_from_hdc_ptr_(&open_adapter_data); On 2017/01/06 21:35:51, stanisc wrote: > On 2017/01/06 03:25:50, jbauman wrote: > > I guess we'll see how the performance of this looks. OpenAdapter definitely > has > > to do some driver work in the kernel but at least it's not as expensive as > > creating a device. > > I did some tracing and this didn't look like a noticeable cost. I guess if this > becomes a problem I could add a logic where monitor_info.szDevice is compared to > the previous one and the adapter is opened/closed only when the device name > changes. Maybe do some timing measurements on a few machines and add a comment to quantify what you're seeing, if only to save future readers from wondering what counts as "noticeable cost". I guess the thing to measure would be the entire init sequence, from the beginning of this function to where you call wait_for_vertical_blank_event_ptr. That is, add a comment saying the cost in microseconds you are seeing on one or more machines.
On 2017/01/06 21:43:52, brucedawson wrote: > lgtm with one request > > https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... > File gpu/ipc/service/gpu_vsync_provider_win.cc (right): > > https://codereview.chromium.org/2596123002/diff/80001/gpu/ipc/service/gpu_vsy... > gpu/ipc/service/gpu_vsync_provider_win.cc:152: NTSTATUS result = > open_adapter_from_hdc_ptr_(&open_adapter_data); > On 2017/01/06 21:35:51, stanisc wrote: > > On 2017/01/06 03:25:50, jbauman wrote: > > > I guess we'll see how the performance of this looks. OpenAdapter definitely > > has > > > to do some driver work in the kernel but at least it's not as expensive as > > > creating a device. > > > > I did some tracing and this didn't look like a noticeable cost. I guess if > this > > becomes a problem I could add a logic where monitor_info.szDevice is compared > to > > the previous one and the adapter is opened/closed only when the device name > > changes. > > Maybe do some timing measurements on a few machines and add a comment to > quantify what you're seeing, if only to save future readers from wondering what > counts as "noticeable cost". I guess the thing to measure would be the entire > init sequence, from the beginning of this function to where you call > wait_for_vertical_blank_event_ptr. > > That is, add a comment saying the cost in microseconds you are seeing on one or > more machines. OK, will add tracing specifically for this part of code and report back results.
The average CPU cost of the code block that creates DC, opens adapter, and closes DC is 0.133 ms on my desktop and 0.102 ms on my test laptop. That corresponds to 0.6-0.8% of one CPU core usage, so I guess it is worth optimizing.
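The kind of per-block measurement reported above can be sketched with std::chrono; this is a generic stand-in for the TRACE_EVENT-based tracing the CL actually used, and the helper name is made up for this sketch:

```cpp
#include <chrono>
#include <utility>

// Runs |block| and returns its wall-clock cost in microseconds -
// roughly what a trace event wrapped around the CreateDC / OpenAdapter
// / DeleteDC sequence would report.
template <typename Func>
long long MeasureMicroseconds(Func&& block) {
  auto start = std::chrono::steady_clock::now();
  std::forward<Func>(block)();
  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration_cast<std::chrono::microseconds>(end - start)
      .count();
}
```

A steady (monotonic) clock is used rather than the wall clock so that the measurement is immune to system time adjustments.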
The CQ bit was checked by stanisc@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Here is the version with adapter handle caching that avoids opening and closing the adapter unless the device name has changed.
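The caching scheme described above can be sketched in portable C++; this is an illustrative sketch only, in which the open/close callables and the int handle stand in for the D3DKMT adapter calls and D3DKMT_HANDLE, and all names are made up rather than taken from the patch:

```cpp
#include <functional>
#include <string>
#include <utility>

// The adapter is reopened only when the monitor's device name
// (monitor_info.szDevice in the real code) changes between calls.
class AdapterCache {
 public:
  using OpenFn = std::function<int(const std::string& device_name)>;
  using CloseFn = std::function<void(int handle)>;

  AdapterCache(OpenFn open, CloseFn close)
      : open_(std::move(open)), close_(std::move(close)) {}
  ~AdapterCache() { CloseCurrent(); }

  // Returns a handle for |device_name|, reusing the cached handle when
  // the name is unchanged since the previous call.
  int GetAdapter(const std::string& device_name) {
    if (device_name != cached_name_) {
      CloseCurrent();
      handle_ = open_(device_name);
      cached_name_ = device_name;
    }
    return handle_;
  }

 private:
  void CloseCurrent() {
    if (!cached_name_.empty()) {
      close_(handle_);
      cached_name_.clear();
    }
  }

  OpenFn open_;
  CloseFn close_;
  std::string cached_name_;  // Empty when no adapter is open.
  int handle_ = 0;
};
```

In the common single-monitor case the open call then happens once, instead of 60 times per second.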
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
On 2017/01/07 01:53:45, stanisc wrote: > Here is the version with adapter handle caching that avoids opening and closing > the adapter unless the device name has changed. Do you want to do another round of review? Otherwise I'll commit this in an hour or so.
One area of concern - can you comment? https://codereview.chromium.org/2596123002/diff/100001/gpu/ipc/service/gpu_vs... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/100001/gpu/ipc/service/gpu_vs... gpu/ipc/service/gpu_vsync_provider_win.cc:125: base::Unretained(this))); This looks dangerous. CloseAdapter depends on the GpuVSyncWorker still existing so you can't schedule a task to call it during the destructor. I would have expected a direct call to CloseAdapter here. If that is not legal, then it is better to leak the adapter data.
https://codereview.chromium.org/2596123002/diff/100001/gpu/ipc/service/gpu_vs... File gpu/ipc/service/gpu_vsync_provider_win.cc (right): https://codereview.chromium.org/2596123002/diff/100001/gpu/ipc/service/gpu_vs... gpu/ipc/service/gpu_vsync_provider_win.cc:125: base::Unretained(this))); On 2017/01/09 18:51:54, brucedawson wrote: > This looks dangerous. CloseAdapter depends on the GPUVSyncWorker still existing > so you can't schedule task to call it during the destructor. > > If would have expected a direct call to CloseAdapter here. If that is not legal > then it is better to leak the adapter data. This should be safe because Thread::Stop posts a "Quit" task and waits for it to run, which means that this task must be done before Thread::Stop returns. I've added a comment and a couple of DCHECKs to make it clear.
Perfect. Thanks for the explanation. lgtm
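The ordering guarantee relied on in that explanation can be modeled with a minimal task-queue thread; this is an illustrative sketch with std::thread standing in for base::Thread, not Chromium code:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Stop() enqueues a quit marker and joins, so any task posted earlier
// (such as the CloseAdapter call posted from the destructor) is
// guaranteed to run before Stop() returns. Stop() must be called
// exactly once, before the object is destroyed.
class TaskThread {
 public:
  TaskThread() : thread_([this] { Run(); }) {}

  void PostTask(std::function<void()> task) {
    std::lock_guard<std::mutex> hold(lock_);
    tasks_.push(std::move(task));
    wake_.notify_one();
  }

  // Posts the quit marker and waits until every earlier task has run.
  void Stop() {
    PostTask(std::function<void()>());  // Empty task == quit marker.
    thread_.join();
  }

 private:
  void Run() {
    for (;;) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> hold(lock_);
        wake_.wait(hold, [this] { return !tasks_.empty(); });
        task = std::move(tasks_.front());
        tasks_.pop();
      }
      if (!task)
        return;  // Quit marker: all tasks posted before it have run.
      task();
    }
  }

  std::mutex lock_;
  std::condition_variable wake_;
  std::queue<std::function<void()>> tasks_;
  std::thread thread_;  // Last member: started after the state above.
};
```

Because the queue is FIFO, the quit marker cannot overtake a previously posted task, which is exactly why the posted cleanup call is safe here.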
The CQ bit was checked by stanisc@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by stanisc@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from jbauman@chromium.org Link to the patchset: https://codereview.chromium.org/2596123002/#ps120001 (title: "Added comment for GpuVSyncWorker destructor")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 120001, "attempt_start_ts": 1484000952750410, "parent_rev": "625e34af3f15b18fced1c4d4affdc1b6a579ba44", "commit_rev": "a4002d66074c748ac4630dd9dd5414aa4ce06ec4"}
Message was sent while issue was closed.
Description was changed from ========== GpuVSyncProvider with unit test This introduces a class for waiting for GPU VSync signals on background thread. For now this class isn't hooked anywhere but eventually it is going to be hooked to either GpuCommandBufferStub or PassThroughImageTransportSurface and replace the current VSyncProvider based mechanism on Windows. I verified functionality of this code as a part of the prototype (where this class is called VSyncThread): https://codereview.chromium.org/2555173003/ BUG=467617 CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ========== to ========== GpuVSyncProvider with unit test This introduces a class for waiting for GPU VSync signals on background thread. For now this class isn't hooked anywhere but eventually it is going to be hooked to either GpuCommandBufferStub or PassThroughImageTransportSurface and replace the current VSyncProvider based mechanism on Windows. I verified functionality of this code as a part of the prototype (where this class is called VSyncThread): https://codereview.chromium.org/2555173003/ BUG=467617 CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel Review-Url: https://codereview.chromium.org/2596123002 Cr-Commit-Position: refs/heads/master@{#442386} Committed: https://chromium.googlesource.com/chromium/src/+/a4002d66074c748ac4630dd9dd54... ==========
Message was sent while issue was closed.
Committed patchset #7 (id:120001) as https://chromium.googlesource.com/chromium/src/+/a4002d66074c748ac4630dd9dd54...