Created: 4 years, 1 month ago by sadrul
Modified: 4 years ago
CC: Aaron Boodman, abarth-chromium, chromium-reviews, darin (slow to review), qsr+mojo_chromium.org, rjkroege, viettrungluu+watch_chromium.org, yzshen+watch_chromium.org
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description
mus: Get GpuMemoryBuffer from gpu process when possible.
From the window server (i.e. gpu-host process), try to create the
GpuMemoryBuffer from the gpu process first, before falling back
to using shared memory. To do this correctly, it is necessary to
associate these memory buffers with the unique client-id of each client.
So create a separate mojom::GpuService instance for each client
(instead of using the same instance for handling requests from all
clients).
BUG=643746
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Committed: https://crrev.com/0b7867d1681e539c50e5cf075baf6863dd128055
Cr-Commit-Position: refs/heads/master@{#436874}
Patch Set 1 #
Patch Set 2 : . #
Patch Set 3 : . #
Patch Set 4 : . #
Patch Set 5 : . #
Patch Set 6 : self nits #
Patch Set 7 : . #
Total comments: 13
Patch Set 8 : . #
Total comments: 1
Patch Set 9 : . #
Total comments: 5
Patch Set 10 : . #
Total comments: 6
Patch Set 11 : . #
Patch Set 12 : . #
Patch Set 13 : . #
Patch Set 14 : . #
Messages
Total messages: 83 (59 generated)
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: cast_shell_linux on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/cast_shell_linu...)
Description was changed from ========== mus: Get gmb from gpu. BUG= ========== to ========== mus: Get gmb from gpu. BUG= CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ==========
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: chromium_presubmit on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...)
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Description was changed from ========== mus: Get gmb from gpu. BUG= CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ========== to ========== mus: Get GpuMemoryBuffer from gpu process when possible. From the window server (i.e. gpu-host process), try to create the GpuMemoryBuffer from the gpu process first, before falling back to using shared memory. To do this correctly, it is necessary to associate these memory buffers to unique client-id of each client. So create a separate mojom::GpuService instance for each client (instead of using the same instance for handling requests from all clients). BUG=643746 CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel ==========
sadrul@chromium.org changed reviewers: + reveman@chromium.org
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_chromium_rel_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_rel_...)
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
I should note that I am working on sharing code between the MusGpuMemoryBufferManager and BrowserGpuMemoryBufferManager, but want to fix up MusGpuMemoryBufferManager first. Thanks!
On 2016/11/17 15:49:00, sadrul wrote:
> I should note that I am working on sharing code between the
> MusGpuMemoryBufferManager and BrowserGpuMemoryBufferManager, but want to fix up
> MusGpuMemoryBufferManager first. Thanks!

ping
https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
File services/ui/ws/gpu_service_proxy.cc (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
services/ui/ws/gpu_service_proxy.cc:31: class GpuServiceImpl : public mojom::GpuService {
I'm confused by this. Who is supposed to use this interface? Why are they not using the GpuMemoryBufferManager directly? The purpose of the GMBM interface is to hide all the complicated out-of-process logic that might be needed, and it's what chromium code should be using in general. Here you seem to be wrapping that interface in a GpuService, while I would have expected something more like the opposite: GMBM using some mojo to allocate native buffers through another process.

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
File services/ui/ws/mus_gpu_memory_buffer_manager.h (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
services/ui/ws/mus_gpu_memory_buffer_manager.h:34: gfx::GpuMemoryBufferHandle AllocateGpuMemoryBufferHandle(
Hm, I'm not a fan of a Handle suffix. It's allocating a buffer, not a handle; the result is just returned in the form of a handle. I think the problem is that the AllocateGpuMemoryBuffer function below should be named CreateGpuMemoryBuffer, as it returns an instance. We might want a quick rename before we land this.

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
services/ui/ws/mus_gpu_memory_buffer_manager.h:64: std::unordered_map<int, NativeBuffers> native_buffers_;
What is this used for? It seems to keep track of which buffers are native, but I'm not seeing it actually being used for anything.
https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
File services/ui/ws/gpu_service_proxy.cc (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
services/ui/ws/gpu_service_proxy.cc:31: class GpuServiceImpl : public mojom::GpuService {
On 2016/11/22 08:18:56, reveman wrote:
> I'm confused by this. Who is supposed to use this interface? Why are they not
> using the GpuMemoryBufferManager directly? The purpose of the GMBM interface is
> to hide all complicated out-of-process logic that might be needed and it's what
> chromium code should be using in general. Here you seem to be wrapping that
> interface in a GpuService while I would have expected something more of the
> opposite; GMBM using some mojo to allocate native buffers through another process.

I have shared a doc [1] with you that hopefully explains this a little better. But briefly: mus-clients use MojoGMBManager to talk to the mus-window-server. This GpuServiceImpl (which lives in the mus-ws) is the end-point of that IPC channel. It receives the request from the mus-clients, and forwards it to the mus-gpu through MusGpuMemoryBufferManager (which lives in the mus-ws). ui::GpuServiceInternal (which lives in mus-gpu) receives that message, and uses GMBFactory to do the actual memory allocation.

The names may be confusing. I am open to suggestions for better names.

[1] https://docs.google.com/presentation/d/1Ms1kgyg_nCxdbskxV4M9op5OtklKDcmkIk56m...

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
File services/ui/ws/mus_gpu_memory_buffer_manager.h (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
services/ui/ws/mus_gpu_memory_buffer_manager.h:64: std::unordered_map<int, NativeBuffers> native_buffers_;
On 2016/11/22 08:18:56, reveman wrote:
> what is this used for? It seems to keep track of what are native buffers but I'm
> not seeing it actually being used for anything.

This is somewhat equivalent to the BrowserGpuMemoryBufferManager::clients_ map [1]. It is used to decide when to notify mus-gpu about GMB destruction.

[1] https://cs.chromium.org/chromium/src/content/browser/gpu/browser_gpu_memory_b...
https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
File services/ui/ws/gpu_service_proxy.cc (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
services/ui/ws/gpu_service_proxy.cc:31: class GpuServiceImpl : public mojom::GpuService {
On 2016/11/22 at 18:45:41, sadrul wrote:
> I have shared a doc [1] with you that hopefully explains this a little better.
> But briefly: mus-clients use MojoGMBManager to talk to the mus-window-server.
> This GpuServiceImpl (which lives in the mus-ws) is the end-point of that IPC
> channel. This receives the request from the mus-clients, and forwards that to
> the mus-gpu through MusGpuMemoryBufferManager (which lives in the mus-ws).
> ui::GpuServiceInternal (which lives in mus-gpu) receives that message, and uses
> GMBFactory to do actual memory allocation.
>
> The names may be confusing. I am open to suggestions for better names.
>
> [1] https://docs.google.com/presentation/d/1Ms1kgyg_nCxdbskxV4M9op5OtklKDcmkIk56m...

Ok, makes sense. I guess MojoGMBManager can't talk to MusGMBManager directly? In a mus world, are we supposed to survive mus-gpu process crashes, or is that considered fatal? Curious because, if the latter, we need to make sure mus-gpu is respawned correctly so that GMB allocation never fails.

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
File services/ui/ws/mus_gpu_memory_buffer_manager.h (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
services/ui/ws/mus_gpu_memory_buffer_manager.h:64: std::unordered_map<int, NativeBuffers> native_buffers_;
On 2016/11/22 at 18:45:41, sadrul wrote:
> This is somewhat equivalent to the BrowserGpuMemoryBufferManager::clients_ map
> [1]. This is used to decide when to notify mus-gpu about GMB destruction.
>
> [1] https://cs.chromium.org/chromium/src/content/browser/gpu/browser_gpu_memory_b...

That's only needed for Mac, because IOSurface sharing is currently broken there (it doesn't survive GPU process crashes like other platforms do). I created a bug some time ago for fixing it (crbug.com/649721), and that would remove the need for this map. Let's not add it for MUS, as it's broken by design; the destroy notification for GMBs shouldn't exist. We should fix the Mac issue asap, as that will also guarantee that GMBs don't cause drawing artifacts when the GPU process crashes. You only need this for MUS if you're planning to support Mac with native IOSurface-backed GMBs before the above-mentioned issue is fixed.
https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
File services/ui/ws/gpu_service_proxy.cc (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
services/ui/ws/gpu_service_proxy.cc:31: class GpuServiceImpl : public mojom::GpuService {
On 2016/11/22 23:25:04, reveman wrote:
> Ok, makes sense. I guess MojoGMBManager can't talk to MusGMBManager directly?

It could if we really wanted to. But that would mean an extra message-pipe between the two, whereas with the current set-up, the same message-pipe can be used to establish the gpu-channel too.

> In a mus world, are we supposed to survive mus-gpu process crashes or is that
> considered fatal? Curious as if the latter we need to make sure mus-gpu is
> respawned correctly for GMBs allocation to never fail.

We need to survive a mus-gpu process crash.

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
File services/ui/ws/mus_gpu_memory_buffer_manager.h (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
services/ui/ws/mus_gpu_memory_buffer_manager.h:64: std::unordered_map<int, NativeBuffers> native_buffers_;
On 2016/11/22 23:25:04, reveman wrote:
> That's only needed for Mac because IOSurface sharing is currently broken there
> (it doesn't survive GPU process crashes like other platforms do). I created a
> bug some time ago for fixing it (crbug.com/649721) and that would remove the
> need for this map. Let's not add it for MUS as it's broken by design. [...]

Ok. If I understand correctly, you are suggesting that I don't actually need to do this, and I should remove it from the CL?
https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
File services/ui/ws/mus_gpu_memory_buffer_manager.h (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
services/ui/ws/mus_gpu_memory_buffer_manager.h:64: std::unordered_map<int, NativeBuffers> native_buffers_;
On 2016/11/24 02:02:41, sadrul wrote:
> Ok. If I understand correctly, you are suggesting that I don't actually need to
> do this, and I should remove it from the CL?

It looks like ozone may be doing something as well? [1] Or can that be removed too?

[1] https://cs.chromium.org/chromium/src/gpu/ipc/service/gpu_memory_buffer_factor...
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
I have merged with the ToT changes. PTAL.

https://codereview.chromium.org/2503113002/diff/140001/services/ui/common/mus...
File services/ui/common/mus_gpu_memory_buffer_manager.h (right):

https://codereview.chromium.org/2503113002/diff/140001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.h:60: std::unordered_map<int, NativeBuffers> native_buffers_;
I have kept this, since it looks like it is still needed until crbug.com/628334 is resolved?
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
sadrul@chromium.org changed reviewers: + sky@chromium.org
+sky@ for services/ui/common
LGTM - you should be an owner of this file; feel free to update OWNERS accordingly.
https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
File services/ui/ws/gpu_service_proxy.cc (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
services/ui/ws/gpu_service_proxy.cc:31: class GpuServiceImpl : public mojom::GpuService {
On 2016/11/24 at 02:02:41, sadrul wrote:
> It could if we really wanted to. But that would mean an extra message-pipe
> between the two, whereas with the current set-up, the same message-pipe can be
> used to establish the gpu-channel too.
>
> We need to survive a mus-gpu process crash.

Ok, what happens if CreateGpuMemoryBuffer is called just after the mus-gpu process has crashed but a new connection has not been established? Is there code in place that will try to re-establish a new connection if GMB allocation fails? Similar to what we have for content/ here:
https://cs.chromium.org/chromium/src/content/browser/gpu/browser_gpu_memory_b...

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
File services/ui/ws/mus_gpu_memory_buffer_manager.h (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/mus_gpu...
services/ui/ws/mus_gpu_memory_buffer_manager.h:64: std::unordered_map<int, NativeBuffers> native_buffers_;
On 2016/11/24 at 03:34:04, sadrul wrote:
> It looks like ozone may be doing something as well? [1] Or can that be removed
> too?
>
> [1] https://cs.chromium.org/chromium/src/gpu/ipc/service/gpu_memory_buffer_factor...

Yes, I forgot that we unfortunately have a workaround in place that requires this (crbug.com/628334). This should not be needed in the future, but I guess we have to deal with it even in the mus case for now.

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
File services/ui/common/mus_gpu_memory_buffer_manager.cc (right):

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:89: if (!native_buffers_.count(client_id) ||
Can we avoid iterating over all buffers? Finding the first is sufficient here.

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:92: if (native_buffers_[client_id].empty())
Can the manager be notified when the client is gone, instead of having to handle this special case here each time we destroy a buffer? Notifying also allows us to make sure that all native buffers are properly destroyed on the gpu service side when a client goes away.
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
File services/ui/ws/gpu_service_proxy.cc (right):

https://codereview.chromium.org/2503113002/diff/120001/services/ui/ws/gpu_ser...
services/ui/ws/gpu_service_proxy.cc:31: class GpuServiceImpl : public mojom::GpuService {
On 2016/11/30 17:32:45, reveman wrote:
> Ok, what happens if CreateGpuMemoryBuffer is called just after the mus-gpu
> process has crashed but a new connection has not been established? Is there code
> in place that will try to re-establish a new connection if GMB allocation fails?
> Similar to what we have for content/ here:
> https://cs.chromium.org/chromium/src/content/browser/gpu/browser_gpu_memory_b...

The short answer is: yes. But let me explain: if the gpu process dies, then |gpu_service_internal_| of this object will not be able to make any requests (or, the requests it makes will not go anywhere). As the owner of the connection, GpuServiceProxy will have to recreate that connection. Here is an example of how that would look: https://codereview.chromium.org/2546563002. This process of reconnecting will respawn the gpu process.

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
File services/ui/common/mus_gpu_memory_buffer_manager.cc (right):

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:89: if (!native_buffers_.count(client_id) ||
On 2016/11/30 17:32:45, reveman wrote:
> Can we avoid iterating over all buffers? Finding the first is sufficient here.

Not totally sure what you mean here. This is checking whether there's a native memory buffer associated with (id, client_id).

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:92: if (native_buffers_[client_id].empty())
On 2016/11/30 17:32:45, reveman wrote:
> Can the manager be notified when the client is gone, instead of having to handle
> this special case here each time we destroy a buffer? Notifying also allows us
> to make sure that all native buffers are properly destroyed on the gpu service
> side when a client goes away.

Oh, good point. I have added the callback to notify this manager when a client goes away. But is it possible for a client to destroy some memory buffer it had previously allocated without freeing all such memory buffers?
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
LGTM with some nits.

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
File services/ui/common/mus_gpu_memory_buffer_manager.cc (right):

https://codereview.chromium.org/2503113002/diff/160001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:89: if (!native_buffers_.count(client_id) ||
On 2016/11/30 at 19:54:31, sadrul wrote:
> Not totally sure what you mean here. This is checking whether there's a native
> memory buffer associated with (id, client_id).

I was referring to count() being less efficient than find(); in this case find() is sufficient and more properly matches the intent of the code. Anyhow, it looks like this count() call is not needed at all in the latest patch, unless I'm missing something.

https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
File services/ui/common/mus_gpu_memory_buffer_manager.cc (right):

https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:89: if (!native_buffers_.count(client_id) ||
nit: Do we need to check that the client_id exists? What's the harm of assuming that it exists?

https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:93: native_buffers_.erase(client_id);
nit: We don't need these two lines anymore now that we notify the manager when a client is gone, right?

https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:100: if (!native_buffers_.count(client_id))
nit: This check seems like an unnecessary optimization.
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
File services/ui/common/mus_gpu_memory_buffer_manager.cc (right):

https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:89: if (!native_buffers_.count(client_id) ||
On 2016/12/02 19:12:51, reveman wrote:
> nit: Do we need to check that the client_id exists? What's the harm of assuming
> that it exists?

We end up creating an empty map here if the client_id did not already exist. But I suppose since we clear the map anyway when the client goes away, this would not be a problem. Removed.

btw: both find [1] and count [2] have constant complexity on average.

[1] http://en.cppreference.com/w/cpp/container/unordered_map/find
[2] http://en.cppreference.com/w/cpp/container/unordered_map/count

https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:93: native_buffers_.erase(client_id);
On 2016/12/02 19:12:51, reveman wrote:
> nit: We don't need these two lines anymore now that we notify the manager when a
> client is gone, right?

Yep. Removed.

https://codereview.chromium.org/2503113002/diff/180001/services/ui/common/mus...
services/ui/common/mus_gpu_memory_buffer_manager.cc:100: if (!native_buffers_.count(client_id))
On 2016/12/02 19:12:51, reveman wrote:
> nit: This check seems like an unnecessary optimization

Removed.
The CQ bit was checked by sadrul@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from sky@chromium.org, reveman@chromium.org Link to the patchset: https://codereview.chromium.org/2503113002/#ps220001 (title: ".")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_chromium_chromeos_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by sadrul@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_chromium_chromeos_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_chromeos_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
Patchset #13 (id:240001) has been deleted
The CQ bit was checked by sadrul@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was checked by sadrul@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from reveman@chromium.org, sky@chromium.org Link to the patchset: https://codereview.chromium.org/2503113002/#ps280001 (title: ".")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: win_optional_gpu_tests_rel on master.tryserver.chromium.win (JOB_TIMED_OUT, no build URL)
The CQ bit was checked by sadrul@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 280001, "attempt_start_ts": 1481085486340030, "parent_rev": "6a33247ada8b6f4b9f55512e2f30b41befd5a8eb", "commit_rev": "bce496b7dd17fcf898a8538ccac6c264a584d544"}
Message was sent while issue was closed.
Description was changed from

==========
mus: Get GpuMemoryBuffer from gpu process when possible.

From the window server (i.e. gpu-host process), try to create the
GpuMemoryBuffer from the gpu process first, before falling back
to using shared memory. To do this correctly, it is necessary to
associate these memory buffers to unique client-id of each client.
So create a separate mojom::GpuService instance for each client
(instead of using the same instance for handling requests from all
clients).

BUG=643746
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
==========

to

==========
mus: Get GpuMemoryBuffer from gpu process when possible.

From the window server (i.e. gpu-host process), try to create the
GpuMemoryBuffer from the gpu process first, before falling back
to using shared memory. To do this correctly, it is necessary to
associate these memory buffers to unique client-id of each client.
So create a separate mojom::GpuService instance for each client
(instead of using the same instance for handling requests from all
clients).

BUG=643746
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
==========
Message was sent while issue was closed.
Committed patchset #14 (id:280001)
Message was sent while issue was closed.
Description was changed from

==========
mus: Get GpuMemoryBuffer from gpu process when possible.

From the window server (i.e. gpu-host process), try to create the
GpuMemoryBuffer from the gpu process first, before falling back
to using shared memory. To do this correctly, it is necessary to
associate these memory buffers to unique client-id of each client.
So create a separate mojom::GpuService instance for each client
(instead of using the same instance for handling requests from all
clients).

BUG=643746
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
==========

to

==========
mus: Get GpuMemoryBuffer from gpu process when possible.

From the window server (i.e. gpu-host process), try to create the
GpuMemoryBuffer from the gpu process first, before falling back
to using shared memory. To do this correctly, it is necessary to
associate these memory buffers to unique client-id of each client.
So create a separate mojom::GpuService instance for each client
(instead of using the same instance for handling requests from all
clients).

BUG=643746
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel

Committed: https://crrev.com/0b7867d1681e539c50e5cf075baf6863dd128055
Cr-Commit-Position: refs/heads/master@{#436874}
==========
Message was sent while issue was closed.
Patchset 14 (id:??) landed as https://crrev.com/0b7867d1681e539c50e5cf075baf6863dd128055 Cr-Commit-Position: refs/heads/master@{#436874}