Chromium Code Reviews

Issue 2866973003: Fix global dumps to be dumped only when shared memory is actually used

Created:
3 years, 7 months ago by hajimehoshi
Modified:
3 years, 7 months ago
CC:
chromium-reviews, posciak+watch_chromium.org, piman+watch_chromium.org, cc-bugs_chromium.org, feature-media-reviews_chromium.org
Target Ref:
refs/heads/master
Project:
chromium
Visibility:
Public.

Description

Fix global dumps to be dumped only when shared memory is actually used

Before this CL, a global shared memory dump was created by calling
CreateSharedGlobalAllocatorDump regardless of what the actual backing was.
For example, gpu::Buffer's backing might or might not be shared memory.
This CL creates global shared memory dumps only when the backing is
actually shared memory.

BUG=604726
TEST=gpu_unittests
CQ_INCLUDE_TRYBOTS=master.tryserver.blink:linux_trusty_blink_rel;master.tryserver.chromium.android:android_optional_gpu_tests_rel;master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
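
For context, a minimal sketch of the pattern this CL moves to, using the
base::trace_event memory-infra API; the helper function, its parameters, and
the shared-memory check are illustrative assumptions, not the exact diff of
this CL:

#include <stdint.h>

#include <string>

#include "base/trace_event/memory_allocator_dump.h"
#include "base/trace_event/process_memory_dump.h"

// Hypothetical helper: always emit the local dump for a buffer, but attach
// it to a shared global allocator dump only when the buffer is really backed
// by shared memory. Before this CL the global dump was created
// unconditionally.
void DumpBufferMemory(
    base::trace_event::ProcessMemoryDump* pmd,
    const std::string& dump_name,
    uint64_t buffer_size,
    bool backed_by_shared_memory,
    const base::trace_event::MemoryAllocatorDumpGuid& shared_guid) {
  base::trace_event::MemoryAllocatorDump* dump =
      pmd->CreateAllocatorDump(dump_name);
  dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize,
                  base::trace_event::MemoryAllocatorDump::kUnitsBytes,
                  buffer_size);
  if (backed_by_shared_memory) {
    pmd->CreateSharedGlobalAllocatorDump(shared_guid);
    pmd->AddOwnershipEdge(dump->guid(), shared_guid);
  }
}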

Patch Set 1
Patch Set 2: Fix test errors
Patch Set 3: Remove change for GpuMemoryBuffer

Files changed (+47 lines, -20 lines):
M cc/raster/staging_buffer_pool.cc (+3 lines, -2 lines)
M gpu/command_buffer/client/cmd_buffer_helper.cc (+6 lines, -4 lines)
M gpu/command_buffer/client/gles2_implementation.cc (+7 lines, -5 lines)
M gpu/command_buffer/client/gles2_implementation_unittest.cc (+5 lines, -0 lines)
M gpu/command_buffer/client/mapped_memory.h (+2 lines, -0 lines)
M gpu/command_buffer/client/mapped_memory.cc (+6 lines, -5 lines)
M gpu/command_buffer/client/transfer_buffer.h (+3 lines, -0 lines)
M gpu/command_buffer/client/transfer_buffer.cc (+8 lines, -0 lines)
M gpu/command_buffer/service/buffer_manager.cc (+7 lines, -4 lines)

Messages

Total messages: 30 (21 generated)
hajimehoshi
PTAL
3 years, 7 months ago (2017-05-08 12:19:57 UTC) #9
ssid
Is it the case that we create memory dumps for non-existent memory or is it ...
3 years, 7 months ago (2017-05-09 18:59:51 UTC) #17
hajimehoshi
On 2017/05/09 18:59:51, ssid wrote: > Is it the case that we create memory dumps ...
3 years, 7 months ago (2017-05-10 08:47:53 UTC) #18
hajimehoshi
On 2017/05/10 08:47:53, hajimehoshi wrote: > On 2017/05/09 18:59:51, ssid wrote: > > Is it ...
3 years, 7 months ago (2017-05-10 09:28:28 UTC) #23
hajimehoshi
ping ericrk
3 years, 7 months ago (2017-05-12 05:57:53 UTC) #26
ericrk
On 2017/05/12 05:57:53, hajimehoshi wrote: > ping ericrk I agree with ssid@ that our current ...
3 years, 7 months ago (2017-05-15 18:38:54 UTC) #27
hajimehoshi
Thank you for your reply! On 2017/05/15 18:38:54, ericrk wrote: > On 2017/05/12 05:57:53, hajimehoshi ...
3 years, 7 months ago (2017-05-16 06:14:57 UTC) #28
ericrk
On 2017/05/16 06:14:57, hajimehoshi wrote: > Thank your for your reply! > > On 2017/05/15 ...
3 years, 7 months ago (2017-05-17 06:02:04 UTC) #29
ssid
3 years, 7 months ago (2017-05-18 23:13:14 UTC) #30
On 2017/05/17 06:02:04, ericrk wrote:
> On 2017/05/16 06:14:57, hajimehoshi wrote:
> > Thank you for your reply!
> >
> > On 2017/05/15 18:38:54, ericrk wrote:
> > > On 2017/05/12 05:57:53, hajimehoshi wrote:
> > > > ping ericrk
> > >
> > > I agree with ssid@ that our current logging is not dependent on shared
> > > memory, and we are likely behaving ~correctly as-is.
> > >
> > > I think a lot of the confusion is in the naming of functions on
> > > TransferBuffer/etc... We may want to rename some of these.
> > >
> > > In general, GetBufferGUIDForTracing doesn't care if the allocation is
> > > shared memory or not. For instance, in the case of an in-process command
> > > buffer, we may share these across components without shared memory. In
> > > these cases, we still want to generate global dumps.
> >
> > I was wondering how MemoryBufferBacking is shared between processes. I
> > thought they are actually not shared, as far as I read the code.
>
> When we use MemoryBufferBacking (as opposed to SharedMemoryBufferBacking),
> the data itself isn't shared between processes. However, it might be shared
> between multiple components within the same process. These components still
> need a way to alias memory, so we use CreateSharedGlobalAllocatorDump and
> edges.
>
> Maybe we could create an ownership edge for these without the shared global
> allocator dump? Although using the global allocator dump seems to keep the
> code uniform between the cross-process and cross-thread cases, as well as
> allowing different components to dump / alias the same memory without
> knowledge of each other.
>
> ssid@, is there another technique we should be using in these cases where
> there's no cross-process sharing?
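
To make the aliasing pattern described above concrete, here is a rough sketch
(the dump names, sizes, and function names are made up, not actual Chromium
code) of two components in the same process pointing their local dumps at one
shared global allocator dump:

#include "base/trace_event/memory_allocator_dump.h"
#include "base/trace_event/process_memory_dump.h"

// Both components report their own view of the buffer and alias it through
// the same global dump, so the trace viewer can deduplicate the bytes even
// though no base::SharedMemory is involved.
void CcComponentOnMemoryDump(
    base::trace_event::ProcessMemoryDump* pmd,
    const base::trace_event::MemoryAllocatorDumpGuid& global_guid) {
  base::trace_event::MemoryAllocatorDump* dump =
      pmd->CreateAllocatorDump("cc/transfer_buffer/buffer_1");
  dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize,
                  base::trace_event::MemoryAllocatorDump::kUnitsBytes, 4096);
  pmd->CreateSharedGlobalAllocatorDump(global_guid);
  pmd->AddOwnershipEdge(dump->guid(), global_guid);
}

void GpuComponentOnMemoryDump(
    base::trace_event::ProcessMemoryDump* pmd,
    const base::trace_event::MemoryAllocatorDumpGuid& global_guid) {
  base::trace_event::MemoryAllocatorDump* dump =
      pmd->CreateAllocatorDump("gpu/transfer_buffer/buffer_1");
  dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize,
                  base::trace_event::MemoryAllocatorDump::kUnitsBytes, 4096);
  // Same |global_guid| as in the cc dump provider above: the two local dumps
  // are now aliases of one global dump.
  pmd->CreateSharedGlobalAllocatorDump(global_guid);
  pmd->AddOwnershipEdge(dump->guid(), global_guid);
}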

Let's say cc/transfer_memory1 is the main allocator and gpu/memory_buffer1 is
a component sharing the memory. TransferMemory does not know anything about
who is using the memory; it just allocates when some component requests it.
The MemoryBuffer dump provider should then do:

MemoryBuffer::OnMemoryDump() does:

dump = CreateAllocatorDump("gpu/memory_buffer1");
dump->AddSize(123);
// Now this component knows about transfer buffers and knows the id "1"
// given by the TransferMemory provider since it allocates memory from that.
target = pmd->GetOrCreateMemoryDump("cc/transfer_memory1");
pmd->AddOwnershipEdge(dump->guid(), target->guid());

TransferMemory::OnMemoryDump() does:
pmd->CreateAllocatorDump("cc/transfer_memory1");

// To get unique id for the other components.
TransferMemory::GetUniqueId() {
 return 1;
}

This would cause the memory of the transfer buffer to be attributed to
gpu/memory_buffer1, and the effective size of the transfer buffer would
become 0.
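
The same sketch written against the real memory-infra API would look roughly
like this (GetOrCreateAllocatorDump / AddScalar are the actual method names;
the dump names and the 123-byte size are still illustrative):

#include "base/trace_event/memory_allocator_dump.h"
#include "base/trace_event/process_memory_dump.h"

void MemoryBufferOnMemoryDump(base::trace_event::ProcessMemoryDump* pmd) {
  base::trace_event::MemoryAllocatorDump* dump =
      pmd->CreateAllocatorDump("gpu/memory_buffer1");
  dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize,
                  base::trace_event::MemoryAllocatorDump::kUnitsBytes, 123);
  // This component knows the allocator's dump name (the "1" coming from
  // TransferMemory's unique id) and adds an ownership edge towards it.
  // GetOrCreateAllocatorDump is used so the two dump providers can run in
  // either order without creating the dump twice.
  base::trace_event::MemoryAllocatorDump* target =
      pmd->GetOrCreateAllocatorDump("cc/transfer_memory1");
  pmd->AddOwnershipEdge(dump->guid(), target->guid());
}

void TransferMemoryOnMemoryDump(base::trace_event::ProcessMemoryDump* pmd) {
  // TransferMemory reports its own allocation size as usual; the ownership
  // edge above is what attributes these bytes to gpu/memory_buffer1 and
  // drives cc/transfer_memory1's effective size to 0.
  base::trace_event::MemoryAllocatorDump* dump =
      pmd->GetOrCreateAllocatorDump("cc/transfer_memory1");
  dump->AddScalar(base::trace_event::MemoryAllocatorDump::kNameSize,
                  base::trace_event::MemoryAllocatorDump::kUnitsBytes, 123);
}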

I see two problems if we want to use in-process ownership edges instead of
global dump edges:
1. We would need a way to tell whether the memory is shared in-process or not.
2. It would be inconsistent with the global dump behavior. It was simply
consistent to add global dump edges in all cases and let the trace viewer
work out where each edge points (in-process or out-of-process).

Unrelated question:
How does GPU memory (memory allocated by the GPU driver) get shared? I'd
assume it is not shared memory. We might simply be missing edges there,
because we only account for base::SharedMemory.
