Description

mac: Emit dyld info from the memory dump provider.
Both task_info(TASK_DYLD_INFO) and mach_vm_region_recurse() provide information
about the memory regions in the process. There is some duplicate information.
This CL updates DumpProcessMemoryMaps() to use both methods, and merges the
information from the two functions.
This CL also avoids emitting byte stats for regions in the dyld shared cache,
since it's not possible to assign byte stats to specific regions, and the
regions are likely in use by non-Chrome processes anyways.
BUG=677302
Review-Url: https://codereview.chromium.org/2674973004
Cr-Commit-Position: refs/heads/master@{#450113}
Committed: https://chromium.googlesource.com/chromium/src/+/a232b05bbe4456d3e7ac38074a48a35488ea6f8b
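As a rough illustration of the merge the description refers to, the sketch below attaches dyld-provided image paths to regions found by the VM walk. Everything here (the types, the names, matching by exact load address) is a simplified assumption for illustration, not the CL's actual implementation:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for the tracing VMRegion; the real CL populates
// base::trace_event::ProcessMemoryMaps::VMRegion.
struct VMRegion {
  uint64_t start_address = 0;
  uint64_t size_in_bytes = 0;
  uint64_t byte_stats_resident = 0;
  std::string mapped_file;  // empty if unknown
};

// Hypothetical stand-in for what task_info(TASK_DYLD_INFO) ultimately
// yields per loaded image: a load address plus an on-disk path.
struct DyldImage {
  uint64_t load_address;
  std::string path;
};

// Merge the two sources: the vm_region walk supplies byte stats, and dyld
// supplies the file path for regions that begin at an image's load address.
std::vector<VMRegion> MergeRegionInfo(std::vector<VMRegion> regions,
                                      const std::vector<DyldImage>& images) {
  std::map<uint64_t, std::string> path_by_address;
  for (const DyldImage& image : images)
    path_by_address[image.load_address] = image.path;
  for (VMRegion& region : regions) {
    auto it = path_by_address.find(region.start_address);
    if (it != path_by_address.end() && region.mapped_file.empty())
      region.mapped_file = it->second;
  }
  return regions;
}
```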
Patch Set 1 #
Total comments: 1
Patch Set 2 : Merge info from dyld. #
Patch Set 3 : Minor fixes. #
Total comments: 6
Patch Set 4 : Comments from mark. #
Patch Set 5 : Use initprot instead of maxprot. #
Messages
Total messages: 35 (21 generated)
The CQ bit was checked by erikchen@chromium.org to run a CQ dry run
erikchen@chromium.org changed reviewers: + mark@chromium.org
mark: Please review.
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
LGTM but consider strengthening the test

https://codereview.chromium.org/2674973004/diff/1/components/tracing/common/p...
File components/tracing/common/process_metrics_memory_dump_provider.cc (right):

https://codereview.chromium.org/2674973004/diff/1/components/tracing/common/p...
components/tracing/common/process_metrics_memory_dump_provider.cc:261: // Ignore regions in the dyld shared cache.
Anyone can come along and map something else in this region to hide from our statistic gathering, accidentally or on purpose. Are there other characteristics of the region that we can check to make sure that this is likely to be the dyld shared cache region? A dedicated user_tag, perhaps? The fact that the top-level entry is a submap? Anything else?
primiano@chromium.org changed reviewers: + primiano@chromium.org
Speculative RS-LGTM on whatever you folks decide is the best here.
On 2017/02/03 23:17:13, Mark Mentovai wrote:
> LGTM but consider strengthening the test
>
> https://codereview.chromium.org/2674973004/diff/1/components/tracing/common/p...
> File components/tracing/common/process_metrics_memory_dump_provider.cc (right):
>
> https://codereview.chromium.org/2674973004/diff/1/components/tracing/common/p...
> components/tracing/common/process_metrics_memory_dump_provider.cc:261: // Ignore
> regions in the dyld shared cache.
> Anyone can come along and map something else in this region to hide from our
> statistic gathering, accidentally or on purpose. Are there other characteristics
> of the region that we can check to make sure that this is likely to be the dyld
> shared cache region? A dedicated user_tag, perhaps? The fact that the top-level
> entry is a submap? Anything else?

Hm. This seems tricky. With depth = 0, we see that there are 4 submaps. [user_tag is always 0; I didn't display it here.]

"""
address | size | is_submap | pages_resident
0x7fff70000000 103727104 1 0 0
0x7fff762ec000 1130496 0 135 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76400000 2097152 0 478 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76600000 2097152 0 511 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76800000 2097152 0 417 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76a00000 2097152 0 501 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76c00000 2097152 0 470 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76e00000 2097152 0 485 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77000000 2097152 0 268 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77200000 2097152 0 441 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77400000 2097152 0 214 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77600000 10485760 1 0 0
0x7fff78000000 2097152 0 476 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78200000 2097152 0 407 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78400000 2097152 0 23 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78600000 2097152 0 321 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78800000 2097152 0 226 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78a00000 2097152 0 393 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78c00000 2097152 0 480 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78e00000 2097152 0 481 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79000000 2097152 0 510 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79200000 2097152 0 459 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79400000 2097152 0 487 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79600000 2097152 0 507 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79800000 2097152 0 136 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79a00000 8388608 1 0 0
0x7fff7a200000 2097152 0 353 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff7a400000 2097152 0 507 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff7a600000 2097152 0 421 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff7a800000 92274688 1 0 0
"""

With depth = 100, we fail to pick up the first submap, but see the other 3.

"""
address | size | is_submap | pages_resident
0x7fff762ec000 1130496 0 135 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76400000 2097152 0 478 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76600000 2097152 0 511 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76800000 2097152 0 417 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76a00000 2097152 0 501 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76c00000 2097152 0 470 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff76e00000 2097152 0 485 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77000000 2097152 0 268 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77200000 2097152 0 441 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77400000 2097152 0 214 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff77600000 10485760 0 180 0
0x7fff78000000 2097152 0 476 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78200000 2097152 0 407 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78400000 2097152 0 23 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78600000 2097152 0 321 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78800000 2097152 0 226 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78a00000 2097152 0 393 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78c00000 2097152 0 480 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff78e00000 2097152 0 481 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79000000 2097152 0 510 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79200000 2097152 0 459 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79400000 2097152 0 487 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79600000 2097152 0 507 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79800000 2097152 0 136 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff79a00000 8388608 0 49 0
0x7fff7a200000 2097152 0 353 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff7a400000 2097152 0 507 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff7a600000 2097152 0 421 0 /private/var/db/dyld/dyld_shared_cache_x86_64
0x7fff7a800000 610304 0 98 0
"""

From vmmap:

"""
Submap 00007fff70000000-00007fff762ec000 [ 98.9M 0K 0K 0K] r--/rwx SM=PRV process-only VM submap unused shlib
__DATA 00007fff762ec000-00007fff762ed000 [ 4K 4K 0K 0K] rw-/rwx SM=COW system shared lib __DATA not used by this process
__DATA 00007fff762ed000-00007fff762ee000 [ 4K 4K 4K 0K] rw-/rwx SM=COW /System/Library/Frameworks/IOSurface.framework/Versions/A/IOSurface
__DATA 00007fff762ee000-00007fff762ef000 [ 4K 4K 0K 0K] rw-/rwx SM=COW /System/Library/Frameworks/OpenGL.framework/Versions/A/Libraries/libGFXShared.dylib
...
...
Submap 00007fff7a800000-00007fff80000000 [ 88.0M 0K 0K 0K] r--/rwx SM=PRV process-only VM submap unused shlib
__DATA 00007fff7a800000-00007fff7a895000 [ 596K 392K 0K 0K] rw-/rw- SM=COW system shared lib __DATA not used by this process
"""

It seems almost as if the submaps represent the regions containing shared libraries that aren't in use by the process, as evidenced by the fact that vmmap displays the same region twice, with two different designations, e.g.:

"""
Submap 00007fff79a00000-00007fff7a200000 [ 8192K 0K 0K 0K] r--/rwx SM=PRV process-only VM submap unused shlib
__DATA 00007fff79a00000-00007fff7a200000 [ 8192K 196K 0K 0K] rw-/rw- SM=COW system shared lib __DATA not used by this process
"""

I really don't see what we can do beyond excluding the entire region, since we also want to exclude non-submaps in the dyld shared cache.
I think the right thing to do here is to get information from both TASK_DYLD_INFO and mach_vm_region, and then emit more detailed VMRegions for symbolications, but also manually fudge the resident_pages of dyld shared cache to be 0.
I can get on board with that.
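The plan agreed on above (emit the regions for symbolization, but zero out the byte stats of anything in the dyld shared cache) can be sketched as a small helper. The base and size constants below mirror SHARED_REGION_BASE_X86_64 / SHARED_REGION_SIZE_X86_64 from <mach/shared_region.h> as of this CL's era, but they are hard-coded assumptions here so the sketch is self-contained, and VMRegion is a hypothetical stand-in for the tracing type:

```cpp
#include <cstdint>

// Assumed values of SHARED_REGION_BASE_X86_64 / SHARED_REGION_SIZE_X86_64
// from <mach/shared_region.h>; hard-coded so this compiles anywhere.
constexpr uint64_t kSharedRegionBase = 0x7fff70000000ull;
constexpr uint64_t kSharedRegionSize = 0x10000000ull;

bool IsAddressInSharedRegion(uint64_t address) {
  return address >= kSharedRegionBase &&
         address < kSharedRegionBase + kSharedRegionSize;
}

// Hypothetical stand-in for the tracing VMRegion.
struct VMRegion {
  uint64_t start_address;
  uint64_t byte_stats_resident;
};

// Emit the region as usual, but "manually fudge" its resident bytes to 0
// when it lies inside the dyld shared cache.
void MaybeZeroByteStats(VMRegion* region) {
  if (IsAddressInSharedRegion(region->start_address))
    region->byte_stats_resident = 0;
}
```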
Description was changed from

==========
Don't count regions in the dyld shared cache.

Resident pages in the dyld shared cache aren't particularly meaningful, since they'll [most likely] have been resident regardless of whether Chrome was using them.

BUG=677302
==========

to

==========
mac: Emit dyld info from the memory dump provider.

Both task_info(TASK_DYLD_INFO) and mach_vm_region_recurse() provide information about the memory regions in the process. There is some duplicate information. This CL updates DumpProcessMemoryMaps() to use both methods, and merges the information from the two functions.

This CL also avoids emitting byte stats for regions in the dyld shared cache, since it's not possible to assign byte stats to specific regions, and the regions are likely in use by non-Chrome processes anyways.

BUG=677302
==========
The CQ bit was checked by erikchen@chromium.org to run a CQ dry run
mark: Please review.
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
LGTM

https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
File components/tracing/common/process_metrics_memory_dump_provider.cc (right):

https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
components/tracing/common/process_metrics_memory_dump_provider.cc:278: if (strcmp(seg->segname, SEG_PAGEZERO) == 0)
You should check that the size of this command is actually big enough to hold the structure that you’re interpreting.

https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
components/tracing/common/process_metrics_memory_dump_provider.cc:299: } else if (load_cmd->cmd == LC_SEGMENT) {
Don’t waste your time. You won’t see one of these loaded into a 64-bit process. And since you didn’t bother dealing with the 32-bit mach_header (not mach_header_64), there’s no reason to deal with this.

https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
components/tracing/common/process_metrics_memory_dump_provider.cc:328: if (reinterpret_cast<uint64_t>(header + 1) + header->sizeofcmds <
You should check up at the top of the loop that the command you’re about to interpret doesn’t exceed sizeofcmds.
https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
File components/tracing/common/process_metrics_memory_dump_provider.cc (right):

https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
components/tracing/common/process_metrics_memory_dump_provider.cc:278: if (strcmp(seg->segname, SEG_PAGEZERO) == 0)
On 2017/02/11 02:18:50, Mark Mentovai wrote:
> You should check that the size of this command is actually big enough to hold
> the structure that you’re interpreting.
Done.

https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
components/tracing/common/process_metrics_memory_dump_provider.cc:299: } else if (load_cmd->cmd == LC_SEGMENT) {
On 2017/02/11 02:18:50, Mark Mentovai wrote:
> Don’t waste your time. You won’t see one of these loaded into a 64-bit process.
> And since you didn’t bother dealing with the 32-bit mach_header (not
> mach_header_64), there’s no reason to deal with this.
Done.

https://codereview.chromium.org/2674973004/diff/40001/components/tracing/comm...
components/tracing/common/process_metrics_memory_dump_provider.cc:328: if (reinterpret_cast<uint64_t>(header + 1) + header->sizeofcmds <
On 2017/02/11 02:18:50, Mark Mentovai wrote:
> You should check up at the top of the loop that the command you’re about to
> interpret doesn’t exceed sizeofcmds.
Done.
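The two bounds checks Mark asked for amount to a defensively written load-command walk: verify that the next command header fits inside sizeofcmds before reading it, and verify that a command is large enough for the structure it is cast to before interpreting it. The structs below are minimal portable stand-ins for the <mach-o/loader.h> definitions (field layout intended to match, but treat it as illustrative), and CountSegments() is a hypothetical helper, not the CL's actual code:

```cpp
#include <cstdint>
#include <cstring>

// Minimal stand-ins for the <mach-o/loader.h> structures this walk needs.
struct MachHeader64 {
  uint32_t magic, cputype, cpusubtype, filetype;
  uint32_t ncmds, sizeofcmds, flags, reserved;
};
struct LoadCommand {
  uint32_t cmd, cmdsize;
};
struct SegmentCommand64 {
  uint32_t cmd, cmdsize;
  char segname[16];
  uint64_t vmaddr, vmsize, fileoff, filesize;
  uint32_t maxprot, initprot, nsects, flags;
};
constexpr uint32_t kLcSegment64 = 0x19;  // LC_SEGMENT_64

// Walks the load commands with both checks from the review:
// 1. a command (header and body) must not extend past sizeofcmds;
// 2. a command must be big enough for the struct it is interpreted as.
// Returns the number of valid 64-bit segment commands found.
int CountSegments(const MachHeader64* header) {
  const uint8_t* commands = reinterpret_cast<const uint8_t*>(header + 1);
  uint32_t offset = 0;
  int segments = 0;
  for (uint32_t i = 0; i < header->ncmds; ++i) {
    if (offset + sizeof(LoadCommand) > header->sizeofcmds)
      break;  // check (1): command header would exceed sizeofcmds
    LoadCommand lc;
    std::memcpy(&lc, commands + offset, sizeof(lc));
    if (lc.cmdsize < sizeof(LoadCommand) ||
        offset + lc.cmdsize > header->sizeofcmds)
      break;  // check (1): command body would exceed sizeofcmds
    if (lc.cmd == kLcSegment64 && lc.cmdsize >= sizeof(SegmentCommand64))
      ++segments;  // check (2): big enough to interpret as a segment
    offset += lc.cmdsize;
  }
  return segments;
}
```

Using memcpy instead of casting the raw pointer sidesteps alignment concerns while keeping the same bounds discipline.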
The CQ bit was checked by erikchen@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
LGTM
The CQ bit was unchecked by erikchen@chromium.org
The CQ bit was checked by erikchen@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from primiano@chromium.org Link to the patchset: https://codereview.chromium.org/2674973004/#ps60001 (title: "Comments from mark.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders:
linux_chromium_asan_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
linux_chromium_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
linux_chromium_tsan_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by erikchen@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from mark@chromium.org, primiano@chromium.org Link to the patchset: https://codereview.chromium.org/2674973004/#ps80001 (title: "Use initprot instead of maxprot.")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 80001, "attempt_start_ts": 1487018973493380, "parent_rev": "1100249acef2cfa42c46deb9279bbd864d7e8807", "commit_rev": "a232b05bbe4456d3e7ac38074a48a35488ea6f8b"}
Message was sent while issue was closed.
Description was changed from

==========
mac: Emit dyld info from the memory dump provider.

Both task_info(TASK_DYLD_INFO) and mach_vm_region_recurse() provide information about the memory regions in the process. There is some duplicate information. This CL updates DumpProcessMemoryMaps() to use both methods, and merges the information from the two functions.

This CL also avoids emitting byte stats for regions in the dyld shared cache, since it's not possible to assign byte stats to specific regions, and the regions are likely in use by non-Chrome processes anyways.

BUG=677302
==========

to

==========
mac: Emit dyld info from the memory dump provider.

Both task_info(TASK_DYLD_INFO) and mach_vm_region_recurse() provide information about the memory regions in the process. There is some duplicate information. This CL updates DumpProcessMemoryMaps() to use both methods, and merges the information from the two functions.

This CL also avoids emitting byte stats for regions in the dyld shared cache, since it's not possible to assign byte stats to specific regions, and the regions are likely in use by non-Chrome processes anyways.

BUG=677302
Review-Url: https://codereview.chromium.org/2674973004
Cr-Commit-Position: refs/heads/master@{#450113}
Committed: https://chromium.googlesource.com/chromium/src/+/a232b05bbe4456d3e7ac38074a48...
==========
Message was sent while issue was closed.
Committed patchset #5 (id:80001) as https://chromium.googlesource.com/chromium/src/+/a232b05bbe4456d3e7ac38074a48...