Chromium Code Reviews

Side by Side Diff: third_party/tcmalloc/chromium/src/memory_region_map.cc

Issue 12388070: Count m(un)map for each stacktrace in MemoryRegionMap instead of HeapProfileTable. (Closed) Base URL: svn://svn.chromium.org/chrome/trunk/src
Patch Set: addressed willchan's comments Created 7 years, 9 months ago
1 /* Copyright (c) 2006, Google Inc. 1 /* Copyright (c) 2006, Google Inc.
2 * All rights reserved. 2 * All rights reserved.
3 * 3 *
4 * Redistribution and use in source and binary forms, with or without 4 * Redistribution and use in source and binary forms, with or without
5 * modification, are permitted provided that the following conditions are 5 * modification, are permitted provided that the following conditions are
6 * met: 6 * met:
7 * 7 *
8 * * Redistributions of source code must retain the above copyright 8 * * Redistributions of source code must retain the above copyright
9 * notice, this list of conditions and the following disclaimer. 9 * notice, this list of conditions and the following disclaimer.
10 * * Redistributions in binary form must reproduce the above 10 * * Redistributions in binary form must reproduce the above
(...skipping 66 matching lines...)
77 // to get memory, thus we are able to call LowLevelAlloc from 77 // to get memory, thus we are able to call LowLevelAlloc from
78 // our mmap/sbrk hooks without causing a deadlock in it. 78 // our mmap/sbrk hooks without causing a deadlock in it.
79 // For the same reason of deadlock prevention the locking in MemoryRegionMap 79 // For the same reason of deadlock prevention the locking in MemoryRegionMap
80 // itself is write-recursive which is an exception to Google's mutex usage. 80 // itself is write-recursive which is an exception to Google's mutex usage.
81 // 81 //
82 // We still need to break the infinite cycle of mmap calling our hook, 82 // We still need to break the infinite cycle of mmap calling our hook,
83 // which asks LowLevelAlloc for memory to record this mmap, 83 // which asks LowLevelAlloc for memory to record this mmap,
84 // which (sometimes) causes mmap, which calls our hook, and so on. 84 // which (sometimes) causes mmap, which calls our hook, and so on.
85 // We do this as follows: on a recursive call of MemoryRegionMap's 85 // We do this as follows: on a recursive call of MemoryRegionMap's
86 // mmap/sbrk/mremap hook we record the data about the allocation in a 86 // mmap/sbrk/mremap hook we record the data about the allocation in a
87 // static fixed-sized stack (saved_regions), when the recursion unwinds 87 // static fixed-sized stack (saved_regions and saved_buckets), when the
88 // but before returning from the outer hook call we unwind this stack and 88 // recursion unwinds but before returning from the outer hook call we unwind
89 // move the data from saved_regions to its permanent place in the RegionSet, 89 // this stack and move the data from saved_regions and saved_buckets to its
90 // permanent place in the RegionSet and "bucket_table" respectively,
90 // which can cause more allocations and mmap-s and recursion and unwinding, 91 // which can cause more allocations and mmap-s and recursion and unwinding,
91 // but the whole process ends eventually due to the fact that for the small 92 // but the whole process ends eventually due to the fact that for the small
92 // allocations we are doing LowLevelAlloc reuses one mmap call and parcels out 93 // allocations we are doing LowLevelAlloc reuses one mmap call and parcels out
93 // the memory it created to satisfy several of our allocation requests. 94 // the memory it created to satisfy several of our allocation requests.
94 // 95 //
95 96
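The buffering scheme described above is easier to see in isolation. Below is a minimal, self-contained sketch of the same pattern with illustrative names (not the actual tcmalloc symbols): recursive invocations only append to a fixed static buffer, and the outermost invocation drains that buffer before returning.

  #include <cassert>
  #include <cstddef>

  struct Entry { void* start; size_t size; };     // stand-in for a Region record

  static const int kMaxSaved = 20;
  static Entry saved[kMaxSaved];                  // plays the role of saved_regions
  static int saved_count = 0;
  static bool recursive = false;                  // plays the role of recursive_insert

  // Stand-in for the permanent-store insert (RegionSet / bucket_table_ in the
  // real code); it may allocate, and therefore may re-enter RecordLocked().
  void InsertIntoPermanentStore(const Entry& e);

  void RecordLocked(const Entry& e) {
    if (recursive) {                              // inner (recursive) call:
      assert(saved_count < kMaxSaved);            // just buffer, never allocate
      saved[saved_count++] = e;
      return;
    }
    recursive = true;
    InsertIntoPermanentStore(e);                  // may mmap and recurse into us
    while (saved_count > 0)                       // drain whatever the recursion
      InsertIntoPermanentStore(saved[--saved_count]);  // buffered meanwhile
    recursive = false;
  }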
96 // ========================================================================= // 97 // ========================================================================= //
97 98
98 #include <config.h> 99 #include <config.h>
99 100
(...skipping 40 matching lines...)
140 int MemoryRegionMap::max_stack_depth_ = 0; 141 int MemoryRegionMap::max_stack_depth_ = 0;
141 MemoryRegionMap::RegionSet* MemoryRegionMap::regions_ = NULL; 142 MemoryRegionMap::RegionSet* MemoryRegionMap::regions_ = NULL;
142 LowLevelAlloc::Arena* MemoryRegionMap::arena_ = NULL; 143 LowLevelAlloc::Arena* MemoryRegionMap::arena_ = NULL;
143 SpinLock MemoryRegionMap::lock_(SpinLock::LINKER_INITIALIZED); 144 SpinLock MemoryRegionMap::lock_(SpinLock::LINKER_INITIALIZED);
144 SpinLock MemoryRegionMap::owner_lock_( // ACQUIRED_AFTER(lock_) 145 SpinLock MemoryRegionMap::owner_lock_( // ACQUIRED_AFTER(lock_)
145 SpinLock::LINKER_INITIALIZED); 146 SpinLock::LINKER_INITIALIZED);
146 int MemoryRegionMap::recursion_count_ = 0; // GUARDED_BY(owner_lock_) 147 int MemoryRegionMap::recursion_count_ = 0; // GUARDED_BY(owner_lock_)
147 pthread_t MemoryRegionMap::lock_owner_tid_; // GUARDED_BY(owner_lock_) 148 pthread_t MemoryRegionMap::lock_owner_tid_; // GUARDED_BY(owner_lock_)
148 int64 MemoryRegionMap::map_size_ = 0; 149 int64 MemoryRegionMap::map_size_ = 0;
149 int64 MemoryRegionMap::unmap_size_ = 0; 150 int64 MemoryRegionMap::unmap_size_ = 0;
151 HeapProfileBucket** MemoryRegionMap::bucket_table_ = NULL; // GUARDED_BY(lock_)
152 int MemoryRegionMap::num_buckets_ = 0; // GUARDED_BY(lock_)
153 int MemoryRegionMap::saved_buckets_count_ = 0; // GUARDED_BY(lock_)
154 HeapProfileBucket MemoryRegionMap::saved_buckets_[20]; // GUARDED_BY(lock_)
155
156 // GUARDED_BY(lock_)
157 const void* MemoryRegionMap::saved_buckets_keys_[20][kMaxStackDepth];
150 158
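For reference while reading the new bookkeeping, these are the HeapProfileBucket fields this file relies on. This is a sketch only; the authoritative definition lives in the heap profiler's shared headers.

  #include <cstdint>

  struct HeapProfileBucketSketch {
    uintptr_t hash;                  // hash of the call stack; picks the chain
    int depth;                       // number of PCs recorded for this stack
    const void** stack;              // the PCs themselves (the real lookup key)
    HeapProfileBucketSketch* next;   // next bucket in the same bucket_table_ chain
    int64_t allocs;                  // mmap/mremap/sbrk calls seen for this stack
    int64_t alloc_size;              // total bytes mapped from this stack
    int64_t frees;                   // munmap calls seen for this stack
    int64_t free_size;               // total bytes unmapped from this stack
  };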
151 // ========================================================================= // 159 // ========================================================================= //
152 160
153 // Simple hook into execution of global object constructors, 161 // Simple hook into execution of global object constructors,
154 // so that we do not call pthread_self() when it does not yet work. 162 // so that we do not call pthread_self() when it does not yet work.
155 static bool libpthread_initialized = false; 163 static bool libpthread_initialized = false;
156 static bool initializer = (libpthread_initialized = true, true); 164 static bool initializer = (libpthread_initialized = true, true);
157 165
158 static inline bool current_thread_is(pthread_t should_be) { 166 static inline bool current_thread_is(pthread_t should_be) {
159 // Before main() runs, there's only one thread, so we're always that thread 167 // Before main() runs, there's only one thread, so we're always that thread
(...skipping 15 matching lines...)
175 // We use RegionSetRep with noop c-tor so that global construction 183 // We use RegionSetRep with noop c-tor so that global construction
176 // does not interfere. 184 // does not interfere.
177 static MemoryRegionMap::RegionSetRep regions_rep; 185 static MemoryRegionMap::RegionSetRep regions_rep;
178 186
179 // ========================================================================= // 187 // ========================================================================= //
180 188
181 // Has InsertRegionLocked been called recursively 189 // Has InsertRegionLocked been called recursively
182 // (or rather should we *not* use regions_ to record a hooked mmap). 190 // (or rather should we *not* use regions_ to record a hooked mmap).
183 static bool recursive_insert = false; 191 static bool recursive_insert = false;
184 192
185 void MemoryRegionMap::Init(int max_stack_depth) { 193 void MemoryRegionMap::Init(int max_stack_depth, bool use_buckets) {
186 RAW_VLOG(10, "MemoryRegionMap Init"); 194 RAW_VLOG(10, "MemoryRegionMap Init");
187 RAW_CHECK(max_stack_depth >= 0, ""); 195 RAW_CHECK(max_stack_depth >= 0, "");
188 // Make sure we don't overflow the memory in region stacks: 196 // Make sure we don't overflow the memory in region stacks:
189 RAW_CHECK(max_stack_depth <= kMaxStackDepth, 197 RAW_CHECK(max_stack_depth <= kMaxStackDepth,
190 "need to increase kMaxStackDepth?"); 198 "need to increase kMaxStackDepth?");
191 Lock(); 199 Lock();
192 client_count_ += 1; 200 client_count_ += 1;
193 max_stack_depth_ = max(max_stack_depth_, max_stack_depth); 201 max_stack_depth_ = max(max_stack_depth_, max_stack_depth);
194 if (client_count_ > 1) { 202 if (client_count_ > 1) {
195 // not first client: already did initialization-proper 203 // not first client: already did initialization-proper
(...skipping 11 matching lines...)
207 // recursive_insert allows us to buffer info about these mmap calls. 215 // recursive_insert allows us to buffer info about these mmap calls.
208 // Note that Init() can be (and is) sometimes called 216 // Note that Init() can be (and is) sometimes called
209 // already from within an mmap/sbrk hook. 217 // already from within an mmap/sbrk hook.
210 recursive_insert = true; 218 recursive_insert = true;
211 arena_ = LowLevelAlloc::NewArena(0, LowLevelAlloc::DefaultArena()); 219 arena_ = LowLevelAlloc::NewArena(0, LowLevelAlloc::DefaultArena());
212 recursive_insert = false; 220 recursive_insert = false;
213 HandleSavedRegionsLocked(&InsertRegionLocked); // flush the buffered ones 221 HandleSavedRegionsLocked(&InsertRegionLocked); // flush the buffered ones
214 // Can't instead use HandleSavedRegionsLocked(&DoInsertRegionLocked) before 222 // Can't instead use HandleSavedRegionsLocked(&DoInsertRegionLocked) before
215 // recursive_insert = false; as InsertRegionLocked will also construct 223 // recursive_insert = false; as InsertRegionLocked will also construct
216 // regions_ on demand for us. 224 // regions_ on demand for us.
225 if (use_buckets) {
226 const int table_bytes = kHashTableSize * sizeof(*bucket_table_);
227 recursive_insert = true;
228 bucket_table_ = static_cast<HeapProfileBucket**>(
229 MyAllocator::Allocate(table_bytes));
230 recursive_insert = false;
231 memset(bucket_table_, 0, table_bytes);
232 num_buckets_ = 0;
233 }
217 Unlock(); 234 Unlock();
218 RAW_VLOG(10, "MemoryRegionMap Init done"); 235 RAW_VLOG(10, "MemoryRegionMap Init done");
219 } 236 }
220 237
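A hypothetical caller of the new two-argument Init() might look like the sketch below; the function and constant names are illustrative, not part of this patch.

  void StartMappingProfiler() {
    static const int kProfilerStackDepth = 32;    // must not exceed kMaxStackDepth
    // The new second argument turns on the per-stack bucket accounting.
    MemoryRegionMap::Init(kProfilerStackDepth, /* use_buckets */ true);
  }

  void StopMappingProfiler() {
    // Last client out tears down regions_, bucket_table_ and, if it is no
    // longer in use, the LowLevelAlloc arena.
    MemoryRegionMap::Shutdown();
  }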
221 bool MemoryRegionMap::Shutdown() { 238 bool MemoryRegionMap::Shutdown() {
222 RAW_VLOG(10, "MemoryRegionMap Shutdown"); 239 RAW_VLOG(10, "MemoryRegionMap Shutdown");
223 Lock(); 240 Lock();
224 RAW_CHECK(client_count_ > 0, ""); 241 RAW_CHECK(client_count_ > 0, "");
225 client_count_ -= 1; 242 client_count_ -= 1;
226 if (client_count_ != 0) { // not last client; need not really shutdown 243 if (client_count_ != 0) { // not last client; need not really shutdown
227 Unlock(); 244 Unlock();
228 RAW_VLOG(10, "MemoryRegionMap Shutdown decrement done"); 245 RAW_VLOG(10, "MemoryRegionMap Shutdown decrement done");
229 return true; 246 return true;
230 } 247 }
248 if (bucket_table_ != NULL) {
249 for (int i = 0; i < kHashTableSize; i++) {
250 for (HeapProfileBucket* curr = bucket_table_[i]; curr != 0; /**/) {
251 HeapProfileBucket* bucket = curr;
252 curr = curr->next;
253 MyAllocator::Free(bucket->stack, 0);
254 MyAllocator::Free(bucket, 0);
255 }
256 }
257 MyAllocator::Free(bucket_table_, 0);
258 num_buckets_ = 0;
259 bucket_table_ = NULL;
260 }
231 RAW_CHECK(MallocHook::RemoveMmapHook(&MmapHook), ""); 261 RAW_CHECK(MallocHook::RemoveMmapHook(&MmapHook), "");
232 RAW_CHECK(MallocHook::RemoveMremapHook(&MremapHook), ""); 262 RAW_CHECK(MallocHook::RemoveMremapHook(&MremapHook), "");
233 RAW_CHECK(MallocHook::RemoveSbrkHook(&SbrkHook), ""); 263 RAW_CHECK(MallocHook::RemoveSbrkHook(&SbrkHook), "");
234 RAW_CHECK(MallocHook::RemoveMunmapHook(&MunmapHook), ""); 264 RAW_CHECK(MallocHook::RemoveMunmapHook(&MunmapHook), "");
235 if (regions_) regions_->~RegionSet(); 265 if (regions_) regions_->~RegionSet();
236 regions_ = NULL; 266 regions_ = NULL;
237 bool deleted_arena = LowLevelAlloc::DeleteArena(arena_); 267 bool deleted_arena = LowLevelAlloc::DeleteArena(arena_);
238 if (deleted_arena) { 268 if (deleted_arena) {
239 arena_ = 0; 269 arena_ = 0;
240 } else { 270 } else {
241 RAW_LOG(WARNING, "Can't delete LowLevelAlloc arena: it's being used"); 271 RAW_LOG(WARNING, "Can't delete LowLevelAlloc arena: it's being used");
242 } 272 }
243 Unlock(); 273 Unlock();
244 RAW_VLOG(10, "MemoryRegionMap Shutdown done"); 274 RAW_VLOG(10, "MemoryRegionMap Shutdown done");
245 return deleted_arena; 275 return deleted_arena;
246 } 276 }
247 277
278 bool MemoryRegionMap::IsRecordingLocked() {
279 RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
280 return client_count_ > 0;
281 }
282
248 // Invariants (once libpthread_initialized is true): 283 // Invariants (once libpthread_initialized is true):
249 // * While lock_ is not held, recursion_count_ is 0 (and 284 // * While lock_ is not held, recursion_count_ is 0 (and
250 // lock_owner_tid_ is the previous owner, but we don't rely on 285 // lock_owner_tid_ is the previous owner, but we don't rely on
251 // that). 286 // that).
252 // * recursion_count_ and lock_owner_tid_ are only written while 287 // * recursion_count_ and lock_owner_tid_ are only written while
253 // both lock_ and owner_lock_ are held. They may be read under 288 // both lock_ and owner_lock_ are held. They may be read under
254 // just owner_lock_. 289 // just owner_lock_.
255 // * At entry and exit of Lock() and Unlock(), the current thread 290 // * At entry and exit of Lock() and Unlock(), the current thread
256 // owns lock_ iff pthread_equal(lock_owner_tid_, pthread_self()) 291 // owns lock_ iff pthread_equal(lock_owner_tid_, pthread_self())
257 // && recursion_count_ > 0. 292 // && recursion_count_ > 0.
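A minimal sketch of the write-recursive locking these invariants describe, built from the same ingredients (a data lock, an owner lock, an owner tid and a recursion count). Names are local to the example, the RAW_CHECK verification of the real code is omitted, and the actual Lock()/Unlock() bodies are in the lines elided from this diff.

  static SpinLock sketch_lock(SpinLock::LINKER_INITIALIZED);
  static SpinLock sketch_owner_lock(SpinLock::LINKER_INITIALIZED);  // ACQUIRED_AFTER(sketch_lock)
  static int sketch_recursion_count = 0;        // GUARDED_BY(sketch_owner_lock)
  static pthread_t sketch_owner_tid;            // GUARDED_BY(sketch_owner_lock)

  static void RecursiveLock() {
    {
      SpinLockHolder l(&sketch_owner_lock);
      if (sketch_recursion_count > 0 && current_thread_is(sketch_owner_tid)) {
        ++sketch_recursion_count;               // this thread already holds sketch_lock
        return;
      }
    }
    sketch_lock.Lock();                         // first acquisition by this thread
    SpinLockHolder l(&sketch_owner_lock);
    sketch_owner_tid = pthread_self();
    sketch_recursion_count = 1;
  }

  static void RecursiveUnlock() {
    SpinLockHolder l(&sketch_owner_lock);
    if (--sketch_recursion_count == 0)
      sketch_lock.Unlock();
  }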
(...skipping 71 matching lines...)
329 reinterpret_cast<void*>(region->start_addr), 364 reinterpret_cast<void*>(region->start_addr),
330 reinterpret_cast<void*>(region->end_addr)); 365 reinterpret_cast<void*>(region->end_addr));
331 const_cast<Region*>(region)->set_is_stack(); // now we know 366 const_cast<Region*>(region)->set_is_stack(); // now we know
332 // cast is safe (set_is_stack does not change the set ordering key) 367 // cast is safe (set_is_stack does not change the set ordering key)
333 *result = *region; // create *result as an independent copy 368 *result = *region; // create *result as an independent copy
334 } 369 }
335 Unlock(); 370 Unlock();
336 return region != NULL; 371 return region != NULL;
337 } 372 }
338 373
374 HeapProfileBucket* MemoryRegionMap::GetBucket(int depth,
375 const void* const key[]) {
376 RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
377 // Make hash-value
378 uintptr_t hash = 0;
379 for (int i = 0; i < depth; i++) {
380 hash += reinterpret_cast<uintptr_t>(key[i]);
381 hash += hash << 10;
382 hash ^= hash >> 6;
383 }
384 hash += hash << 3;
385 hash ^= hash >> 11;
386
387 // Lookup stack trace in table
388 unsigned int hash_index = (static_cast<unsigned int>(hash)) % kHashTableSize;
389 for (HeapProfileBucket* bucket = bucket_table_[hash_index];
390 bucket != 0;
391 bucket = bucket->next) {
392 if ((bucket->hash == hash) && (bucket->depth == depth) &&
393 std::equal(key, key + depth, bucket->stack)) {
394 return bucket;
395 }
396 }
397
398 // Create new bucket
399 const size_t key_size = sizeof(key[0]) * depth;
400 HeapProfileBucket* bucket;
401 if (recursive_insert) { // recursion: save in saved_buckets_
402 const void** key_copy = saved_buckets_keys_[saved_buckets_count_];
403 std::copy(key, key + depth, key_copy);
404 bucket = &saved_buckets_[saved_buckets_count_];
405 memset(bucket, 0, sizeof(*bucket));
406 ++saved_buckets_count_;
407 bucket->stack = key_copy;
408 bucket->next = NULL;
409 } else {
410 recursive_insert = true;
411 const void** key_copy = static_cast<const void**>(
412 MyAllocator::Allocate(key_size));
413 recursive_insert = false;
414 std::copy(key, key + depth, key_copy);
415 recursive_insert = true;
416 bucket = static_cast<HeapProfileBucket*>(
417 MyAllocator::Allocate(sizeof(HeapProfileBucket)));
418 recursive_insert = false;
419 memset(bucket, 0, sizeof(*bucket));
420 bucket->stack = key_copy;
421 bucket->next = bucket_table_[hash_index];
422 }
423 bucket->hash = hash;
424 bucket->depth = depth;
425 bucket_table_[hash_index] = bucket;
426 ++num_buckets_;
427 return bucket;
428 }
429
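The mixing in GetBucket() is essentially the classic one-at-a-time hash applied to the raw PC values. Pulled out as a stand-alone helper (illustrative only, not part of the class) it reads:

  #include <cstdint>

  uintptr_t HashStackTrace(const void* const key[], int depth) {
    uintptr_t hash = 0;
    for (int i = 0; i < depth; i++) {
      hash += reinterpret_cast<uintptr_t>(key[i]);   // fold in each PC
      hash += hash << 10;
      hash ^= hash >> 6;
    }
    hash += hash << 3;
    hash ^= hash >> 11;
    return hash;
  }
  // A chain is then selected with: static_cast<unsigned int>(hash) % kHashTableSize.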
339 MemoryRegionMap::RegionIterator MemoryRegionMap::BeginRegionLocked() { 430 MemoryRegionMap::RegionIterator MemoryRegionMap::BeginRegionLocked() {
340 RAW_CHECK(LockIsHeld(), "should be held (by this thread)"); 431 RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
341 RAW_CHECK(regions_ != NULL, ""); 432 RAW_CHECK(regions_ != NULL, "");
342 return regions_->begin(); 433 return regions_->begin();
343 } 434 }
344 435
345 MemoryRegionMap::RegionIterator MemoryRegionMap::EndRegionLocked() { 436 MemoryRegionMap::RegionIterator MemoryRegionMap::EndRegionLocked() {
346 RAW_CHECK(LockIsHeld(), "should be held (by this thread)"); 437 RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
347 RAW_CHECK(regions_ != NULL, ""); 438 RAW_CHECK(regions_ != NULL, "");
348 return regions_->end(); 439 return regions_->end();
(...skipping 48 matching lines...)
397 while (saved_regions_count > 0) { 488 while (saved_regions_count > 0) {
398 // Making a local-var copy of the region argument to insert_func 489 // Making a local-var copy of the region argument to insert_func
399 // including its stack (w/o doing any memory allocations) is important: 490 // including its stack (w/o doing any memory allocations) is important:
400 // in many cases the memory in saved_regions 491 // in many cases the memory in saved_regions
401 // will get written-to during the (*insert_func)(r) call below. 492 // will get written-to during the (*insert_func)(r) call below.
402 Region r = saved_regions[--saved_regions_count]; 493 Region r = saved_regions[--saved_regions_count];
403 (*insert_func)(r); 494 (*insert_func)(r);
404 } 495 }
405 } 496 }
406 497
498 void MemoryRegionMap::RestoreSavedBucketsLocked() {
499 RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
500 while (saved_buckets_count_ > 0) {
501 HeapProfileBucket bucket = saved_buckets_[--saved_buckets_count_];
502 unsigned int hash_index =
503 static_cast<unsigned int>(bucket.hash) % kHashTableSize;
504 bool is_found = false;
505 for (HeapProfileBucket* curr = bucket_table_[hash_index];
506 curr != 0;
507 curr = curr->next) {
508 if ((curr->hash == bucket.hash) && (curr->depth == bucket.depth) &&
509 std::equal(bucket.stack, bucket.stack + bucket.depth, curr->stack)) {
510 curr->allocs += bucket.allocs;
511 curr->alloc_size += bucket.alloc_size;
512 curr->frees += bucket.frees;
513 curr->free_size += bucket.free_size;
514 is_found = true;
515 break;
516 }
517 }
518 if (is_found) continue;
519
520 const size_t key_size = sizeof(bucket.stack[0]) * bucket.depth;
521 const void** key_copy = static_cast<const void**>(
522 MyAllocator::Allocate(key_size));
523 std::copy(bucket.stack, bucket.stack + bucket.depth, key_copy);
524 HeapProfileBucket* new_bucket = static_cast<HeapProfileBucket*>(
525 MyAllocator::Allocate(sizeof(HeapProfileBucket)));
526 memset(new_bucket, 0, sizeof(*new_bucket));
527 new_bucket->hash = bucket.hash;
528 new_bucket->depth = bucket.depth;
529 new_bucket->stack = key_copy;
530 new_bucket->next = bucket_table_[hash_index];
531 bucket_table_[hash_index] = new_bucket;
532 ++num_buckets_;
533 }
534 }
535
407 inline void MemoryRegionMap::InsertRegionLocked(const Region& region) { 536 inline void MemoryRegionMap::InsertRegionLocked(const Region& region) {
408 RAW_CHECK(LockIsHeld(), "should be held (by this thread)"); 537 RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
409 // We can be called recursively, because RegionSet constructor 538 // We can be called recursively, because RegionSet constructor
410 // and DoInsertRegionLocked() (called below) can call the allocator. 539 // and DoInsertRegionLocked() (called below) can call the allocator.
411 // recursive_insert tells us if that's the case. When this happens, 540 // recursive_insert tells us if that's the case. When this happens,
412 // region insertion information is recorded in saved_regions[], 541 // region insertion information is recorded in saved_regions[],
413 // and taken into account when the recursion unwinds. 542 // and taken into account when the recursion unwinds.
414 // Do the insert: 543 // Do the insert:
415 if (recursive_insert) { // recursion: save in saved_regions 544 if (recursive_insert) { // recursion: save in saved_regions
416 RAW_VLOG(12, "Saving recursive insert of region %p..%p from %p", 545 RAW_VLOG(12, "Saving recursive insert of region %p..%p from %p",
(...skipping 44 matching lines...)
461 RAW_VLOG(10, "New global region %p..%p from %p", 590 RAW_VLOG(10, "New global region %p..%p from %p",
462 reinterpret_cast<void*>(region.start_addr), 591 reinterpret_cast<void*>(region.start_addr),
463 reinterpret_cast<void*>(region.end_addr), 592 reinterpret_cast<void*>(region.end_addr),
464 reinterpret_cast<void*>(region.caller())); 593 reinterpret_cast<void*>(region.caller()));
465 // Note: none of the above allocates memory. 594 // Note: none of the above allocates memory.
466 Lock(); // recursively lock 595 Lock(); // recursively lock
467 map_size_ += size; 596 map_size_ += size;
468 InsertRegionLocked(region); 597 InsertRegionLocked(region);
469 // This will (eventually) allocate storage for and copy over the stack data 598 // This will (eventually) allocate storage for and copy over the stack data
470 // from region.call_stack_data_ that is pointed by region.call_stack(). 599 // from region.call_stack_data_ that is pointed by region.call_stack().
600 if (bucket_table_ != NULL) {
601 HeapProfileBucket* b = GetBucket(depth, region.call_stack);
602 ++b->allocs;
603 b->alloc_size += size;
604 if (!recursive_insert) {
605 recursive_insert = true;
606 RestoreSavedBucketsLocked();
607 recursive_insert = false;
608 }
609 }
471 Unlock(); 610 Unlock();
472 } 611 }
473 612
474 void MemoryRegionMap::RecordRegionRemoval(const void* start, size_t size) { 613 void MemoryRegionMap::RecordRegionRemoval(const void* start, size_t size) {
475 Lock(); 614 Lock();
476 if (recursive_insert) { 615 if (recursive_insert) {
477 // First remove the removed region from saved_regions, if it's 616 // First remove the removed region from saved_regions, if it's
478 // there, to prevent overrunning saved_regions in recursive 617 // there, to prevent overrunning saved_regions in recursive
479 // map/unmap call sequences, and also from later inserting regions 618 // map/unmap call sequences, and also from later inserting regions
480 // which have already been unmapped. 619 // which have already been unmapped.
481 uintptr_t start_addr = reinterpret_cast<uintptr_t>(start); 620 uintptr_t start_addr = reinterpret_cast<uintptr_t>(start);
482 uintptr_t end_addr = start_addr + size; 621 uintptr_t end_addr = start_addr + size;
483 int put_pos = 0; 622 int put_pos = 0;
484 int old_count = saved_regions_count; 623 int old_count = saved_regions_count;
485 for (int i = 0; i < old_count; ++i, ++put_pos) { 624 for (int i = 0; i < old_count; ++i, ++put_pos) {
486 Region& r = saved_regions[i]; 625 Region& r = saved_regions[i];
487 if (r.start_addr == start_addr && r.end_addr == end_addr) { 626 if (r.start_addr == start_addr && r.end_addr == end_addr) {
488 // An exact match, so it's safe to remove. 627 // An exact match, so it's safe to remove.
628 RecordRegionRemovalInBucket(r.call_stack_depth, r.call_stack, size);
489 --saved_regions_count; 629 --saved_regions_count;
490 --put_pos; 630 --put_pos;
491 RAW_VLOG(10, ("Insta-Removing saved region %p..%p; " 631 RAW_VLOG(10, ("Insta-Removing saved region %p..%p; "
492 "now have %d saved regions"), 632 "now have %d saved regions"),
493 reinterpret_cast<void*>(start_addr), 633 reinterpret_cast<void*>(start_addr),
494 reinterpret_cast<void*>(end_addr), 634 reinterpret_cast<void*>(end_addr),
495 saved_regions_count); 635 saved_regions_count);
496 } else { 636 } else {
497 if (put_pos < i) { 637 if (put_pos < i) {
498 saved_regions[put_pos] = saved_regions[i]; 638 saved_regions[put_pos] = saved_regions[i];
(...skipping 24 matching lines...)
523 region != regions_->end() && region->start_addr < end_addr; 663 region != regions_->end() && region->start_addr < end_addr;
524 /*noop*/) { 664 /*noop*/) {
525 RAW_VLOG(13, "Looking at region %p..%p", 665 RAW_VLOG(13, "Looking at region %p..%p",
526 reinterpret_cast<void*>(region->start_addr), 666 reinterpret_cast<void*>(region->start_addr),
527 reinterpret_cast<void*>(region->end_addr)); 667 reinterpret_cast<void*>(region->end_addr));
528 if (start_addr <= region->start_addr && 668 if (start_addr <= region->start_addr &&
529 region->end_addr <= end_addr) { // full deletion 669 region->end_addr <= end_addr) { // full deletion
530 RAW_VLOG(12, "Deleting region %p..%p", 670 RAW_VLOG(12, "Deleting region %p..%p",
531 reinterpret_cast<void*>(region->start_addr), 671 reinterpret_cast<void*>(region->start_addr),
532 reinterpret_cast<void*>(region->end_addr)); 672 reinterpret_cast<void*>(region->end_addr));
673 RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
674 region->end_addr - region->start_addr);
533 RegionSet::iterator d = region; 675 RegionSet::iterator d = region;
534 ++region; 676 ++region;
535 regions_->erase(d); 677 regions_->erase(d);
536 continue; 678 continue;
537 } else if (region->start_addr < start_addr && 679 } else if (region->start_addr < start_addr &&
538 end_addr < region->end_addr) { // cutting-out split 680 end_addr < region->end_addr) { // cutting-out split
539 RAW_VLOG(12, "Splitting region %p..%p in two", 681 RAW_VLOG(12, "Splitting region %p..%p in two",
540 reinterpret_cast<void*>(region->start_addr), 682 reinterpret_cast<void*>(region->start_addr),
541 reinterpret_cast<void*>(region->end_addr)); 683 reinterpret_cast<void*>(region->end_addr));
684 RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
685 end_addr - start_addr);
542 // Make another region for the start portion: 686 // Make another region for the start portion:
543 // The new region has to be the start portion because we can't 687 // The new region has to be the start portion because we can't
544 // just modify region->end_addr as it's the sorting key. 688 // just modify region->end_addr as it's the sorting key.
545 Region r = *region; 689 Region r = *region;
546 r.set_end_addr(start_addr); 690 r.set_end_addr(start_addr);
547 InsertRegionLocked(r); 691 InsertRegionLocked(r);
548 // cut *region from start: 692 // cut *region from start:
549 const_cast<Region&>(*region).set_start_addr(end_addr); 693 const_cast<Region&>(*region).set_start_addr(end_addr);
550 } else if (end_addr > region->start_addr && 694 } else if (end_addr > region->start_addr &&
551 start_addr <= region->start_addr) { // cut from start 695 start_addr <= region->start_addr) { // cut from start
552 RAW_VLOG(12, "Start-chopping region %p..%p", 696 RAW_VLOG(12, "Start-chopping region %p..%p",
553 reinterpret_cast<void*>(region->start_addr), 697 reinterpret_cast<void*>(region->start_addr),
554 reinterpret_cast<void*>(region->end_addr)); 698 reinterpret_cast<void*>(region->end_addr));
699 RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
700 end_addr - region->start_addr);
555 const_cast<Region&>(*region).set_start_addr(end_addr); 701 const_cast<Region&>(*region).set_start_addr(end_addr);
556 } else if (start_addr > region->start_addr && 702 } else if (start_addr > region->start_addr &&
557 start_addr < region->end_addr) { // cut from end 703 start_addr < region->end_addr) { // cut from end
558 RAW_VLOG(12, "End-chopping region %p..%p", 704 RAW_VLOG(12, "End-chopping region %p..%p",
559 reinterpret_cast<void*>(region->start_addr), 705 reinterpret_cast<void*>(region->start_addr),
560 reinterpret_cast<void*>(region->end_addr)); 706 reinterpret_cast<void*>(region->end_addr));
707 RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
708 region->end_addr - start_addr);
561 // Can't just modify region->end_addr (it's the sorting key): 709 // Can't just modify region->end_addr (it's the sorting key):
562 Region r = *region; 710 Region r = *region;
563 r.set_end_addr(start_addr); 711 r.set_end_addr(start_addr);
564 RegionSet::iterator d = region; 712 RegionSet::iterator d = region;
565 ++region; 713 ++region;
566 // It's safe to erase before inserting since r is independent of *d: 714 // It's safe to erase before inserting since r is independent of *d:
567 // r contains an own copy of the call stack: 715 // r contains an own copy of the call stack:
568 regions_->erase(d); 716 regions_->erase(d);
569 InsertRegionLocked(r); 717 InsertRegionLocked(r);
570 continue; 718 continue;
571 } 719 }
572 ++region; 720 ++region;
573 } 721 }
574 RAW_VLOG(12, "Removed region %p..%p; have %"PRIuS" regions", 722 RAW_VLOG(12, "Removed region %p..%p; have %"PRIuS" regions",
575 reinterpret_cast<void*>(start_addr), 723 reinterpret_cast<void*>(start_addr),
576 reinterpret_cast<void*>(end_addr), 724 reinterpret_cast<void*>(end_addr),
577 regions_->size()); 725 regions_->size());
578 if (VLOG_IS_ON(12)) LogAllLocked(); 726 if (VLOG_IS_ON(12)) LogAllLocked();
579 unmap_size_ += size; 727 unmap_size_ += size;
580 Unlock(); 728 Unlock();
581 } 729 }
582 730
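The four cases in the loop above (full deletion, cutting-out split, cut from start, cut from end) are ordinary interval subtraction. The stand-alone sketch below shows just that geometry; it is a hypothetical helper that ignores the RegionSet sorting key and the bucket accounting the real code must also maintain.

  #include <stdint.h>

  struct Interval { uintptr_t start, end; };    // half-open [start, end)

  static Interval MakeInterval(uintptr_t s, uintptr_t e) {
    Interval i; i.start = s; i.end = e; return i;
  }

  // Subtract [start, end) from r; writes the surviving piece(s) of r into out
  // and returns how many there are (0, 1 or 2).
  static int SubtractInterval(const Interval& r, uintptr_t start, uintptr_t end,
                              Interval* out) {
    if (start <= r.start && r.end <= end) {            // full deletion
      return 0;
    } else if (r.start < start && end < r.end) {       // cutting-out split
      out[0] = MakeInterval(r.start, start);
      out[1] = MakeInterval(end, r.end);
      return 2;
    } else if (start <= r.start && end > r.start) {    // cut from start
      out[0] = MakeInterval(end, r.end);
      return 1;
    } else if (start > r.start && start < r.end) {     // cut from end
      out[0] = MakeInterval(r.start, start);
      return 1;
    }
    out[0] = r;                                        // no overlap: unchanged
    return 1;
  }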
731 void MemoryRegionMap::RecordRegionRemovalInBucket(int depth,
732 const void* const stack[],
733 size_t size) {
734 RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
735 if (bucket_table_ == NULL) return;
736 HeapProfileBucket* b = GetBucket(depth, stack);
737 ++b->frees;
738 b->free_size += size;
739 }
740
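Together with the allocs/alloc_size increments in RecordRegionAddition(), these frees/free_size increments keep the per-stack counters symmetric. The hypothetical helpers below (not part of this patch) illustrate what that symmetry lets a profiler report for one bucket:

  int64 NetMappedBytes(const HeapProfileBucket& bucket) {
    return bucket.alloc_size - bucket.free_size;    // bytes still mapped
  }

  int64 NetLiveMappings(const HeapProfileBucket& bucket) {
    return bucket.allocs - bucket.frees;            // mappings not yet unmapped
  }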
583 void MemoryRegionMap::MmapHook(const void* result, 741 void MemoryRegionMap::MmapHook(const void* result,
584 const void* start, size_t size, 742 const void* start, size_t size,
585 int prot, int flags, 743 int prot, int flags,
586 int fd, off_t offset) { 744 int fd, off_t offset) {
587 // TODO(maxim): replace all 0x%"PRIxS" by %p when RAW_VLOG uses a safe 745 // TODO(maxim): replace all 0x%"PRIxS" by %p when RAW_VLOG uses a safe
588 // snprintf reimplementation that does not malloc to pretty-print NULL 746 // snprintf reimplementation that does not malloc to pretty-print NULL
589 RAW_VLOG(10, "MMap = 0x%"PRIxPTR" of %"PRIuS" at %"PRIu64" " 747 RAW_VLOG(10, "MMap = 0x%"PRIxPTR" of %"PRIuS" at %"PRIu64" "
590 "prot %d flags %d fd %d offs %"PRId64, 748 "prot %d flags %d fd %d offs %"PRId64,
591 reinterpret_cast<uintptr_t>(result), size, 749 reinterpret_cast<uintptr_t>(result), size,
592 reinterpret_cast<uint64>(start), prot, flags, fd, 750 reinterpret_cast<uint64>(start), prot, flags, fd,
(...skipping 50 matching lines...)
643 r != regions_->end(); ++r) { 801 r != regions_->end(); ++r) {
644 RAW_LOG(INFO, "Memory region 0x%"PRIxPTR"..0x%"PRIxPTR" " 802 RAW_LOG(INFO, "Memory region 0x%"PRIxPTR"..0x%"PRIxPTR" "
645 "from 0x%"PRIxPTR" stack=%d", 803 "from 0x%"PRIxPTR" stack=%d",
646 r->start_addr, r->end_addr, r->caller(), r->is_stack); 804 r->start_addr, r->end_addr, r->caller(), r->is_stack);
647 RAW_CHECK(previous < r->end_addr, "wow, we messed up the set order"); 805 RAW_CHECK(previous < r->end_addr, "wow, we messed up the set order");
648 // this must be caused by uncontrolled recursive operations on regions_ 806 // this must be caused by uncontrolled recursive operations on regions_
649 previous = r->end_addr; 807 previous = r->end_addr;
650 } 808 }
651 RAW_LOG(INFO, "End of regions list"); 809 RAW_LOG(INFO, "End of regions list");
652 } 810 }