Chromium Code Reviews

Unified Diff: base/trace_event/heap_profiler_heap_dump_writer.cc

Issue 1859143003: Handle zero-size allocations properly in heap profiler (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Created 4 years, 8 months ago
 // Copyright 2015 The Chromium Authors. All rights reserved.
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file.
 
 #include "base/trace_event/heap_profiler_heap_dump_writer.h"
 
 #include <stdint.h>
 
 #include <algorithm>
 #include <iterator>
(...skipping 111 matching lines...)
 
   // Ensure that |buckets| is a max-heap (the data structure, not memory heap),
   // so its front contains the largest bucket. Buckets should be iterated
   // ordered by size, but sorting the vector is overkill because the long tail
   // of small buckets will be discarded. By using a max-heap, the optimal case
   // where all but the first bucket are discarded is O(n). The worst case where
   // no bucket is discarded is doing a heap sort, which is O(n log n).
   std::make_heap(buckets.begin(), buckets.end());
 
   // Keep including buckets until adding one would increase the number of
-  // bytes accounted for by less than 0.8 percent. This simple heuristic works
+  // bytes accounted for by less than 0.8% (1/125). This simple heuristic works
   // quite well. The large buckets end up in [it, end()), [begin(), it) is the
   // part that contains the max-heap of small buckets.
   size_t accounted_for = 0;
   std::vector<Bucket>::iterator it;
   for (it = buckets.end(); it != buckets.begin(); --it) {
-    // Compute the contribution to the number of bytes accounted for as a
-    // fraction of 125 (in increments of 0.8 percent). Anything less than 1/125
-    // is rounded down to 0 due to integer division. Buckets are iterated by
-    // descending size, so later buckets cannot have a larger contribution than
-    // this one.
     accounted_for += buckets.front().size;
-    size_t contribution = buckets.front().size * 125 / accounted_for;
-    if (contribution == 0)
+    if (buckets.front().size < (accounted_for / 125))
       break;
 
     // Put the largest bucket in [begin, it) at |it - 1| and max-heapify
     // [begin, it - 1). This puts the next largest bucket at |buckets.front()|.
     std::pop_heap(buckets.begin(), it);
   }
 
   // At this point, |buckets| looks like this (numbers are bucket sizes):
   //
   // <-- max-heap of small buckets --->
(...skipping 71 matching lines...)
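Note: the rewritten cut-off is what makes zero-size allocations safe here. With the old code, a heap consisting only of zero-size buckets left |accounted_for| at zero, so "buckets.front().size * 125 / accounted_for" divided by zero. The new comparison only ever divides by the constant 125, and for a zero-size bucket it evaluates to "0 < 0", which is false, so iteration simply continues. A minimal standalone sketch of the same select-the-large-buckets pattern (plain size_t values stand in for Bucket; SelectLargeBuckets is an illustrative name, not code from this CL):

#include <algorithm>
#include <cstddef>
#include <vector>

// Moves the large values of |sizes| to the tail and returns an iterator to
// the first of them, mirroring the max-heap cut-off above: values are taken
// largest-first until the next one would add less than 1/125 (0.8%) of the
// bytes accounted for so far.
std::vector<size_t>::iterator SelectLargeBuckets(std::vector<size_t>* sizes) {
  std::make_heap(sizes->begin(), sizes->end());
  size_t accounted_for = 0;
  std::vector<size_t>::iterator it;
  for (it = sizes->end(); it != sizes->begin(); --it) {
    accounted_for += sizes->front();
    // The divisor is the constant 125, never |accounted_for|, so all-zero
    // input cannot divide by zero; 0 < 0 is false and the loop just runs on.
    if (sizes->front() < accounted_for / 125)
      break;
    // Move the current maximum to |it - 1| and re-heapify [begin, it - 1).
    std::pop_heap(sizes->begin(), it);
  }
  return it;  // The selected large values now live in [it, end()).
}

For example, on {1000, 7, 1} only 1000 makes the cut: once it is accounted for, the next-largest value 7 is below 1007 / 125 = 8, so the loop breaks.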
     BreakDown(subbucket);
 }
 
 const std::set<Entry>& HeapDumpWriter::Summarize(
     const hash_map<AllocationContext, size_t>& bytes_by_context) {
   // Start with one bucket that represents the entire heap. Iterate by
   // reference, because the allocation contexts are going to point to allocation
   // contexts stored in |bytes_by_context|.
   Bucket root_bucket;
   for (const auto& context_and_size : bytes_by_context) {
+    DCHECK_GT(context_and_size.second, 0u);
     const AllocationContext* context = &context_and_size.first;
     const size_t size = context_and_size.second;
     root_bucket.bytes_by_context.push_back(std::make_pair(context, size));
     root_bucket.size += size;
   }
 
   AddEntryForBucket(root_bucket);
 
   // Recursively break down the heap and fill |entries_| with entries to dump.
   BreakDown(root_bucket);
(...skipping 46 matching lines...)
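Note: the new DCHECK_GT documents the precondition this CL introduces: by the time Summarize() runs, no context in |bytes_by_context| may map to zero bytes, so zero-size allocations have to be dropped (or netted out) by whichever step builds that map. A hypothetical sketch of such a filtering step, assuming nothing about the real caller (std::unordered_map stands in for base::hash_map, and DropEmptyContexts is an illustrative name, not code from this CL):

#include <cstddef>
#include <unordered_map>

// Returns a copy of |bytes_by_context| without the contexts whose
// allocations net out to zero bytes, which is the invariant the new
// DCHECK_GT in Summarize() asserts.
template <typename Context>
std::unordered_map<Context, size_t> DropEmptyContexts(
    const std::unordered_map<Context, size_t>& bytes_by_context) {
  std::unordered_map<Context, size_t> filtered;
  for (const auto& context_and_size : bytes_by_context) {
    if (context_and_size.second > 0)  // Skip zero-byte contexts.
      filtered.insert(context_and_size);
  }
  return filtered;
}

Filtering up front keeps the writer itself simple: Summarize() and BreakDown() can assume every bucket has a positive size, and the DCHECK turns a silent accounting bug into a loud failure in debug builds.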
     const hash_map<AllocationContext, size_t>& bytes_by_size,
     StackFrameDeduplicator* stack_frame_deduplicator,
     TypeNameDeduplicator* type_name_deduplicator) {
   internal::HeapDumpWriter writer(stack_frame_deduplicator,
                                   type_name_deduplicator);
   return Serialize(writer.Summarize(bytes_by_size));
 }
 
 }  // namespace trace_event
 }  // namespace base
