Chromium Code Reviews

Side by Side Diff: base/trace_event/heap_profiler_heap_dump_writer.cc

Issue 1859143003: Handle zero-size allocations properly in heap profiler (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Created 4 years, 8 months ago
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "base/trace_event/heap_profiler_heap_dump_writer.h"

#include <stdint.h>

#include <algorithm>
#include <iterator>
(...skipping 117 matching lines...)
  // no bucket is discarded is doing a heap sort, which is O(n log n).
  std::make_heap(buckets.begin(), buckets.end());

  // Keep including buckets until adding one would increase the number of
  // bytes accounted for by less than 0.8 percent. This simple heuristic works
  // quite well. The large buckets end up in [it, end()), [begin(), it) is the
  // part that contains the max-heap of small buckets.
  size_t accounted_for = 0;
  std::vector<Bucket>::iterator it;
  for (it = buckets.end(); it != buckets.begin(); --it) {
-    // Compute the contribution to the number of bytes accounted for as a
-    // fraction of 125 (in increments of 0.8 percent). Anything less than 1/125
-    // is rounded down to 0 due to integer division. Buckets are iterated by
-    // descending size, so later buckets cannot have a larger contribution than
-    // this one.
+    // Compute the total number of bytes accounted for so far. If the next
+    // bucket increases this amount by less than 0.8 percent (1/125), skip it
+    // and the remaining tail.
Primiano Tucci (use gerrit) 2016/04/06 08:20:44 I'd add a "(1/125)" after 0.8% otherwise It's not
    accounted_for += buckets.front().size;
-    size_t contribution = buckets.front().size * 125 / accounted_for;
-    if (contribution == 0)
+    if (buckets.front().size < (accounted_for / 125))
      break;

    // Put the largest bucket in [begin, it) at |it - 1| and max-heapify
    // [begin, it - 1). This puts the next largest bucket at |buckets.front()|.
    std::pop_heap(buckets.begin(), it);
  }
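The pruning loop above can be sketched in isolation. This is a hypothetical standalone version (the name `CountSignificantSizes` and the plain `size_t` sizes are illustration only, not from the patch): pop the largest remaining size off a max-heap until the next size would grow the running total by less than 0.8 percent (1/125), and report how many sizes survived the cut-off.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the pruning heuristic: keep taking the largest
// remaining size from a max-heap until the next one would increase the
// accounted-for total by less than 1/125. Returns how many sizes were kept.
size_t CountSignificantSizes(std::vector<size_t> sizes) {
  std::make_heap(sizes.begin(), sizes.end());
  size_t accounted_for = 0;
  size_t kept = 0;
  for (auto it = sizes.end(); it != sizes.begin(); --it) {
    accounted_for += sizes.front();
    if (sizes.front() < accounted_for / 125)
      break;
    ++kept;
    // Move the current maximum out of the heap region [begin, it) to |it - 1|,
    // leaving the next largest size at |sizes.front()|.
    std::pop_heap(sizes.begin(), it);
  }
  return kept;
}
```

With sizes {100000, 500, 100}, only the first bucket is kept: once the total reaches 100500, the 500-byte bucket contributes less than 100500 / 125 = 804 bytes and the tail is discarded.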

  // At this point, |buckets| looks like this (numbers are bucket sizes):
  //
  // <-- max-heap of small buckets --->
(...skipping 71 matching lines...)
    BreakDown(subbucket);
  }

const std::set<Entry>& HeapDumpWriter::Summarize(
    const hash_map<AllocationContext, size_t>& bytes_by_context) {
  // Start with one bucket that represents the entire heap. Iterate by
  // reference, because the allocation contexts are going to point to allocation
  // contexts stored in |bytes_by_context|.
  Bucket root_bucket;
  for (const auto& context_and_size : bytes_by_context) {
+    DCHECK_GT(context_and_size.second, 0u);
    const AllocationContext* context = &context_and_size.first;
    const size_t size = context_and_size.second;
    root_bucket.bytes_by_context.push_back(std::make_pair(context, size));
    root_bucket.size += size;
  }

  AddEntryForBucket(root_bucket);

  // Recursively break down the heap and fill |entries_| with entries to dump.
  BreakDown(root_bucket);
(...skipping 46 matching lines...)
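The `DCHECK_GT` added in Summarize documents the invariant that no zero-size context reaches the heap dump writer, which presumably is why the threshold test was rewritten: in the old form the divisor is the running total, which is zero when every size seen so far is zero. A hypothetical side-by-side of the two tests (helper names are illustration only; `accounted_for` is assumed to already include the current bucket's size, as in the loop above):

```cpp
#include <cstddef>

// Old test: divides by the running total, so it is undefined behavior when
// accounted_for == 0, and size * 125 can in principle overflow.
bool OldBreaksHere(size_t size, size_t accounted_for) {
  return size * 125 / accounted_for == 0;
}

// New test: safe for any accounted_for, including zero; rounds slightly
// differently near the 1/125 boundary.
bool NewBreaksHere(size_t size, size_t accounted_for) {
  return size < accounted_for / 125;
}
```

Both agree away from the boundary (for example, size 1 against a total of 250), but at size 1 against a total of 126 the old test breaks while the new one keeps the bucket, and only the new test tolerates an all-zero input.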
    const hash_map<AllocationContext, size_t>& bytes_by_size,
    StackFrameDeduplicator* stack_frame_deduplicator,
    TypeNameDeduplicator* type_name_deduplicator) {
  internal::HeapDumpWriter writer(stack_frame_deduplicator,
                                  type_name_deduplicator);
  return Serialize(writer.Summarize(bytes_by_size));
}

}  // namespace trace_event
}  // namespace base
