Chromium Code Reviews

Side by Side Diff: base/trace_event/memory_profiler_allocation_context.h

Issue 1419633004: [Tracing] Introduce HeapDumpWriter helper class (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Remove WriteStackFrames, DISALLOW_COPY_AND_ASSIGN Created 5 years, 2 months ago
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_TRACE_EVENT_MEMORY_PROFILER_ALLOCATION_CONTEXT_H_
#define BASE_TRACE_EVENT_MEMORY_PROFILER_ALLOCATION_CONTEXT_H_

#include <vector>

#include "base/atomicops.h"
(...skipping 51 matching lines...)
// The number of stack frames stored in the backtrace is a trade-off between
// memory used for tracing and accuracy. Measurements done on a prototype
// revealed that:
//
// - In 60 percent of the cases, stack depth <= 7.
// - In 87 percent of the cases, stack depth <= 9.
// - In 95 percent of the cases, stack depth <= 11.
//
// See the design doc (https://goo.gl/4s7v7b) for more details.

-// The allocation context is context metadata that is kept for every allocation
-// when heap profiling is enabled. To simplify memory management for
-// bookkeeping, this struct has a fixed size. All |const char*|s here
-// must have static lifetime.
-struct BASE_EXPORT AllocationContext {
-  struct Backtrace {
-    // Unused backtrace frames are filled with nullptr frames. If the stack is
-    // higher than what can be stored here, the bottom frames are stored. Based
-    // on the data above, a depth of 12 captures the full stack in the vast
-    // majority of the cases.
-    StackFrame frames[12];
-  } backtrace;
-
-  // There is room for two arbitrary context fields, which can be set by the
-  // |TRACE_ALLOCATION_CONTEXT| macro. A nullptr key indicates that the field is
-  // unused.
-  std::pair<const char*, const char*> fields[2];
-};
+struct BASE_EXPORT Backtrace {
Primiano Tucci (use gerrit) 2015/10/26 11:52:06 not really sure this is more readable than before.
Ruud van Asseldonk 2015/10/26 14:51:26 Going to leave it outside for now to avoid rightwa
+  // Unused backtrace frames are filled with nullptr frames. If the stack is
+  // higher than what can be stored here, the bottom frames are stored. Based
+  // on the data above, a depth of 12 captures the full stack in the vast
+  // majority of the cases.
+  StackFrame frames[12];
+};
+
+bool BASE_EXPORT operator==(const Backtrace& lhs, const Backtrace& rhs);
+
+// A hasher for |Backtrace| that allows using it as a key for |base::hash_map|.
+class BASE_EXPORT BacktraceHash {
Primiano Tucci (use gerrit) 2015/10/26 11:52:06 A search in the codebase suggests that the way we
Primiano Tucci (use gerrit) 2015/10/26 11:52:06 +er (Hasher)?
Ruud van Asseldonk 2015/10/26 14:51:26 Done.
Ruud van Asseldonk 2015/10/26 14:51:26 Would be called |hash<Backtrace>| then.
+ public:
+  using result_type = uint32_t;
+  using argument_type = Backtrace;
+  uint32_t operator()(const Backtrace& backtrace) const;
+};
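To make the new hasher concrete, here is a minimal standalone sketch of a |Backtrace| with equality and a hasher usable as a hash-map key. Assumptions: `StackFrame` is aliased to `const char*`, `std::unordered_map` stands in for `base::hash_map`, and the FNV-1a-style pointer combine is hypothetical; the real `BacktraceHash::operator()` lives in the .cc file and may differ.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>

using StackFrame = const char*;  // stand-in for the real StackFrame type

struct Backtrace {
  StackFrame frames[12];
};

bool operator==(const Backtrace& lhs, const Backtrace& rhs) {
  for (size_t i = 0; i < 12; i++)
    if (lhs.frames[i] != rhs.frames[i])
      return false;
  return true;
}

// Hypothetical hash body: combine the 12 frame pointers FNV-1a style.
class BacktraceHash {
 public:
  using result_type = uint32_t;
  using argument_type = Backtrace;
  uint32_t operator()(const Backtrace& backtrace) const {
    uint32_t hash = 2166136261u;
    for (StackFrame frame : backtrace.frames) {
      hash ^= static_cast<uint32_t>(reinterpret_cast<uintptr_t>(frame));
      hash *= 16777619u;
    }
    return hash;
  }
};
```

With the typedefs in place, `std::unordered_map<Backtrace, int, BacktraceHash>` works directly, which is the point of making |Backtrace| a standalone type with its own equality and hasher.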

// A data structure that allows grouping a set of backtraces in a space-
// efficient manner by creating a call tree and writing it as a set of (node,
// parent) pairs. The tree nodes reference both parent and children. The parent
// is referenced by index into |frames_|. The children are referenced via a map
// of |StackFrame|s to index into |frames_|. So there is a trie for bottom-up
// lookup of a backtrace for deduplication, and a tree for compact storage in
// the trace log.
class BASE_EXPORT StackFrameDeduplicator {
(...skipping 11 matching lines...)

    // Indices into |frames_| of frames called from the current frame.
    std::map<StackFrame, int> children;
  };

  using ConstIterator = std::vector<FrameNode>::const_iterator;

  StackFrameDeduplicator();
  ~StackFrameDeduplicator();

-  // Inserts a backtrace and returns the index of its leaf node in the range
-  // defined by |begin| and |end|. I.e. if this returns |n|, the node is
-  // |begin() + n|. Returns -1 if the backtrace is empty.
-  int Insert(const AllocationContext::Backtrace& bt);
+  // Inserts a backtrace and returns the index of its leaf node in |frames_|.
+  // Returns -1 if the backtrace is empty.
+  int Insert(const Backtrace& bt);

  // Iterators over the frame nodes in the call tree.
  ConstIterator begin() const { return frames_.begin(); }
  ConstIterator end() const { return frames_.end(); }

 private:
  std::map<StackFrame, int> roots_;
  std::vector<FrameNode> frames_;

  DISALLOW_COPY_AND_ASSIGN(StackFrameDeduplicator);
};
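The trie/tree scheme described in the class comment can be sketched as follows. The |Insert| body is a guess based on that comment, not the actual Chromium implementation; `std::vector<StackFrame>` stands in for the fixed-size |Backtrace|, and frames are taken root-most first.

```cpp
#include <cstddef>
#include <map>
#include <vector>

using StackFrame = const char*;  // stand-in for the real StackFrame type

class StackFrameDeduplicator {
 public:
  struct FrameNode {
    StackFrame frame;
    int parent_frame_index;  // -1 for root nodes
    std::map<StackFrame, int> children;  // indices into |frames_|
  };

  // Walks the trie from the root, adding nodes only for the suffix that is
  // not shared with a previously inserted backtrace. Returns the leaf node
  // index, or -1 for an empty backtrace.
  int Insert(const std::vector<StackFrame>& backtrace) {
    int parent = -1;
    for (StackFrame frame : backtrace) {
      std::map<StackFrame, int>& children =
          parent == -1 ? roots_ : frames_[parent].children;
      auto it = children.find(frame);
      if (it != children.end()) {
        parent = it->second;  // shared prefix: reuse the existing node
        continue;
      }
      int index = static_cast<int>(frames_.size());
      frames_.push_back(FrameNode{frame, parent, {}});
      // Re-resolve the map: |children| may dangle after |frames_| reallocates.
      (parent == -1 ? roots_ : frames_[parent].children).insert({frame, index});
      parent = index;
    }
    return parent;
  }

  const std::vector<FrameNode>& frames() const { return frames_; }

 private:
  std::map<StackFrame, int> roots_;
  std::vector<FrameNode> frames_;
};
```

Because each node stores its parent index, dumping `(node, parent)` pairs in `frames_` order is enough to reconstruct every deduplicated backtrace from the trace log.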

+// The allocation context is context metadata that is kept for every allocation
+// when heap profiling is enabled. To simplify memory management for
+// bookkeeping, this struct has a fixed size. All |const char*|s here
+// must have static lifetime.
+struct BASE_EXPORT AllocationContext {
+  Backtrace backtrace;
+
+  // There is room for two arbitrary context fields, which can be set by the
+  // |TRACE_ALLOCATION_CONTEXT| macro. A nullptr key indicates that the field is
+  // unused.
+  std::pair<const char*, const char*> fields[2];
+};
+
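The fixed-size design is what makes bookkeeping cheap: a context is copied with a plain memberwise copy and never owns heap memory. A simplified standalone mirror of the struct (with `StackFrame` aliased to `const char*` as an assumption, and the key/value pair below purely illustrative):

```cpp
#include <utility>

using StackFrame = const char*;  // stand-in for the real StackFrame type

struct Backtrace {
  StackFrame frames[12];
};

// Simplified mirror of |AllocationContext|: every member is a fixed-size
// array or a pair of pointers to static-lifetime strings, so copying a
// context involves no allocation or ownership bookkeeping.
struct AllocationContext {
  Backtrace backtrace;
  std::pair<const char*, const char*> fields[2];
};
```

Value-initializing the struct yields nullptr frames and nullptr field keys, which is exactly the "unused" encoding the comment describes.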
// The allocation context tracker keeps track of thread-local context for heap
// profiling. It includes a pseudo stack of trace events, and it might contain
// arbitrary (key, value) context. On every allocation the tracker provides a
// snapshot of its context in the form of an |AllocationContext| that is to be
// stored together with the allocation details.
class BASE_EXPORT AllocationContextTracker {
 public:
  // Globally enables capturing allocation context.
  // TODO(ruuda): Should this be replaced by |EnableCapturing| in the future?
  // Or at least have something that guards against enable -> disable -> enable?
(...skipping 45 matching lines...)
  // A dictionary of arbitrary context.
  SmallMap<std::map<const char*, const char*>> context_;

  DISALLOW_COPY_AND_ASSIGN(AllocationContextTracker);
};
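The pseudo-stack snapshotting the class comment describes can be sketched in isolation. Everything here is illustrative: `PseudoStack`, `PushFrame`, `PopFrame`, and `Snapshot` are hypothetical names standing in for the tracker's real (mostly skipped above) interface, and `StackFrame` is aliased to `const char*`.

```cpp
#include <cstddef>
#include <vector>

using StackFrame = const char*;  // stand-in for the real StackFrame type

// Trace-event scopes push a frame on entry and pop it on exit; an allocation
// copies the current stack into a fixed-size snapshot, mirroring how
// |AllocationContext| keeps a fixed-size |Backtrace|.
class PseudoStack {
 public:
  static const size_t kMaxDepth = 12;

  void PushFrame(StackFrame frame) { stack_.push_back(frame); }
  void PopFrame() { stack_.pop_back(); }

  // Copies at most |kMaxDepth| bottom frames into |frames|, padding unused
  // slots with nullptr. Returns the number of frames captured.
  size_t Snapshot(StackFrame (&frames)[kMaxDepth]) const {
    size_t depth = stack_.size() < kMaxDepth ? stack_.size() : kMaxDepth;
    for (size_t i = 0; i < kMaxDepth; i++)
      frames[i] = i < depth ? stack_[i] : nullptr;
    return depth;
  }

 private:
  std::vector<StackFrame> stack_;
};
```

Keeping the bottom (outermost) frames when the stack overflows the fixed size matches the |Backtrace| comment earlier in the header: the root of the call tree is more useful for grouping than the innermost frames.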

}  // namespace trace_event
}  // namespace base

#endif  // BASE_TRACE_EVENT_MEMORY_PROFILER_ALLOCATION_CONTEXT_H_