Chromium Code Reviews

Side by Side Diff: base/debug/trace_event_internal.h

Issue 12150004: Category group support/Renamings. (Closed) Base URL: http://git.chromium.org/chromium/src.git@master
Patch Set: Fixed comment, added better support for default filtering, fixed merge issues. Created 7 years, 9 months ago
1 // Copyright (c) 2012 The Chromium Authors. All rights reserved. 1 // Copyright (c) 2012 The Chromium Authors. All rights reserved.
2 // Use of this source code is governed by a BSD-style license that can be 2 // Use of this source code is governed by a BSD-style license that can be
3 // found in the LICENSE file. 3 // found in the LICENSE file.
4 4
5 // This header file defines the set of trace_event macros without specifying 5 // This header file defines the set of trace_event macros without specifying
6 // how the events actually get collected and stored. If you need to expose trace 6 // how the events actually get collected and stored. If you need to expose trace
7 // events to some other universe, you can copy-and-paste this file as well as 7 // events to some other universe, you can copy-and-paste this file as well as
8 // trace_event.h, modifying the macros contained there as necessary for the 8 // trace_event.h, modifying the macros contained there as necessary for the
9 // target platform. The end result is that multiple libraries can funnel events 9 // target platform. The end result is that multiple libraries can funnel events
10 // through to a shared trace event collector. 10 // through to a shared trace event collector.
11 11
12 // Trace events are for tracking application performance and resource usage. 12 // Trace events are for tracking application performance and resource usage.
13 // Macros are provided to track: 13 // Macros are provided to track:
14 // Begin and end of function calls 14 // Begin and end of function calls
15 // Counters 15 // Counters
16 // 16 //
17 // Events are issued against categories. Whereas LOG's 17 // Events are issued against categories. Whereas LOG's
18 // categories are statically defined, TRACE categories are created 18 // categories are statically defined, TRACE categories are created
19 // implicitly with a string. For example: 19 // implicitly with a string. For example:
20 // TRACE_EVENT_INSTANT0("MY_SUBSYSTEM", "SomeImportantEvent") 20 // TRACE_EVENT_INSTANT0("MY_SUBSYSTEM", "SomeImportantEvent")
21 // 21 //
22 // It is often the case that one trace may belong in multiple categories at the
23 // same time. The first argument to the trace can be a comma-separated list of
24 // categories, forming a category group, like:
25 //
26 // TRACE_EVENT_INSTANT0("input,views", "OnMouseOver")
27 //
28 // We can enable/disable tracing of OnMouseOver by enabling/disabling either
29 // category.
30 //
22 // Events can be INSTANT, or can be pairs of BEGIN and END in the same scope: 31 // Events can be INSTANT, or can be pairs of BEGIN and END in the same scope:
23 // TRACE_EVENT_BEGIN0("MY_SUBSYSTEM", "SomethingCostly") 32 // TRACE_EVENT_BEGIN0("MY_SUBSYSTEM", "SomethingCostly")
24 // doSomethingCostly() 33 // doSomethingCostly()
25 // TRACE_EVENT_END0("MY_SUBSYSTEM", "SomethingCostly") 34 // TRACE_EVENT_END0("MY_SUBSYSTEM", "SomethingCostly")
26 // Note: our tools can't always determine the correct BEGIN/END pairs unless 35 // Note: our tools can't always determine the correct BEGIN/END pairs unless
27 // these are used in the same scope. Use ASYNC_BEGIN/ASYNC_END macros if you 36 // these are used in the same scope. Use ASYNC_BEGIN/ASYNC_END macros if you
28 // need them to be in separate scopes. 37 // need them to be in separate scopes.
29 // 38 //
30 // A common use case is to trace entire function scopes. This 39 // A common use case is to trace entire function scopes. This
31 // issues a trace BEGIN and END automatically: 40 // issues a trace BEGIN and END automatically:
(...skipping 60 matching lines...)
92 // unique ID, by using the TRACE_COUNTER_ID* variations. 101 // unique ID, by using the TRACE_COUNTER_ID* variations.
93 // 102 //
94 // By default, trace collection is compiled in, but turned off at runtime. 103 // By default, trace collection is compiled in, but turned off at runtime.
95 // Collecting trace data is the responsibility of the embedding 104 // Collecting trace data is the responsibility of the embedding
96 // application. In Chrome's case, navigating to about:tracing will turn on 105 // application. In Chrome's case, navigating to about:tracing will turn on
97 // tracing and display data collected across all active processes. 106 // tracing and display data collected across all active processes.
98 // 107 //
99 // 108 //
100 // Memory scoping note: 109 // Memory scoping note:
101 // Tracing copies the pointers, not the string content, of the strings passed 110 // Tracing copies the pointers, not the string content, of the strings passed
102 // in for category, name, and arg_names. Thus, the following code will 111 // in for category_group, name, and arg_names. Thus, the following code will
103 // cause problems: 112 // cause problems:
104 // char* str = strdup("impprtantName"); 113 // char* str = strdup("importantName");
105 // TRACE_EVENT_INSTANT0("SUBSYSTEM", str); // BAD! 114 // TRACE_EVENT_INSTANT0("SUBSYSTEM", str); // BAD!
106 // free(str); // Trace system now has dangling pointer 115 // free(str); // Trace system now has dangling pointer
107 // 116 //
108 // To avoid this issue with the |name| and |arg_name| parameters, use the 117 // To avoid this issue with the |name| and |arg_name| parameters, use the
109 // TRACE_EVENT_COPY_XXX overloads of the macros at additional runtime overhead. 118 // TRACE_EVENT_COPY_XXX overloads of the macros at additional runtime overhead.
110 // Notes: The category must always be in a long-lived char* (i.e. static const). 119 // Notes: The category must always be in a long-lived char* (i.e. static const).
111 // The |arg_values|, when used, are always deep copied with the _COPY 120 // The |arg_values|, when used, are always deep copied with the _COPY
112 // macros. 121 // macros.
113 // 122 //
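For illustration, a hedged sketch of the fix the note above recommends: the _COPY variant makes the trace system deep copy the name, so the caller may free it (assumes trace_event.h is included; the function name is invented):

    #include <cstdlib>
    #include <cstring>

    void EmitCopiedName() {
      char* str = strdup("importantName");
      TRACE_EVENT_COPY_INSTANT0("SUBSYSTEM", str);  // OK: |str| is deep copied.
      free(str);                                    // Safe: no dangling pointer.
    }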
114 // When are string argument values copied: 123 // When are string argument values copied:
(...skipping 12 matching lines...)
127 // A thread safe singleton and mutex are used for thread safety. Category 136 // A thread safe singleton and mutex are used for thread safety. Category
128 // enabled flags are used to limit the performance impact when the system 137 // enabled flags are used to limit the performance impact when the system
129 // is not enabled. 138 // is not enabled.
130 // 139 //
131 // TRACE_EVENT macros first cache a pointer to a category. The categories are 140 // TRACE_EVENT macros first cache a pointer to a category. The categories are
132 // statically allocated and safe at all times, even after exit. Fetching a 141 // statically allocated and safe at all times, even after exit. Fetching a
133 // category is protected by the TraceLog::lock_. Multiple threads initializing 142 // category is protected by the TraceLog::lock_. Multiple threads initializing
134 // the static variable is safe, as they will be serialized by the lock and 143 // the static variable is safe, as they will be serialized by the lock and
135 // multiple calls will return the same pointer to the category. 144 // multiple calls will return the same pointer to the category.
136 // 145 //
137 // Then the category_enabled flag is checked. This is an unsigned char, and 146 // Then the category_group_enabled flag is checked. This is an unsigned char, and
138 // not intended to be multithread safe. It optimizes access to AddTraceEvent 147 // not intended to be multithread safe. It optimizes access to AddTraceEvent
139 // which is threadsafe internally via TraceLog::lock_. The enabled flag may 148 // which is threadsafe internally via TraceLog::lock_. The enabled flag may
140 // cause some threads to incorrectly call or skip calling AddTraceEvent near 149 // cause some threads to incorrectly call or skip calling AddTraceEvent near
141 // the time of the system being enabled or disabled. This is acceptable as 150 // the time of the system being enabled or disabled. This is acceptable as
142 // we tolerate some data loss while the system is being enabled/disabled and 151 // we tolerate some data loss while the system is being enabled/disabled and
143 // because AddTraceEvent is threadsafe internally and checks the enabled state 152 // because AddTraceEvent is threadsafe internally and checks the enabled state
144 // again under lock. 153 // again under lock.
145 // 154 //
146 // Without the use of these static category pointers and enabled flags all 155 // Without the use of these static category pointers and enabled flags all
147 // trace points would carry a significant performance cost of acquiring a lock 156
(...skipping 13 matching lines...)
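As a minimal, self-contained sketch of the fast-path pattern the implementation notes above describe (all names below are illustrative; the real macros delegate to the TRACE_EVENT_API_* hooks declared in trace_event.h):

    namespace sketch {

    // Toggled by the trace system when tracing is enabled/disabled.
    unsigned char g_category_group_enabled = 0;

    const unsigned char* GetCategoryGroupEnabled(const char* /*category_group*/) {
      // Real code: protected by TraceLog::lock_ and backed by a statically
      // allocated table, so the returned pointer stays valid even after exit.
      return &g_category_group_enabled;
    }

    void AddTraceEvent(const char* /*name*/) {
      // Real code: threadsafe internally and re-checks the enabled state under
      // lock, so a stale fast-path read only risks dropping an event.
    }

    void TraceSomething() {
      // The macro caches the category pointer in a function-local static, so
      // the lock is taken at most once per call site.
      static const unsigned char* enabled = GetCategoryGroupEnabled("my_category");
      if (*enabled)
        AddTraceEvent("TraceSomething");
    }

    }  // namespace sketch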
161 // By default, uint64 ID argument values are not mangled with the Process ID in 170 // By default, uint64 ID argument values are not mangled with the Process ID in
162 // TRACE_EVENT_ASYNC macros. Use this macro to force Process ID mangling. 171 // TRACE_EVENT_ASYNC macros. Use this macro to force Process ID mangling.
163 #define TRACE_ID_MANGLE(id) \ 172 #define TRACE_ID_MANGLE(id) \
164 trace_event_internal::TraceID::ForceMangle(id) 173 trace_event_internal::TraceID::ForceMangle(id)
165 174
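A hedged usage sketch (the function, the |request_id| parameter, and the use of TRACE_EVENT_ASYNC_BEGIN0, defined further down in this header, are illustrative):

    void BeginIpcRequest(unsigned long long request_id) {
      // Force |request_id| to be mangled with a hash of the process ID so the
      // same numeric id used by two processes does not collide.
      TRACE_EVENT_ASYNC_BEGIN0("ipc", "Request", TRACE_ID_MANGLE(request_id));
    }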
166 // Records a pair of begin and end events called "name" for the current 175 // Records a pair of begin and end events called "name" for the current
167 // scope, with 0, 1 or 2 associated arguments. If the category is not 176 // scope, with 0, 1 or 2 associated arguments. If the category is not
168 // enabled, then this does nothing. 177 // enabled, then this does nothing.
169 // - category and name strings must have application lifetime (statics or 178 // - category and name strings must have application lifetime (statics or
170 // literals). They may not include " chars. 179 // literals). They may not include " chars.
171 #define TRACE_EVENT0(category, name) \ 180 #define TRACE_EVENT0(category_group, name) \
172 INTERNAL_TRACE_EVENT_ADD_SCOPED(category, name) 181 INTERNAL_TRACE_EVENT_ADD_SCOPED(category_group, name)
173 #define TRACE_EVENT1(category, name, arg1_name, arg1_val) \ 182 #define TRACE_EVENT1(category_group, name, arg1_name, arg1_val) \
174 INTERNAL_TRACE_EVENT_ADD_SCOPED(category, name, arg1_name, arg1_val) 183 INTERNAL_TRACE_EVENT_ADD_SCOPED(category_group, name, arg1_name, arg1_val)
175 #define TRACE_EVENT2(category, name, arg1_name, arg1_val, arg2_name, arg2_val) \ 184 #define TRACE_EVENT2(category_group, name, arg1_name, arg1_val, arg2_name, \
176 INTERNAL_TRACE_EVENT_ADD_SCOPED(category, name, arg1_name, arg1_val, \ 185 arg2_val) \
186 INTERNAL_TRACE_EVENT_ADD_SCOPED(category_group, name, arg1_name, arg1_val, \
177 arg2_name, arg2_val) 187 arg2_name, arg2_val)
178 188
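For illustration, a minimal scoped-tracing sketch (the function and argument names are invented; assumes trace_event.h is included):

    void LoadResource(int resource_id) {
      // A BEGIN event is emitted here and the matching END is emitted when the
      // enclosing scope exits, including via early return.
      TRACE_EVENT1("loader", "LoadResource", "resource_id", resource_id);
      // ... expensive work ...
    }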
179 // Same as TRACE_EVENT except that they are not included in official builds. 189 // Same as TRACE_EVENT except that they are not included in official builds.
180 #ifdef OFFICIAL_BUILD 190 #ifdef OFFICIAL_BUILD
181 #define UNSHIPPED_TRACE_EVENT0(category, name) (void)0 191 #define UNSHIPPED_TRACE_EVENT0(category_group, name) (void)0
182 #define UNSHIPPED_TRACE_EVENT1(category, name, arg1_name, arg1_val) (void)0 192 #define UNSHIPPED_TRACE_EVENT1(category_group, name, arg1_name, arg1_val) \
183 #define UNSHIPPED_TRACE_EVENT2(category, name, arg1_name, arg1_val, \ 193 (void)0
194 #define UNSHIPPED_TRACE_EVENT2(category_group, name, arg1_name, arg1_val, \
184 arg2_name, arg2_val) (void)0 195 arg2_name, arg2_val) (void)0
185 #define UNSHIPPED_TRACE_EVENT_INSTANT0(category, name) (void)0 196 #define UNSHIPPED_TRACE_EVENT_INSTANT0(category_group, name) (void)0
186 #define UNSHIPPED_TRACE_EVENT_INSTANT1(category, name, arg1_name, arg1_val) \ 197 #define UNSHIPPED_TRACE_EVENT_INSTANT1(category_group, name, arg1_name, \
198 arg1_val) \
187 (void)0 199 (void)0
188 #define UNSHIPPED_TRACE_EVENT_INSTANT2(category, name, arg1_name, arg1_val, \ 200 #define UNSHIPPED_TRACE_EVENT_INSTANT2(category_group, name, arg1_name, \
189 arg2_name, arg2_val) (void)0 201 arg1_val, arg2_name, arg2_val) (void)0
190 #else 202 #else
191 #define UNSHIPPED_TRACE_EVENT0(category, name) \ 203 #define UNSHIPPED_TRACE_EVENT0(category_group, name) \
192 TRACE_EVENT0(category, name) 204 TRACE_EVENT0(category_group, name)
193 #define UNSHIPPED_TRACE_EVENT1(category, name, arg1_name, arg1_val) \ 205 #define UNSHIPPED_TRACE_EVENT1(category_group, name, arg1_name, arg1_val) \
194 TRACE_EVENT1(category, name, arg1_name, arg1_val) 206 TRACE_EVENT1(category_group, name, arg1_name, arg1_val)
195 #define UNSHIPPED_TRACE_EVENT2(category, name, arg1_name, arg1_val, \ 207 #define UNSHIPPED_TRACE_EVENT2(category_group, name, arg1_name, arg1_val, \
196 arg2_name, arg2_val) \ 208 arg2_name, arg2_val) \
197 TRACE_EVENT2(category, name, arg1_name, arg1_val, arg2_name, arg2_val) 209 TRACE_EVENT2(category_group, name, arg1_name, arg1_val, arg2_name, arg2_val)
198 #define UNSHIPPED_TRACE_EVENT_INSTANT0(category, name) \ 210 #define UNSHIPPED_TRACE_EVENT_INSTANT0(category_group, name) \
199 TRACE_EVENT_INSTANT0(category, name) 211 TRACE_EVENT_INSTANT0(category_group, name)
200 #define UNSHIPPED_TRACE_EVENT_INSTANT1(category, name, arg1_name, arg1_val) \ 212 #define UNSHIPPED_TRACE_EVENT_INSTANT1(category_group, name, arg1_name, \
201 TRACE_EVENT_INSTANT1(category, name, arg1_name, arg1_val) 213 arg1_val) \
202 #define UNSHIPPED_TRACE_EVENT_INSTANT2(category, name, arg1_name, arg1_val, \ 214 TRACE_EVENT_INSTANT1(category_group, name, arg1_name, arg1_val)
203 arg2_name, arg2_val) \ 215 #define UNSHIPPED_TRACE_EVENT_INSTANT2(category_group, name, arg1_name, \
204 TRACE_EVENT_INSTANT2(category, name, arg1_name, arg1_val, \ 216 arg1_val, arg2_name, arg2_val) \
217 TRACE_EVENT_INSTANT2(category_group, name, arg1_name, arg1_val, \
205 arg2_name, arg2_val) 218 arg2_name, arg2_val)
206 #endif 219 #endif
207 220
208 // Records a single event called "name" immediately, with 0, 1 or 2 221 // Records a single event called "name" immediately, with 0, 1 or 2
209 // associated arguments. If the category is not enabled, then this 222 // associated arguments. If the category is not enabled, then this
210 // does nothing. 223 // does nothing.
211 // - category and name strings must have application lifetime (statics or 224 // - category and name strings must have application lifetime (statics or
212 // literals). They may not include " chars. 225 // literals). They may not include " chars.
213 #define TRACE_EVENT_INSTANT0(category, name) \ 226 #define TRACE_EVENT_INSTANT0(category_group, name) \
214 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \ 227 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \
215 category, name, TRACE_EVENT_FLAG_NONE) 228 category_group, name, TRACE_EVENT_FLAG_NONE)
216 #define TRACE_EVENT_INSTANT1(category, name, arg1_name, arg1_val) \ 229 #define TRACE_EVENT_INSTANT1(category_group, name, arg1_name, arg1_val) \
217 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \ 230 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \
218 category, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val) 231 category_group, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val)
219 #define TRACE_EVENT_INSTANT2(category, name, arg1_name, arg1_val, \ 232 #define TRACE_EVENT_INSTANT2(category_group, name, arg1_name, arg1_val, \
220 arg2_name, arg2_val) \ 233 arg2_name, arg2_val) \
221 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \ 234 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \
222 category, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val, \ 235 category_group, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val, \
223 arg2_name, arg2_val) 236 arg2_name, arg2_val)
224 #define TRACE_EVENT_COPY_INSTANT0(category, name) \ 237 #define TRACE_EVENT_COPY_INSTANT0(category_group, name) \
225 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \ 238 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \
226 category, name, TRACE_EVENT_FLAG_COPY) 239 category_group, name, TRACE_EVENT_FLAG_COPY)
227 #define TRACE_EVENT_COPY_INSTANT1(category, name, arg1_name, arg1_val) \ 240 #define TRACE_EVENT_COPY_INSTANT1(category_group, name, arg1_name, arg1_val) \
228 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \ 241 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \
229 category, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val) 242 category_group, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val)
230 #define TRACE_EVENT_COPY_INSTANT2(category, name, arg1_name, arg1_val, \ 243 #define TRACE_EVENT_COPY_INSTANT2(category_group, name, arg1_name, arg1_val, \
231 arg2_name, arg2_val) \ 244 arg2_name, arg2_val) \
232 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \ 245 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \
233 category, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val, \ 246 category_group, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val, \
234 arg2_name, arg2_val) 247 arg2_name, arg2_val)
235 248
236 // Sets the current sample state to the given category and name (both must be 249 // Sets the current sample state to the given category and name (both must be
237 // constant strings). These states are intended for a sampling profiler. 250 // constant strings). These states are intended for a sampling profiler.
238 // Implementation note: we store category and name together because we don't 251 // Implementation note: we store category and name together because we don't
239 // want the inconsistency/expense of storing two pointers. 252 // want the inconsistency/expense of storing two pointers.
240 // |thread_bucket| is [0..2] and is used to statically isolate samples in one 253 // |thread_bucket| is [0..2] and is used to statically isolate samples in one
241 // thread from others. 254 // thread from others.
242 #define TRACE_EVENT_SAMPLE_STATE(thread_bucket, category, name) \ 255 #define TRACE_EVENT_SAMPLE_STATE(thread_bucket, category, name) \
243 TRACE_EVENT_API_ATOMIC_STORE( \ 256 TRACE_EVENT_API_ATOMIC_STORE( \
244 TRACE_EVENT_API_THREAD_BUCKET(thread_bucket), \ 257 TRACE_EVENT_API_THREAD_BUCKET(thread_bucket), \
245 reinterpret_cast<TRACE_EVENT_API_ATOMIC_WORD>(category "\0" name)); 258 reinterpret_cast<TRACE_EVENT_API_ATOMIC_WORD>(category "\0" name));
246 259
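A hedged usage sketch: tell the sampling profiler what this thread is currently doing, using thread bucket 0. The category and name must be string literals, since the macro concatenates them at compile time (the function name is invented):

    void PaintLayer() {
      TRACE_EVENT_SAMPLE_STATE(0, "renderer", "PaintLayer");
      // ... painting work sampled under this state ...
    }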
247 // Records a single BEGIN event called "name" immediately, with 0, 1 or 2 260 // Records a single BEGIN event called "name" immediately, with 0, 1 or 2
248 // associated arguments. If the category is not enabled, then this 261 // associated arguments. If the category is not enabled, then this
249 // does nothing. 262 // does nothing.
250 // - category and name strings must have application lifetime (statics or 263 // - category and name strings must have application lifetime (statics or
251 // literals). They may not include " chars. 264 // literals). They may not include " chars.
252 #define TRACE_EVENT_BEGIN0(category, name) \ 265 #define TRACE_EVENT_BEGIN0(category_group, name) \
253 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \ 266 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \
254 category, name, TRACE_EVENT_FLAG_NONE) 267 category_group, name, TRACE_EVENT_FLAG_NONE)
255 #define TRACE_EVENT_BEGIN1(category, name, arg1_name, arg1_val) \ 268 #define TRACE_EVENT_BEGIN1(category_group, name, arg1_name, arg1_val) \
256 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \ 269 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \
257 category, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val) 270 category_group, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val)
258 #define TRACE_EVENT_BEGIN2(category, name, arg1_name, arg1_val, \ 271 #define TRACE_EVENT_BEGIN2(category_group, name, arg1_name, arg1_val, \
259 arg2_name, arg2_val) \ 272 arg2_name, arg2_val) \
260 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \ 273 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \
261 category, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val, \ 274 category_group, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val, \
262 arg2_name, arg2_val) 275 arg2_name, arg2_val)
263 #define TRACE_EVENT_COPY_BEGIN0(category, name) \ 276 #define TRACE_EVENT_COPY_BEGIN0(category_group, name) \
264 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \ 277 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \
265 category, name, TRACE_EVENT_FLAG_COPY) 278 category_group, name, TRACE_EVENT_FLAG_COPY)
266 #define TRACE_EVENT_COPY_BEGIN1(category, name, arg1_name, arg1_val) \ 279 #define TRACE_EVENT_COPY_BEGIN1(category_group, name, arg1_name, arg1_val) \
267 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \ 280 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \
268 category, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val) 281 category_group, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val)
269 #define TRACE_EVENT_COPY_BEGIN2(category, name, arg1_name, arg1_val, \ 282 #define TRACE_EVENT_COPY_BEGIN2(category_group, name, arg1_name, arg1_val, \
270 arg2_name, arg2_val) \ 283 arg2_name, arg2_val) \
271 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \ 284 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_BEGIN, \
272 category, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val, \ 285 category_group, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val, \
273 arg2_name, arg2_val) 286 arg2_name, arg2_val)
274 287
275 // Similar to TRACE_EVENT_BEGINx but with a custom |at| timestamp provided. 288 // Similar to TRACE_EVENT_BEGINx but with a custom |at| timestamp provided.
276 // - |id| is used to match the _BEGIN event with the _END event. 289 // - |id| is used to match the _BEGIN event with the _END event.
277 // Events are considered to match if their category, name and id values all 290 // Events are considered to match if their category_group, name and id values
278 // match. |id| must either be a pointer or an integer value up to 64 bits. If 291 // all match. |id| must either be a pointer or an integer value up to 64 bits.
279 // it's a pointer, the bits will be xored with a hash of the process ID so 292 // If it's a pointer, the bits will be xored with a hash of the process ID so
280 // that the same pointer on two different processes will not collide. 293 // that the same pointer on two different processes will not collide.
281 #define TRACE_EVENT_BEGIN_WITH_ID_TID_AND_TIMESTAMP0(category, \ 294 #define TRACE_EVENT_BEGIN_WITH_ID_TID_AND_TIMESTAMP0(category_group, \
282 name, id, thread_id, timestamp) \ 295 name, id, thread_id, timestamp) \
283 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \ 296 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \
284 TRACE_EVENT_PHASE_ASYNC_BEGIN, category, name, id, thread_id, \ 297 TRACE_EVENT_PHASE_ASYNC_BEGIN, category_group, name, id, thread_id, \
285 timestamp, TRACE_EVENT_FLAG_NONE) 298 timestamp, TRACE_EVENT_FLAG_NONE)
286 #define TRACE_EVENT_COPY_BEGIN_WITH_ID_TID_AND_TIMESTAMP0( \ 299 #define TRACE_EVENT_COPY_BEGIN_WITH_ID_TID_AND_TIMESTAMP0( \
287 category, name, id, thread_id, timestamp) \ 300 category_group, name, id, thread_id, timestamp) \
288 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \ 301 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \
289 TRACE_EVENT_PHASE_ASYNC_BEGIN, category, name, id, thread_id, \ 302 TRACE_EVENT_PHASE_ASYNC_BEGIN, category_group, name, id, thread_id, \
290 timestamp, TRACE_EVENT_FLAG_COPY) 303 timestamp, TRACE_EVENT_FLAG_COPY)
291 304
292 // Records a single END event for "name" immediately. If the category 305 // Records a single END event for "name" immediately. If the category
293 // is not enabled, then this does nothing. 306 // is not enabled, then this does nothing.
294 // - category and name strings must have application lifetime (statics or 307 // - category and name strings must have application lifetime (statics or
295 // literals). They may not include " chars. 308 // literals). They may not include " chars.
296 #define TRACE_EVENT_END0(category, name) \ 309 #define TRACE_EVENT_END0(category_group, name) \
297 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \ 310 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \
298 category, name, TRACE_EVENT_FLAG_NONE) 311 category_group, name, TRACE_EVENT_FLAG_NONE)
299 #define TRACE_EVENT_END1(category, name, arg1_name, arg1_val) \ 312 #define TRACE_EVENT_END1(category_group, name, arg1_name, arg1_val) \
300 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \ 313 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \
301 category, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val) 314 category_group, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val)
302 #define TRACE_EVENT_END2(category, name, arg1_name, arg1_val, \ 315 #define TRACE_EVENT_END2(category_group, name, arg1_name, arg1_val, \
303 arg2_name, arg2_val) \ 316 arg2_name, arg2_val) \
304 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \ 317 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \
305 category, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val, \ 318 category_group, name, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val, \
306 arg2_name, arg2_val) 319 arg2_name, arg2_val)
307 #define TRACE_EVENT_COPY_END0(category, name) \ 320 #define TRACE_EVENT_COPY_END0(category_group, name) \
308 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \ 321 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \
309 category, name, TRACE_EVENT_FLAG_COPY) 322 category_group, name, TRACE_EVENT_FLAG_COPY)
310 #define TRACE_EVENT_COPY_END1(category, name, arg1_name, arg1_val) \ 323 #define TRACE_EVENT_COPY_END1(category_group, name, arg1_name, arg1_val) \
311 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \ 324 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \
312 category, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val) 325 category_group, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val)
313 #define TRACE_EVENT_COPY_END2(category, name, arg1_name, arg1_val, \ 326 #define TRACE_EVENT_COPY_END2(category_group, name, arg1_name, arg1_val, \
314 arg2_name, arg2_val) \ 327 arg2_name, arg2_val) \
315 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \ 328 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_END, \
316 category, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val, \ 329 category_group, name, TRACE_EVENT_FLAG_COPY, arg1_name, arg1_val, \
317 arg2_name, arg2_val) 330 arg2_name, arg2_val)
318 331
319 // Similar to TRACE_EVENT_ENDx but with a custom |at| timestamp provided. 332 // Similar to TRACE_EVENT_ENDx but with a custom |at| timestamp provided.
320 // - |id| is used to match the _BEGIN event with the _END event. 333 // - |id| is used to match the _BEGIN event with the _END event.
321 // Events are considered to match if their category, name and id values all 334 // Events are considered to match if their category_group, name and id values
322 // match. |id| must either be a pointer or an integer value up to 64 bits. If 335 // all match. |id| must either be a pointer or an integer value up to 64 bits.
323 // it's a pointer, the bits will be xored with a hash of the process ID so 336 // If it's a pointer, the bits will be xored with a hash of the process ID so
324 // that the same pointer on two different processes will not collide. 337 // that the same pointer on two different processes will not collide.
325 #define TRACE_EVENT_END_WITH_ID_TID_AND_TIMESTAMP0(category, \ 338 #define TRACE_EVENT_END_WITH_ID_TID_AND_TIMESTAMP0(category_group, \
326 name, id, thread_id, timestamp) \ 339 name, id, thread_id, timestamp) \
327 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \ 340 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \
328 TRACE_EVENT_PHASE_ASYNC_END, category, name, id, thread_id, timestamp, \ 341 TRACE_EVENT_PHASE_ASYNC_END, category_group, name, id, thread_id, \
329 TRACE_EVENT_FLAG_NONE) 342 timestamp, TRACE_EVENT_FLAG_NONE)
330 #define TRACE_EVENT_COPY_END_WITH_ID_TID_AND_TIMESTAMP0( \ 343 #define TRACE_EVENT_COPY_END_WITH_ID_TID_AND_TIMESTAMP0( \
331 category, name, id, thread_id, timestamp) \ 344 category_group, name, id, thread_id, timestamp) \
332 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \ 345 INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP( \
333 TRACE_EVENT_PHASE_ASYNC_END, category, name, id, thread_id, timestamp, \ 346 TRACE_EVENT_PHASE_ASYNC_END, category_group, name, id, thread_id, \
334 TRACE_EVENT_FLAG_COPY) 347 timestamp, TRACE_EVENT_FLAG_COPY)
335 348
336 // Records the value of a counter called "name" immediately. Value 349 // Records the value of a counter called "name" immediately. Value
337 // must be representable as a 32 bit integer. 350 // must be representable as a 32 bit integer.
338 // - category and name strings must have application lifetime (statics or 351 // - category and name strings must have application lifetime (statics or
339 // literals). They may not include " chars. 352 // literals). They may not include " chars.
340 #define TRACE_COUNTER1(category, name, value) \ 353 #define TRACE_COUNTER1(category_group, name, value) \
341 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \ 354 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \
342 category, name, TRACE_EVENT_FLAG_NONE, \ 355 category_group, name, TRACE_EVENT_FLAG_NONE, \
343 "value", static_cast<int>(value)) 356 "value", static_cast<int>(value))
344 #define TRACE_COPY_COUNTER1(category, name, value) \ 357 #define TRACE_COPY_COUNTER1(category_group, name, value) \
345 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \ 358 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \
346 category, name, TRACE_EVENT_FLAG_COPY, \ 359 category_group, name, TRACE_EVENT_FLAG_COPY, \
347 "value", static_cast<int>(value)) 360 "value", static_cast<int>(value))
348 361
349 // Records the values of a multi-parted counter called "name" immediately. 362 // Records the values of a multi-parted counter called "name" immediately.
350 // The UI will treat value1 and value2 as parts of a whole, displaying their 363 // The UI will treat value1 and value2 as parts of a whole, displaying their
351 // values as a stacked-bar chart. 364 // values as a stacked-bar chart.
352 // - category and name strings must have application lifetime (statics or 365 // - category and name strings must have application lifetime (statics or
353 // literals). They may not include " chars. 366 // literals). They may not include " chars.
354 #define TRACE_COUNTER2(category, name, value1_name, value1_val, \ 367 #define TRACE_COUNTER2(category_group, name, value1_name, value1_val, \
355 value2_name, value2_val) \ 368 value2_name, value2_val) \
356 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \ 369 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \
357 category, name, TRACE_EVENT_FLAG_NONE, \ 370 category_group, name, TRACE_EVENT_FLAG_NONE, \
358 value1_name, static_cast<int>(value1_val), \ 371 value1_name, static_cast<int>(value1_val), \
359 value2_name, static_cast<int>(value2_val)) 372 value2_name, static_cast<int>(value2_val))
360 #define TRACE_COPY_COUNTER2(category, name, value1_name, value1_val, \ 373 #define TRACE_COPY_COUNTER2(category_group, name, value1_name, value1_val, \
361 value2_name, value2_val) \ 374 value2_name, value2_val) \
362 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \ 375 INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_COUNTER, \
363 category, name, TRACE_EVENT_FLAG_COPY, \ 376 category_group, name, TRACE_EVENT_FLAG_COPY, \
364 value1_name, static_cast<int>(value1_val), \ 377 value1_name, static_cast<int>(value1_val), \
365 value2_name, static_cast<int>(value2_val)) 378 value2_name, static_cast<int>(value2_val))
366 379
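For illustration, hedged counter sketches (the categories, counter names, and variables are invented):

    void ReportQueueAndCache(int pending_request_count,
                             int used_bytes, int free_bytes) {
      // Single-valued counter.
      TRACE_COUNTER1("net", "PendingRequests", pending_request_count);
      // Two-part counter; the UI stacks "used" and "free" into one bar.
      TRACE_COUNTER2("memory", "CacheBytes",
                     "used", used_bytes, "free", free_bytes);
    }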
367 // Records the value of a counter called "name" immediately. Value 380 // Records the value of a counter called "name" immediately. Value
368 // must be representable as a 32 bit integer. 381 // must be representable as a 32 bit integer.
369 // - category and name strings must have application lifetime (statics or 382 // - category and name strings must have application lifetime (statics or
370 // literals). They may not include " chars. 383 // literals). They may not include " chars.
371 // - |id| is used to disambiguate counters with the same name. It must either 384 // - |id| is used to disambiguate counters with the same name. It must either
372 // be a pointer or an integer value up to 64 bits. If it's a pointer, the bits 385 // be a pointer or an integer value up to 64 bits. If it's a pointer, the bits
373 // will be xored with a hash of the process ID so that the same pointer on 386 // will be xored with a hash of the process ID so that the same pointer on
374 // two different processes will not collide. 387 // two different processes will not collide.
375 #define TRACE_COUNTER_ID1(category, name, id, value) \ 388 #define TRACE_COUNTER_ID1(category_group, name, id, value) \
376 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \ 389 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \
377 category, name, id, TRACE_EVENT_FLAG_NONE, \ 390 category_group, name, id, TRACE_EVENT_FLAG_NONE, \
378 "value", static_cast<int>(value)) 391 "value", static_cast<int>(value))
379 #define TRACE_COPY_COUNTER_ID1(category, name, id, value) \ 392 #define TRACE_COPY_COUNTER_ID1(category_group, name, id, value) \
380 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \ 393 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \
381 category, name, id, TRACE_EVENT_FLAG_COPY, \ 394 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
382 "value", static_cast<int>(value)) 395 "value", static_cast<int>(value))
383 396
384 // Records the values of a multi-parted counter called "name" immediately. 397 // Records the values of a multi-parted counter called "name" immediately.
385 // The UI will treat value1 and value2 as parts of a whole, displaying their 398 // The UI will treat value1 and value2 as parts of a whole, displaying their
386 // values as a stacked-bar chart. 399 // values as a stacked-bar chart.
387 // - category and name strings must have application lifetime (statics or 400 // - category and name strings must have application lifetime (statics or
388 // literals). They may not include " chars. 401 // literals). They may not include " chars.
389 // - |id| is used to disambiguate counters with the same name. It must either 402 // - |id| is used to disambiguate counters with the same name. It must either
390 // be a pointer or an integer value up to 64 bits. If it's a pointer, the bits 403 // be a pointer or an integer value up to 64 bits. If it's a pointer, the bits
391 // will be xored with a hash of the process ID so that the same pointer on 404 // will be xored with a hash of the process ID so that the same pointer on
392 // two different processes will not collide. 405 // two different processes will not collide.
393 #define TRACE_COUNTER_ID2(category, name, id, value1_name, value1_val, \ 406 #define TRACE_COUNTER_ID2(category_group, name, id, value1_name, value1_val, \
394 value2_name, value2_val) \ 407 value2_name, value2_val) \
395 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \ 408 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \
396 category, name, id, TRACE_EVENT_FLAG_NONE, \ 409 category_group, name, id, TRACE_EVENT_FLAG_NONE, \
397 value1_name, static_cast<int>(value1_val), \ 410 value1_name, static_cast<int>(value1_val), \
398 value2_name, static_cast<int>(value2_val)) 411 value2_name, static_cast<int>(value2_val))
399 #define TRACE_COPY_COUNTER_ID2(category, name, id, value1_name, value1_val, \ 412 #define TRACE_COPY_COUNTER_ID2(category_group, name, id, value1_name, \
400 value2_name, value2_val) \ 413 value1_val, value2_name, value2_val) \
401 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \ 414 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_COUNTER, \
402 category, name, id, TRACE_EVENT_FLAG_COPY, \ 415 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
403 value1_name, static_cast<int>(value1_val), \ 416 value1_name, static_cast<int>(value1_val), \
404 value2_name, static_cast<int>(value2_val)) 417 value2_name, static_cast<int>(value2_val))
405 418
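A hedged sketch of the ID variant, keyed by |this| so that each instance gets its own counter series (the class and member names are invented):

    class MediaSource {
     public:
      void OnBufferedBytesChanged(int buffered_bytes) {
        // |this| disambiguates the series; as noted above, the pointer is
        // xored with a hash of the process ID before being recorded.
        TRACE_COUNTER_ID1("media", "BufferedBytes", this, buffered_bytes);
      }
    };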
406 419
407 // Records a single ASYNC_BEGIN event called "name" immediately, with 0, 1 or 2 420 // Records a single ASYNC_BEGIN event called "name" immediately, with 0, 1 or 2
408 // associated arguments. If the category is not enabled, then this 421 // associated arguments. If the category is not enabled, then this
409 // does nothing. 422 // does nothing.
410 // - category and name strings must have application lifetime (statics or 423 // - category and name strings must have application lifetime (statics or
411 // literals). They may not include " chars. 424 // literals). They may not include " chars.
412 // - |id| is used to match the ASYNC_BEGIN event with the ASYNC_END event. ASYNC 425 // - |id| is used to match the ASYNC_BEGIN event with the ASYNC_END event. ASYNC
413 // events are considered to match if their category, name and id values all 426 // events are considered to match if their category_group, name and id values
414 // match. |id| must either be a pointer or an integer value up to 64 bits. If 427 // all match. |id| must either be a pointer or an integer value up to 64 bits.
415 // it's a pointer, the bits will be xored with a hash of the process ID so 428 // If it's a pointer, the bits will be xored with a hash of the process ID so
416 // that the same pointer on two different processes will not collide. 429 // that the same pointer on two different processes will not collide.
417 // An asynchronous operation can consist of multiple phases. The first phase is 430 // An asynchronous operation can consist of multiple phases. The first phase is
418 // defined by the ASYNC_BEGIN calls. Additional phases can be defined using the 431 // defined by the ASYNC_BEGIN calls. Additional phases can be defined using the
419 // ASYNC_STEP macros. When the operation completes, call ASYNC_END. 432 // ASYNC_STEP macros. When the operation completes, call ASYNC_END.
420 // An ASYNC trace typically occurs on a single thread (if not, it will only be 433 // An ASYNC trace typically occurs on a single thread (if not, it will only be
421 // drawn on the thread defined in the ASYNC_BEGIN event), but all events in that 434 // drawn on the thread defined in the ASYNC_BEGIN event), but all events in that
422 // operation must use the same |name| and |id|. Each event can have its own 435 // operation must use the same |name| and |id|. Each event can have its own
423 // args. 436 // args.
424 #define TRACE_EVENT_ASYNC_BEGIN0(category, name, id) \ 437 #define TRACE_EVENT_ASYNC_BEGIN0(category_group, name, id) \
425 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \ 438 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \
426 category, name, id, TRACE_EVENT_FLAG_NONE) 439 category_group, name, id, TRACE_EVENT_FLAG_NONE)
427 #define TRACE_EVENT_ASYNC_BEGIN1(category, name, id, arg1_name, arg1_val) \ 440 #define TRACE_EVENT_ASYNC_BEGIN1(category_group, name, id, arg1_name, \
441 arg1_val) \
428 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \ 442 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \
429 category, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val) 443 category_group, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val)
430 #define TRACE_EVENT_ASYNC_BEGIN2(category, name, id, arg1_name, arg1_val, \ 444 #define TRACE_EVENT_ASYNC_BEGIN2(category_group, name, id, arg1_name, \
431 arg2_name, arg2_val) \ 445 arg1_val, arg2_name, arg2_val) \
432 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \ 446 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \
433 category, name, id, TRACE_EVENT_FLAG_NONE, \ 447 category_group, name, id, TRACE_EVENT_FLAG_NONE, \
434 arg1_name, arg1_val, arg2_name, arg2_val) 448 arg1_name, arg1_val, arg2_name, arg2_val)
435 #define TRACE_EVENT_COPY_ASYNC_BEGIN0(category, name, id) \ 449 #define TRACE_EVENT_COPY_ASYNC_BEGIN0(category_group, name, id) \
436 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \ 450 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \
437 category, name, id, TRACE_EVENT_FLAG_COPY) 451 category_group, name, id, TRACE_EVENT_FLAG_COPY)
438 #define TRACE_EVENT_COPY_ASYNC_BEGIN1(category, name, id, arg1_name, arg1_val) \ 452 #define TRACE_EVENT_COPY_ASYNC_BEGIN1(category_group, name, id, arg1_name, \
453 arg1_val) \
439 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \ 454 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \
440 category, name, id, TRACE_EVENT_FLAG_COPY, \ 455 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
441 arg1_name, arg1_val) 456 arg1_name, arg1_val)
442 #define TRACE_EVENT_COPY_ASYNC_BEGIN2(category, name, id, arg1_name, arg1_val, \ 457 #define TRACE_EVENT_COPY_ASYNC_BEGIN2(category_group, name, id, arg1_name, \
443 arg2_name, arg2_val) \ 458 arg1_val, arg2_name, arg2_val) \
444 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \ 459 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_BEGIN, \
445 category, name, id, TRACE_EVENT_FLAG_COPY, \ 460 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
446 arg1_name, arg1_val, arg2_name, arg2_val) 461 arg1_name, arg1_val, arg2_name, arg2_val)
447 462
448 // Records a single ASYNC_STEP event for |step| immediately. If the category 463 // Records a single ASYNC_STEP event for |step| immediately. If the category
449 // is not enabled, then this does nothing. The |name| and |id| must match the 464 // is not enabled, then this does nothing. The |name| and |id| must match the
450 // ASYNC_BEGIN event above. The |step| param identifies this step within the 465 // ASYNC_BEGIN event above. The |step| param identifies this step within the
451 // async event. This should be called at the beginning of the next phase of an 466 // async event. This should be called at the beginning of the next phase of an
452 // asynchronous operation. 467 // asynchronous operation.
453 #define TRACE_EVENT_ASYNC_STEP0(category, name, id, step) \ 468 #define TRACE_EVENT_ASYNC_STEP0(category_group, name, id, step) \
454 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \ 469 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \
455 category, name, id, TRACE_EVENT_FLAG_NONE, "step", step) 470 category_group, name, id, TRACE_EVENT_FLAG_NONE, "step", step)
456 #define TRACE_EVENT_ASYNC_STEP1(category, name, id, step, \ 471 #define TRACE_EVENT_ASYNC_STEP1(category_group, name, id, step, \
457 arg1_name, arg1_val) \ 472 arg1_name, arg1_val) \
458 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \ 473 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \
459 category, name, id, TRACE_EVENT_FLAG_NONE, "step", step, \ 474 category_group, name, id, TRACE_EVENT_FLAG_NONE, "step", step, \
460 arg1_name, arg1_val) 475 arg1_name, arg1_val)
461 #define TRACE_EVENT_COPY_ASYNC_STEP0(category, name, id, step) \ 476 #define TRACE_EVENT_COPY_ASYNC_STEP0(category_group, name, id, step) \
462 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \ 477 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \
463 category, name, id, TRACE_EVENT_FLAG_COPY, "step", step) 478 category_group, name, id, TRACE_EVENT_FLAG_COPY, "step", step)
464 #define TRACE_EVENT_COPY_ASYNC_STEP1(category, name, id, step, \ 479 #define TRACE_EVENT_COPY_ASYNC_STEP1(category_group, name, id, step, \
465 arg1_name, arg1_val) \ 480 arg1_name, arg1_val) \
466 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \ 481 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_STEP, \
467 category, name, id, TRACE_EVENT_FLAG_COPY, "step", step, \ 482 category_group, name, id, TRACE_EVENT_FLAG_COPY, "step", step, \
468 arg1_name, arg1_val) 483 arg1_name, arg1_val)
469 484
470 // Records a single ASYNC_END event for "name" immediately. If the category 485 // Records a single ASYNC_END event for "name" immediately. If the category
471 // is not enabled, then this does nothing. 486 // is not enabled, then this does nothing.
472 #define TRACE_EVENT_ASYNC_END0(category, name, id) \ 487 #define TRACE_EVENT_ASYNC_END0(category_group, name, id) \
473 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \ 488 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \
474 category, name, id, TRACE_EVENT_FLAG_NONE) 489 category_group, name, id, TRACE_EVENT_FLAG_NONE)
475 #define TRACE_EVENT_ASYNC_END1(category, name, id, arg1_name, arg1_val) \ 490 #define TRACE_EVENT_ASYNC_END1(category_group, name, id, arg1_name, arg1_val) \
476 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \ 491 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \
477 category, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val) 492 category_group, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val)
478 #define TRACE_EVENT_ASYNC_END2(category, name, id, arg1_name, arg1_val, \ 493 #define TRACE_EVENT_ASYNC_END2(category_group, name, id, arg1_name, arg1_val, \
479 arg2_name, arg2_val) \ 494 arg2_name, arg2_val) \
480 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \ 495 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \
481 category, name, id, TRACE_EVENT_FLAG_NONE, \ 496 category_group, name, id, TRACE_EVENT_FLAG_NONE, \
482 arg1_name, arg1_val, arg2_name, arg2_val) 497 arg1_name, arg1_val, arg2_name, arg2_val)
483 #define TRACE_EVENT_COPY_ASYNC_END0(category, name, id) \ 498 #define TRACE_EVENT_COPY_ASYNC_END0(category_group, name, id) \
484 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \ 499 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \
485 category, name, id, TRACE_EVENT_FLAG_COPY) 500 category_group, name, id, TRACE_EVENT_FLAG_COPY)
486 #define TRACE_EVENT_COPY_ASYNC_END1(category, name, id, arg1_name, arg1_val) \ 501 #define TRACE_EVENT_COPY_ASYNC_END1(category_group, name, id, arg1_name, \
502 arg1_val) \
487 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \ 503 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \
488 category, name, id, TRACE_EVENT_FLAG_COPY, \ 504 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
489 arg1_name, arg1_val) 505 arg1_name, arg1_val)
490 #define TRACE_EVENT_COPY_ASYNC_END2(category, name, id, arg1_name, arg1_val, \ 506 #define TRACE_EVENT_COPY_ASYNC_END2(category_group, name, id, arg1_name, \
491 arg2_name, arg2_val) \ 507 arg1_val, arg2_name, arg2_val) \
492 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \ 508 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_ASYNC_END, \
493 category, name, id, TRACE_EVENT_FLAG_COPY, \ 509 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
494 arg1_name, arg1_val, arg2_name, arg2_val) 510 arg1_name, arg1_val, arg2_name, arg2_val)
495 511
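For illustration, a hedged sketch of an async operation whose BEGIN, STEP and END happen in different scopes, matched by the |this| pointer (the Download class and its methods are invented):

    class Download {
     public:
      void Start() {
        // |this| ties the BEGIN, STEP and END of this download together.
        TRACE_EVENT_ASYNC_BEGIN0("net", "Download", this);
      }
      void OnDataReceived() {
        TRACE_EVENT_ASYNC_STEP0("net", "Download", this, "ReceivingData");
      }
      void OnComplete() {
        TRACE_EVENT_ASYNC_END0("net", "Download", this);
      }
    };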
496 512
497 // Records a single FLOW_BEGIN event called "name" immediately, with 0, 1 or 2 513 // Records a single FLOW_BEGIN event called "name" immediately, with 0, 1 or 2
498 // associated arguments. If the category is not enabled, then this 514 // associated arguments. If the category is not enabled, then this
499 // does nothing. 515 // does nothing.
500 // - category and name strings must have application lifetime (statics or 516 // - category and name strings must have application lifetime (statics or
501 // literals). They may not include " chars. 517 // literals). They may not include " chars.
502 // - |id| is used to match the FLOW_BEGIN event with the FLOW_END event. FLOW 518 // - |id| is used to match the FLOW_BEGIN event with the FLOW_END event. FLOW
503 // events are considered to match if their category, name and id values all 519 // events are considered to match if their category_group, name and id values
504 // match. |id| must either be a pointer or an integer value up to 64 bits. If 520 // all match. |id| must either be a pointer or an integer value up to 64 bits.
505 // it's a pointer, the bits will be xored with a hash of the process ID so 521 // If it's a pointer, the bits will be xored with a hash of the process ID so
506 // that the same pointer on two different processes will not collide. 522 // that the same pointer on two different processes will not collide.
507 // FLOW events are different from ASYNC events in how they are drawn by the 523 // FLOW events are different from ASYNC events in how they are drawn by the
508 // tracing UI. A FLOW defines asynchronous data flow, such as posting a task 524 // tracing UI. A FLOW defines asynchronous data flow, such as posting a task
509 // (FLOW_BEGIN) and later executing that task (FLOW_END). Expect FLOWs to be 525 // (FLOW_BEGIN) and later executing that task (FLOW_END). Expect FLOWs to be
510 // drawn as lines or arrows from FLOW_BEGIN scopes to FLOW_END scopes. Similar 526 // drawn as lines or arrows from FLOW_BEGIN scopes to FLOW_END scopes. Similar
511 // to ASYNC, a FLOW can consist of multiple phases. The first phase is defined 527 // to ASYNC, a FLOW can consist of multiple phases. The first phase is defined
512 // by the FLOW_BEGIN calls. Additional phases can be defined using the FLOW_STEP 528 // by the FLOW_BEGIN calls. Additional phases can be defined using the FLOW_STEP
513 // macros. When the operation completes, call FLOW_END. An async operation can 529 // macros. When the operation completes, call FLOW_END. An async operation can
514 // span threads and processes, but all events in that operation must use the 530 // span threads and processes, but all events in that operation must use the
515 // same |name| and |id|. Each event can have its own args. 531 // same |name| and |id|. Each event can have its own args.
516 #define TRACE_EVENT_FLOW_BEGIN0(category, name, id) \ 532 #define TRACE_EVENT_FLOW_BEGIN0(category_group, name, id) \
517 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \ 533 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \
518 category, name, id, TRACE_EVENT_FLAG_NONE) 534 category_group, name, id, TRACE_EVENT_FLAG_NONE)
519 #define TRACE_EVENT_FLOW_BEGIN1(category, name, id, arg1_name, arg1_val) \ 535 #define TRACE_EVENT_FLOW_BEGIN1(category_group, name, id, arg1_name, arg1_val) \
520 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \ 536 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \
521 category, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val) 537 category_group, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val)
522 #define TRACE_EVENT_FLOW_BEGIN2(category, name, id, arg1_name, arg1_val, \ 538 #define TRACE_EVENT_FLOW_BEGIN2(category_group, name, id, arg1_name, arg1_val, \
523 arg2_name, arg2_val) \ 539 arg2_name, arg2_val) \
524 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \ 540 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \
525 category, name, id, TRACE_EVENT_FLAG_NONE, \ 541 category_group, name, id, TRACE_EVENT_FLAG_NONE, \
526 arg1_name, arg1_val, arg2_name, arg2_val) 542 arg1_name, arg1_val, arg2_name, arg2_val)
527 #define TRACE_EVENT_COPY_FLOW_BEGIN0(category, name, id) \ 543 #define TRACE_EVENT_COPY_FLOW_BEGIN0(category_group, name, id) \
528 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \ 544 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \
529 category, name, id, TRACE_EVENT_FLAG_COPY) 545 category_group, name, id, TRACE_EVENT_FLAG_COPY)
530 #define TRACE_EVENT_COPY_FLOW_BEGIN1(category, name, id, arg1_name, arg1_val) \ 546 #define TRACE_EVENT_COPY_FLOW_BEGIN1(category_group, name, id, arg1_name, \
547 arg1_val) \
531 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \ 548 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \
532 category, name, id, TRACE_EVENT_FLAG_COPY, \ 549 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
533 arg1_name, arg1_val) 550 arg1_name, arg1_val)
534 #define TRACE_EVENT_COPY_FLOW_BEGIN2(category, name, id, arg1_name, arg1_val, \ 551 #define TRACE_EVENT_COPY_FLOW_BEGIN2(category_group, name, id, arg1_name, \
535 arg2_name, arg2_val) \ 552 arg1_val, arg2_name, arg2_val) \
536 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \ 553 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_BEGIN, \
537 category, name, id, TRACE_EVENT_FLAG_COPY, \ 554 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
538 arg1_name, arg1_val, arg2_name, arg2_val) 555 arg1_name, arg1_val, arg2_name, arg2_val)
539 556
540 // Records a single FLOW_STEP event for |step| immediately. If the category 557 // Records a single FLOW_STEP event for |step| immediately. If the category
541 // is not enabled, then this does nothing. The |name| and |id| must match the 558 // is not enabled, then this does nothing. The |name| and |id| must match the
542 // FLOW_BEGIN event above. The |step| param identifies this step within the 559 // FLOW_BEGIN event above. The |step| param identifies this step within the
543 // async event. This should be called at the beginning of the next phase of an 560 // async event. This should be called at the beginning of the next phase of an
544 // asynchronous operation. 561 // asynchronous operation.
545 #define TRACE_EVENT_FLOW_STEP0(category, name, id, step) \ 562 #define TRACE_EVENT_FLOW_STEP0(category_group, name, id, step) \
546 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \ 563 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \
547 category, name, id, TRACE_EVENT_FLAG_NONE, "step", step) 564 category_group, name, id, TRACE_EVENT_FLAG_NONE, "step", step)
548 #define TRACE_EVENT_FLOW_STEP1(category, name, id, step, \ 565 #define TRACE_EVENT_FLOW_STEP1(category_group, name, id, step, \
549 arg1_name, arg1_val) \ 566 arg1_name, arg1_val) \
550 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \ 567 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \
551 category, name, id, TRACE_EVENT_FLAG_NONE, "step", step, \ 568 category_group, name, id, TRACE_EVENT_FLAG_NONE, "step", step, \
552 arg1_name, arg1_val) 569 arg1_name, arg1_val)
553 #define TRACE_EVENT_COPY_FLOW_STEP0(category, name, id, step) \ 570 #define TRACE_EVENT_COPY_FLOW_STEP0(category_group, name, id, step) \
554 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \ 571 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \
555 category, name, id, TRACE_EVENT_FLAG_COPY, "step", step) 572 category_group, name, id, TRACE_EVENT_FLAG_COPY, "step", step)
556 #define TRACE_EVENT_COPY_FLOW_STEP1(category, name, id, step, \ 573 #define TRACE_EVENT_COPY_FLOW_STEP1(category_group, name, id, step, \
557 arg1_name, arg1_val) \ 574 arg1_name, arg1_val) \
558 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \ 575 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_STEP, \
559 category, name, id, TRACE_EVENT_FLAG_COPY, "step", step, \ 576 category_group, name, id, TRACE_EVENT_FLAG_COPY, "step", step, \
560 arg1_name, arg1_val) 577 arg1_name, arg1_val)
561 578
562 // Records a single FLOW_END event for "name" immediately. If the category 579 // Records a single FLOW_END event for "name" immediately. If the category
563 // is not enabled, then this does nothing. 580 // is not enabled, then this does nothing.
564 #define TRACE_EVENT_FLOW_END0(category, name, id) \ 581 #define TRACE_EVENT_FLOW_END0(category_group, name, id) \
565 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \ 582 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \
566 category, name, id, TRACE_EVENT_FLAG_NONE) 583 category_group, name, id, TRACE_EVENT_FLAG_NONE)
567 #define TRACE_EVENT_FLOW_END1(category, name, id, arg1_name, arg1_val) \ 584 #define TRACE_EVENT_FLOW_END1(category_group, name, id, arg1_name, arg1_val) \
568 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \ 585 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \
569 category, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val) 586 category_group, name, id, TRACE_EVENT_FLAG_NONE, arg1_name, arg1_val)
570 #define TRACE_EVENT_FLOW_END2(category, name, id, arg1_name, arg1_val, \ 587 #define TRACE_EVENT_FLOW_END2(category_group, name, id, arg1_name, arg1_val, \
571 arg2_name, arg2_val) \ 588 arg2_name, arg2_val) \
572 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \ 589 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \
573 category, name, id, TRACE_EVENT_FLAG_NONE, \ 590 category_group, name, id, TRACE_EVENT_FLAG_NONE, \
574 arg1_name, arg1_val, arg2_name, arg2_val) 591 arg1_name, arg1_val, arg2_name, arg2_val)
575 #define TRACE_EVENT_COPY_FLOW_END0(category, name, id) \ 592 #define TRACE_EVENT_COPY_FLOW_END0(category_group, name, id) \
576 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \ 593 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \
577 category, name, id, TRACE_EVENT_FLAG_COPY) 594 category_group, name, id, TRACE_EVENT_FLAG_COPY)
578 #define TRACE_EVENT_COPY_FLOW_END1(category, name, id, arg1_name, arg1_val) \ 595 #define TRACE_EVENT_COPY_FLOW_END1(category_group, name, id, arg1_name, \
596 arg1_val) \
579 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \ 597 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \
580 category, name, id, TRACE_EVENT_FLAG_COPY, \ 598 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
581 arg1_name, arg1_val) 599 arg1_name, arg1_val)
582 #define TRACE_EVENT_COPY_FLOW_END2(category, name, id, arg1_name, arg1_val, \ 600 #define TRACE_EVENT_COPY_FLOW_END2(category_group, name, id, arg1_name, \
583 arg2_name, arg2_val) \ 601 arg1_val, arg2_name, arg2_val) \
584 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \ 602 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_FLOW_END, \
585 category, name, id, TRACE_EVENT_FLAG_COPY, \ 603 category_group, name, id, TRACE_EVENT_FLAG_COPY, \
586 arg1_name, arg1_val, arg2_name, arg2_val) 604 arg1_name, arg1_val, arg2_name, arg2_val)
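[Editor's note, illustrative only and not part of this patch: a typical set of call sites for the flow macros might look like the sketch below. The "net" category group, the function names, and the "ResponseStarted" step label are invented; the shared |id| is what ties the FLOW_BEGIN, FLOW_STEP, and FLOW_END records together in the trace.]

    #include "base/debug/trace_event.h"

    // Hypothetical asynchronous request whose phases we want connected
    // in the trace viewer via a shared id.
    void StartRequest(unsigned long long request_id) {
      TRACE_EVENT_FLOW_BEGIN0("net", "Request", request_id);
    }
    void OnResponseStarted(unsigned long long request_id) {
      TRACE_EVENT_FLOW_STEP0("net", "Request", request_id, "ResponseStarted");
    }
    void FinishRequest(unsigned long long request_id) {
      TRACE_EVENT_FLOW_END0("net", "Request", request_id);
    }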
587 605
588 // Macros to track the lifetime of arbitrary client objects. 606 // Macros to track the lifetime of arbitrary client objects.
589 // See also TraceTrackableObject. 607 // See also TraceTrackableObject.
590 #define TRACE_EVENT_OBJECT_CREATED_WITH_ID(category, name, id) \ 608 #define TRACE_EVENT_OBJECT_CREATED_WITH_ID(category, name, id) \
591 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_CREATE_OBJECT, \ 609 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_CREATE_OBJECT, \
592 category, name, id, TRACE_EVENT_FLAG_NONE) 610 category, name, id, TRACE_EVENT_FLAG_NONE)
593 611
594 #define TRACE_EVENT_OBJECT_DELETED_WITH_ID(category, name, id) \ 612 #define TRACE_EVENT_OBJECT_DELETED_WITH_ID(category, name, id) \
595 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_DELETE_OBJECT, \ 613 INTERNAL_TRACE_EVENT_ADD_WITH_ID(TRACE_EVENT_PHASE_DELETE_OBJECT, \
596 category, name, id, TRACE_EVENT_FLAG_NONE) 614 category, name, id, TRACE_EVENT_FLAG_NONE)
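[Editor's note, illustrative sketch only: the "cc" category group and the Tile class below are invented. These two macros are typically called from an object's constructor and destructor so the trace shows the object's lifetime; the object pointer serves as the id pairing the CREATE and DELETE events.]

    #include "base/debug/trace_event.h"

    class Tile {
     public:
      Tile() {
        // |this| is the id that pairs the CREATE and DELETE events.
        TRACE_EVENT_OBJECT_CREATED_WITH_ID("cc", "Tile", this);
      }
      ~Tile() {
        TRACE_EVENT_OBJECT_DELETED_WITH_ID("cc", "Tile", this);
      }
    };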
597 615
598 // Implementation detail: trace event macros create temporary variables 616 // Implementation detail: trace event macros create temporary variables
599 // to keep instrumentation overhead low. These macros give each temporary 617 // to keep instrumentation overhead low. These macros give each temporary
600 // variable a unique name based on the line number to prevent name collisions. 618 // variable a unique name based on the line number to prevent name collisions.
601 #define INTERNAL_TRACE_EVENT_UID3(a,b) \ 619 #define INTERNAL_TRACE_EVENT_UID3(a,b) \
602 trace_event_unique_##a##b 620 trace_event_unique_##a##b
603 #define INTERNAL_TRACE_EVENT_UID2(a,b) \ 621 #define INTERNAL_TRACE_EVENT_UID2(a,b) \
604 INTERNAL_TRACE_EVENT_UID3(a,b) 622 INTERNAL_TRACE_EVENT_UID3(a,b)
605 #define INTERNAL_TRACE_EVENT_UID(name_prefix) \ 623 #define INTERNAL_TRACE_EVENT_UID(name_prefix) \
606 INTERNAL_TRACE_EVENT_UID2(name_prefix, __LINE__) 624 INTERNAL_TRACE_EVENT_UID2(name_prefix, __LINE__)
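[Editor's note: to make the expansion concrete, a macro written on, say, line 123 of a source file expands as follows, so two trace macros on different lines never share temporaries. The UID2/UID3 double indirection is the usual preprocessor trick that forces __LINE__ to expand to its numeric value before token pasting; pasting against __LINE__ directly would instead produce trace_event_unique_catstatic__LINE__.]

    // INTERNAL_TRACE_EVENT_UID(catstatic)            // written on line 123
    //   -> INTERNAL_TRACE_EVENT_UID2(catstatic, __LINE__)
    //   -> INTERNAL_TRACE_EVENT_UID3(catstatic, 123)
    //   -> trace_event_unique_catstatic123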
607 625
608 // Implementation detail: internal macro to create static category. 626 // Implementation detail: internal macro to create static category.
609 // No barriers are needed, because this code is designed to operate safely 627 // No barriers are needed, because this code is designed to operate safely
610 // even when the unsigned char* points to garbage data (which may be the case 628 // even when the unsigned char* points to garbage data (which may be the case
611 // on processors without cache coherency). 629 // on processors without cache coherency).
612 #define INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category) \ 630 #define INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category_group) \
613 static TRACE_EVENT_API_ATOMIC_WORD INTERNAL_TRACE_EVENT_UID(atomic) = 0; \ 631 static TRACE_EVENT_API_ATOMIC_WORD INTERNAL_TRACE_EVENT_UID(atomic) = 0; \
614 const unsigned char* INTERNAL_TRACE_EVENT_UID(catstatic) = \ 632 const unsigned char* INTERNAL_TRACE_EVENT_UID(catstatic) = \
615 reinterpret_cast<const unsigned char*>(TRACE_EVENT_API_ATOMIC_LOAD( \ 633 reinterpret_cast<const unsigned char*>(TRACE_EVENT_API_ATOMIC_LOAD( \
616 INTERNAL_TRACE_EVENT_UID(atomic))); \ 634 INTERNAL_TRACE_EVENT_UID(atomic))); \
617 if (!INTERNAL_TRACE_EVENT_UID(catstatic)) { \ 635 if (!INTERNAL_TRACE_EVENT_UID(catstatic)) { \
618 INTERNAL_TRACE_EVENT_UID(catstatic) = \ 636 INTERNAL_TRACE_EVENT_UID(catstatic) = \
619 TRACE_EVENT_API_GET_CATEGORY_ENABLED(category); \ 637 TRACE_EVENT_API_GET_CATEGORY_GROUP_ENABLED(category_group); \
620 TRACE_EVENT_API_ATOMIC_STORE(INTERNAL_TRACE_EVENT_UID(atomic), \ 638 TRACE_EVENT_API_ATOMIC_STORE(INTERNAL_TRACE_EVENT_UID(atomic), \
621 reinterpret_cast<TRACE_EVENT_API_ATOMIC_WORD>( \ 639 reinterpret_cast<TRACE_EVENT_API_ATOMIC_WORD>( \
622 INTERNAL_TRACE_EVENT_UID(catstatic))); \ 640 INTERNAL_TRACE_EVENT_UID(catstatic))); \
623 } 641 }
624 642
625 // Implementation detail: internal macro to create static category and add 643 // Implementation detail: internal macro to create static category and add
626 // event if the category is enabled. 644 // event if the category is enabled.
627 #define INTERNAL_TRACE_EVENT_ADD(phase, category, name, flags, ...) \ 645 #define INTERNAL_TRACE_EVENT_ADD(phase, category_group, name, flags, ...) \
628 do { \ 646 do { \
629 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category); \ 647 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category_group); \
630 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \ 648 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \
631 trace_event_internal::AddTraceEvent( \ 649 trace_event_internal::AddTraceEvent( \
632 phase, INTERNAL_TRACE_EVENT_UID(catstatic), name, \ 650 phase, INTERNAL_TRACE_EVENT_UID(catstatic), name, \
633 trace_event_internal::kNoEventId, flags, ##__VA_ARGS__); \ 651 trace_event_internal::kNoEventId, flags, ##__VA_ARGS__); \
634 } \ 652 } \
635 } while (0) 653 } while (0)
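[Editor's note: for orientation, the public ADD-style macros are thin wrappers over this helper. The wrapper below is an invented name used purely for illustration of how a phase, category group, name, and flags are forwarded; it is not part of this patch. The do/while(0) wrapper keeps the expansion a single statement, so the macro composes safely with unbraced if/else at the call site.]

    // Invented example wrapper (not the real public macro definition).
    #define MY_TRACE_EVENT_INSTANT0(category_group, name) \
        INTERNAL_TRACE_EVENT_ADD(TRACE_EVENT_PHASE_INSTANT, \
            category_group, name, TRACE_EVENT_FLAG_NONE)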
636 654
637 // Implementation detail: internal macro to create static category and add begin 655 // Implementation detail: internal macro to create static category and add begin
638 // event if the category is enabled. Also adds the end event when the scope 656 // event if the category is enabled. Also adds the end event when the scope
639 // ends. 657 // ends.
640 #define INTERNAL_TRACE_EVENT_ADD_SCOPED(category, name, ...) \ 658 #define INTERNAL_TRACE_EVENT_ADD_SCOPED(category_group, name, ...) \
641 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category); \ 659 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category_group); \
642 trace_event_internal::TraceEndOnScopeClose \ 660 trace_event_internal::TraceEndOnScopeClose \
643 INTERNAL_TRACE_EVENT_UID(profileScope); \ 661 INTERNAL_TRACE_EVENT_UID(profileScope); \
644 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \ 662 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \
645 trace_event_internal::AddTraceEvent( \ 663 trace_event_internal::AddTraceEvent( \
646 TRACE_EVENT_PHASE_BEGIN, \ 664 TRACE_EVENT_PHASE_BEGIN, \
647 INTERNAL_TRACE_EVENT_UID(catstatic), \ 665 INTERNAL_TRACE_EVENT_UID(catstatic), \
648 name, trace_event_internal::kNoEventId, \ 666 name, trace_event_internal::kNoEventId, \
649 TRACE_EVENT_FLAG_NONE, ##__VA_ARGS__); \ 667 TRACE_EVENT_FLAG_NONE, ##__VA_ARGS__); \
650 INTERNAL_TRACE_EVENT_UID(profileScope).Initialize( \ 668 INTERNAL_TRACE_EVENT_UID(profileScope).Initialize( \
651 INTERNAL_TRACE_EVENT_UID(catstatic), name); \ 669 INTERNAL_TRACE_EVENT_UID(catstatic), name); \
652 } 670 }
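[Editor's note, illustrative sketch: assuming TRACE_EVENT0 expands to this macro (consistent with the "Used by TRACE_EVENTx macro" note on TraceEndOnScopeClose below), the caller's side looks like the following; the "cc" category group and function names are invented. The BEGIN event is added where the macro appears, and the matching END is added by the TraceEndOnScopeClose destructor when the enclosing scope exits.]

    void Layer::Paint() {
      TRACE_EVENT0("cc", "Layer::Paint");  // BEGIN recorded here (if enabled).
      PaintContents();                     // ...traced work...
    }                                      // END recorded when the scope closes.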
653 671
654 // Implementation detail: internal macro to create static category and add 672 // Implementation detail: internal macro to create static category and add
655 // event if the category is enabled. 673 // event if the category is enabled.
656 #define INTERNAL_TRACE_EVENT_ADD_WITH_ID(phase, category, name, id, flags, \ 674 #define INTERNAL_TRACE_EVENT_ADD_WITH_ID(phase, category_group, name, id, \
657 ...) \ 675 flags, ...) \
658 do { \ 676 do { \
659 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category); \ 677 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category_group); \
660 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \ 678 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \
661 unsigned char trace_event_flags = flags | TRACE_EVENT_FLAG_HAS_ID; \ 679 unsigned char trace_event_flags = flags | TRACE_EVENT_FLAG_HAS_ID; \
662 trace_event_internal::TraceID trace_event_trace_id( \ 680 trace_event_internal::TraceID trace_event_trace_id( \
663 id, &trace_event_flags); \ 681 id, &trace_event_flags); \
664 trace_event_internal::AddTraceEvent( \ 682 trace_event_internal::AddTraceEvent( \
665 phase, INTERNAL_TRACE_EVENT_UID(catstatic), \ 683 phase, INTERNAL_TRACE_EVENT_UID(catstatic), \
666 name, trace_event_trace_id.data(), trace_event_flags, \ 684 name, trace_event_trace_id.data(), trace_event_flags, \
667 ##__VA_ARGS__); \ 685 ##__VA_ARGS__); \
668 } \ 686 } \
669 } while (0) 687 } while (0)
670 688
671 // Implementation detail: internal macro to create static category and add 689 // Implementation detail: internal macro to create static category and add
672 // event if the category is enabled. 690 // event if the category is enabled.
673 #define INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP(phase, category, \ 691 #define INTERNAL_TRACE_EVENT_ADD_WITH_ID_TID_AND_TIMESTAMP(phase, \
674 name, id, thread_id, timestamp, flags, ...) \ 692 category_group, name, id, thread_id, timestamp, flags, ...) \
675 do { \ 693 do { \
676 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category); \ 694 INTERNAL_TRACE_EVENT_GET_CATEGORY_INFO(category_group); \
677 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \ 695 if (*INTERNAL_TRACE_EVENT_UID(catstatic)) { \
678 unsigned char trace_event_flags = flags | TRACE_EVENT_FLAG_HAS_ID; \ 696 unsigned char trace_event_flags = flags | TRACE_EVENT_FLAG_HAS_ID; \
679 trace_event_internal::TraceID trace_event_trace_id( \ 697 trace_event_internal::TraceID trace_event_trace_id( \
680 id, &trace_event_flags); \ 698 id, &trace_event_flags); \
681 trace_event_internal::AddTraceEventWithThreadIdAndTimestamp( \ 699 trace_event_internal::AddTraceEventWithThreadIdAndTimestamp( \
682 phase, INTERNAL_TRACE_EVENT_UID(catstatic), \ 700 phase, INTERNAL_TRACE_EVENT_UID(catstatic), \
683 name, trace_event_trace_id.data(), \ 701 name, trace_event_trace_id.data(), \
684 thread_id, base::TimeTicks::FromInternalValue(timestamp), \ 702 thread_id, base::TimeTicks::FromInternalValue(timestamp), \
685 trace_event_flags, ##__VA_ARGS__); \ 703 trace_event_flags, ##__VA_ARGS__); \
686 } \ 704 } \
686 (...skipping 287 matching lines...)
974 // Used by TRACE_EVENTx macro. Do not use directly. 992 // Used by TRACE_EVENTx macro. Do not use directly.
975 class TRACE_EVENT_API_CLASS_EXPORT TraceEndOnScopeClose { 993 class TRACE_EVENT_API_CLASS_EXPORT TraceEndOnScopeClose {
976 public: 994 public:
977 // Note: members of data_ intentionally left uninitialized. See Initialize. 995 // Note: members of data_ intentionally left uninitialized. See Initialize.
978 TraceEndOnScopeClose() : p_data_(NULL) {} 996 TraceEndOnScopeClose() : p_data_(NULL) {}
979 ~TraceEndOnScopeClose() { 997 ~TraceEndOnScopeClose() {
980 if (p_data_) 998 if (p_data_)
981 AddEventIfEnabled(); 999 AddEventIfEnabled();
982 } 1000 }
983 1001
984 void Initialize(const unsigned char* category_enabled, 1002 void Initialize(const unsigned char* category_group_enabled,
985 const char* name) { 1003 const char* name) {
986 data_.category_enabled = category_enabled; 1004 data_.category_group_enabled = category_group_enabled;
987 data_.name = name; 1005 data_.name = name;
988 p_data_ = &data_; 1006 p_data_ = &data_;
989 } 1007 }
990 1008
991 private: 1009 private:
992 // Add the end event if the category is still enabled. 1010 // Add the end event if the category is still enabled.
993 void AddEventIfEnabled() { 1011 void AddEventIfEnabled() {
994 // Only called when p_data_ is non-null. 1012 // Only called when p_data_ is non-null.
995 if (*p_data_->category_enabled) { 1013 if (*p_data_->category_group_enabled) {
996 TRACE_EVENT_API_ADD_TRACE_EVENT( 1014 TRACE_EVENT_API_ADD_TRACE_EVENT(
997 TRACE_EVENT_PHASE_END, 1015 TRACE_EVENT_PHASE_END,
998 p_data_->category_enabled, 1016 p_data_->category_group_enabled,
999 p_data_->name, kNoEventId, 1017 p_data_->name, kNoEventId,
1000 kZeroNumArgs, NULL, NULL, NULL, 1018 kZeroNumArgs, NULL, NULL, NULL,
1001 TRACE_EVENT_FLAG_NONE); 1019 TRACE_EVENT_FLAG_NONE);
1002 } 1020 }
1003 } 1021 }
1004 1022
1005 // This Data struct workaround is to avoid initializing all the members 1023 // This Data struct workaround is to avoid initializing all the members
1006 // in Data during construction of this object, since this object is always 1024 // in Data during construction of this object, since this object is always
1007 // constructed, even when tracing is disabled. If the members of Data were 1025 // constructed, even when tracing is disabled. If the members of Data were
1008 // members of this class instead, compiler warnings occur about potential 1026 // members of this class instead, compiler warnings occur about potential
1009 // uninitialized accesses. 1027 // uninitialized accesses.
1010 struct Data { 1028 struct Data {
1011 const unsigned char* category_enabled; 1029 const unsigned char* category_group_enabled;
1012 const char* name; 1030 const char* name;
1013 }; 1031 };
1014 Data* p_data_; 1032 Data* p_data_;
1015 Data data_; 1033 Data data_;
1016 }; 1034 };
1017 1035
1018 // Used by TRACE_EVENT_BINARY_EFFICIENTx macro. Do not use directly. 1036 // Used by TRACE_EVENT_BINARY_EFFICIENTx macro. Do not use directly.
1019 class TRACE_EVENT_API_CLASS_EXPORT ScopedTrace { 1037 class TRACE_EVENT_API_CLASS_EXPORT ScopedTrace {
1020 public: 1038 public:
1021 ScopedTrace(TRACE_EVENT_API_ATOMIC_WORD* event_uid, const char* name); 1039 ScopedTrace(TRACE_EVENT_API_ATOMIC_WORD* event_uid, const char* name);
1022 ~ScopedTrace(); 1040 ~ScopedTrace();
1023 1041
1024 private: 1042 private:
1025 const unsigned char* category_enabled_; 1043 const unsigned char* category_enabled_;
1026 const char* name_; 1044 const char* name_;
1027 }; 1045 };
1028 1046
1029 // A support macro for TRACE_EVENT_BINARY_EFFICIENTx 1047 // A support macro for TRACE_EVENT_BINARY_EFFICIENTx
1030 #define INTERNAL_TRACE_EVENT_BINARY_EFFICIENT_ADD_SCOPED( \ 1048 #define INTERNAL_TRACE_EVENT_BINARY_EFFICIENT_ADD_SCOPED( \
1031 category, name, ...) \ 1049 category_group, name, ...) \
1032 static TRACE_EVENT_API_ATOMIC_WORD INTERNAL_TRACE_EVENT_UID(atomic) = 0; \ 1050 static TRACE_EVENT_API_ATOMIC_WORD INTERNAL_TRACE_EVENT_UID(atomic) = 0; \
1033 trace_event_internal::ScopedTrace \ 1051 trace_event_internal::ScopedTrace \
1034 INTERNAL_TRACE_EVENT_UID(profileScope)( \ 1052 INTERNAL_TRACE_EVENT_UID(profileScope)( \
1035 &INTERNAL_TRACE_EVENT_UID(atomic), name); \ 1053 &INTERNAL_TRACE_EVENT_UID(atomic), name); \
1036 1054
1037 // This macro generates less code than TRACE_EVENT0 but is also 1055 // This macro generates less code than TRACE_EVENT0 but is also
1038 // slower to execute when tracing is off. It should generally only be 1056 // slower to execute when tracing is off. It should generally only be
1039 // used with code that is seldom executed or conditionally executed 1057 // used with code that is seldom executed or conditionally executed
1040 // when debugging. 1058 // when debugging.
1041 #define TRACE_EVENT_BINARY_EFFICIENT0(category, name) \ 1059 #define TRACE_EVENT_BINARY_EFFICIENT0(category_group, name) \
1042 INTERNAL_TRACE_EVENT_BINARY_EFFICIENT_ADD_SCOPED(category, name) 1060 INTERNAL_TRACE_EVENT_BINARY_EFFICIENT_ADD_SCOPED(category_group, name)
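[Editor's note, hypothetical call site with an invented category group and function name: the trade-off is a smaller expansion at the call site in exchange for a slower check when tracing is off, which is why it suits rarely executed or debug-only paths. The ScopedTrace local created by the macro lives until the end of the scope, closing the event when it is destroyed.]

    void DumpDebugState() {
      TRACE_EVENT_BINARY_EFFICIENT0("my_debug", "DumpDebugState");
      WriteStateToLog();  // Traced work; the event closes at function exit.
    }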
1043 1061
1044 } // namespace trace_event_internal 1062 } // namespace trace_event_internal
1045 1063
1046 #endif // BASE_DEBUG_TRACE_EVENT_INTERNAL_H_ 1064 #endif // BASE_DEBUG_TRACE_EVENT_INTERNAL_H_