
File: base/metrics/persistent_memory_allocator.h (patched version)

Issue 2578323002: Improved support for objects inside persistent memory. (Closed)
Patch Set: addressed review comments by asvitkine. Created 3 years, 11 months ago.
// Copyright (c) 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_METRICS_PERSISTENT_MEMORY_ALLOCATOR_H_
#define BASE_METRICS_PERSISTENT_MEMORY_ALLOCATOR_H_

#include <stdint.h>

#include <atomic>
(...skipping 31 matching lines...)
// Construction of this object can accept new, clean (i.e. zeroed) memory
// or previously initialized memory. In the first case, construction must
// be allowed to complete before letting other allocators attach to the same
// segment. In other words, don't share the segment until at least one
// allocator has been attached to it.
//
// Note that memory not in active use is not accessed so it is possible to
// use virtual memory, including memory-mapped files, as backing storage with
// the OS "pinning" new (zeroed) physical RAM pages only as they are needed.
//
// OBJECTS: Although the allocator can be used in a "malloc" sense, fetching
// character arrays and manipulating that memory manually, the better way is
// generally to use the "Object" methods to create and manage allocations. In
// this way the sizing, type-checking, and construction are all automatic. For
// this to work, however, every type of stored object must define two public
// "constexpr" values, kPersistentTypeId and kExpectedInstanceSize, as such:
//
//   struct MyPersistentObjectType {
//       // SHA1(MyPersistentObjectType): Increment this if structure changes!
//       static constexpr uint32_t kPersistentTypeId = 0x3E15F6DE + 1;
//
//       // Expected size for 32/64-bit check. Update this if structure changes!
//       static constexpr size_t kExpectedInstanceSize = 20;
//
//       ...
//   };
//
// kPersistentTypeId: This value is an arbitrary identifier that allows the
//   identification of these objects in the allocator, including the ability
//   to find them via iteration. The number is arbitrary but using the first
//   four bytes of the SHA1 hash of the type name means that there shouldn't
//   be any conflicts with other types that may also be stored in the memory.
//   The fully qualified name (e.g. base::debug::MyPersistentObjectType) could
//   be used to generate the hash if the type name seems common. Use a command
//   like this to get the hash: echo -n "MyPersistentObjectType" | sha1sum
//   If the structure layout changes, ALWAYS increment this number so that
//   newer versions of the code don't try to interpret persistent data written
//   by older versions with a different layout.
//
// kExpectedInstanceSize: This value is the hard-coded number that matches
//   what sizeof(T) would return. By providing it explicitly, the allocator can
//   verify that the structure is compatible between both 32-bit and 64-bit
//   versions of the code.
//
// Using AllocateObject (and ChangeObject) will zero the memory and then call
// the default constructor for the object. Given that objects are persistent,
// no destructor is ever called automatically, though a caller can explicitly
// call DeleteObject to destruct it and change the type to something indicating
// it is no longer in use.
//
// Though persistent memory segments are transferable between programs built
// for different natural word widths, they CANNOT be exchanged between CPUs
// of different endianness. Attempts to do so will simply see the existing data
// as corrupt and refuse to access any of it.
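//
// For illustration only (an editor's sketch; "MyData" and its hash value are
// hypothetical), a type and allocation following the above convention:
//
//   struct MyData {
//       // SHA1(MyData): Increment this if structure changes! (illustrative)
//       static constexpr uint32_t kPersistentTypeId = 0x28D4C1B3 + 1;
//
//       // Expected size for 32/64-bit check (4 + 4 bytes).
//       static constexpr size_t kExpectedInstanceSize = 8;
//
//       std::atomic<int32_t> count;  // fixed-size, shareable across processes
//       int32_t flags;
//   };
//
//   MyData* data = allocator->AllocateObject<MyData>();  // zeroed + constructed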
class BASE_EXPORT PersistentMemoryAllocator {
 public:
  typedef uint32_t Reference;

  // Iterator for going through all iterable memory records in an allocator.
  // Like the allocator itself, iterators are lock-free and thread-safe.
  // That means that multiple threads can share an iterator and the same
  // reference will not be returned twice.
  //
  // The order of the items returned by an iterator matches the order in which
(...skipping 37 matching lines...)
    // zero if there are no more. GetNext() may still be called again at a
    // later time to retrieve any new allocations that have been added.
    Reference GetNext(uint32_t* type_return);

    // Similar to above but gets the next iterable of a specific |type_match|.
    // This should not be mixed with calls to GetNext() because any allocations
    // skipped here due to a type mis-match will never be returned by later
    // calls to GetNext(), meaning it's possible to completely miss entries.
    Reference GetNextOfType(uint32_t type_match);

    // As above but works using object type.
    template <typename T>
    Reference GetNextOfType() {
      return GetNextOfType(T::kPersistentTypeId);
    }

    // As above but works using objects and returns null if not found.
    template <typename T>
    const T* GetNextOfObject() {
      return GetAsObject<T>(GetNextOfType<T>());
    }
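    // For example (an editor's sketch; assumes the Iterator constructor that
    // takes the allocator, declared in the section elided above, plus the
    // hypothetical MyData type from the file comment):
    //
    //   PersistentMemoryAllocator::Iterator iter(allocator);
    //   while (const MyData* data = iter.GetNextOfObject<MyData>()) {
    //     ...process *data...
    //   }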

    // Converts references to objects. This is a convenience method so that
    // users of the iterator don't need to also have their own pointer to the
    // allocator over which the iterator runs in order to retrieve objects.
    // Because the iterator is not read/write, only "const" objects can be
    // fetched. Non-const objects can be fetched using the reference on a
    // non-const (external) pointer to the same allocator (or use const_cast
    // to remove the qualifier).
    template <typename T>
    const T* GetAsObject(Reference ref) const {
      return allocator_->GetAsObject<T>(ref);
    }

    // Similar to GetAsObject() but converts references to arrays of things.
    template <typename T>
    const T* GetAsArray(Reference ref, uint32_t type_id, size_t count) const {
      return allocator_->GetAsArray<T>(ref, type_id, count);
    }

    // Convert a generic pointer back into a reference. A null reference will
    // be returned if |memory| is not inside the persistent segment or does not
    // point to an object of the specified |type_id|.
    Reference GetAsReference(const void* memory, uint32_t type_id) const {
      return allocator_->GetAsReference(memory, type_id);
    }

    // As above but convert an object back into a reference.
    template <typename T>
    Reference GetAsReference(const T* obj) const {
      return allocator_->GetAsReference(obj);
    }

   private:
    // Weak-pointer to memory allocator being iterated over.
    const PersistentMemoryAllocator* allocator_;

    // The last record that was returned.
    std::atomic<Reference> last_record_;

    // The number of records found; used for detecting loops.
    std::atomic<uint32_t> record_count_;

    DISALLOW_COPY_AND_ASSIGN(Iterator);
  };

  // Returned information about the internal state of the heap.
  struct MemoryInfo {
    size_t total;
    size_t free;
  };

  enum : Reference {
    // A common "null" reference value.
    kReferenceNull = 0,

    // A value indicating that the type is in transition. Work is being done
    // on the contents to prepare it for a new type to come.
    kReferenceTransitioning = 0xFFFFFFFF,
  };

  enum : size_t {
    kSizeAny = 1  // Constant indicating that any array size is acceptable.
  };

  // This is the standard file extension (suitable for being passed to the
  // AddExtension() method of base::FilePath) for dumps of persistent memory.
  static const base::FilePath::CharType kFileExtension[];
(...skipping 100 matching lines...)
  // segment, it makes no guarantees of the validity of the data within the
  // object itself. If it is expected that the contents of the segment could
  // be compromised with malicious intent, the object must be hardened as well.
  //
  // Though the persistent data may be "volatile" if it is shared with
  // other processes, such is not necessarily the case. The internal
  // "volatile" designation is discarded so as to not propagate the viral
  // nature of that keyword to the caller. It can add it back, if necessary,
  // based on knowledge of how the allocator is being used.
  template <typename T>
  T* GetAsObject(Reference ref) {
    static_assert(std::is_standard_layout<T>::value, "only standard objects");
    static_assert(!std::is_array<T>::value, "use GetAsArray<>()");
    static_assert(T::kExpectedInstanceSize == sizeof(T), "inconsistent size");
    return const_cast<T*>(reinterpret_cast<volatile T*>(
        GetBlockData(ref, T::kPersistentTypeId, sizeof(T))));
  }
  template <typename T>
  const T* GetAsObject(Reference ref) const {
    static_assert(std::is_standard_layout<T>::value, "only standard objects");
    static_assert(!std::is_array<T>::value, "use GetAsArray<>()");
    static_assert(T::kExpectedInstanceSize == sizeof(T), "inconsistent size");
    return const_cast<const T*>(reinterpret_cast<const volatile T*>(
        GetBlockData(ref, T::kPersistentTypeId, sizeof(T))));
  }
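  // For example (an editor's sketch; |ref| might have been found via an
  // Iterator or passed from another process sharing the segment):
  //
  //   MyData* data = allocator->GetAsObject<MyData>(ref);
  //   if (!data) {
  //     ...|ref| was invalid or referred to a different type...
  //   }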

  // Like GetAsObject but get an array of simple, fixed-size types.
  //
  // Use a |count| of the required number of array elements, or kSizeAny.
  // GetAllocSize() can be used to calculate the upper bound but isn't reliable
  // because padding can make space for extra elements that were not written.
  //
  // Remember that an array of char is a string but may not be NUL terminated.
  //
(...skipping 11 matching lines...)
    static_assert(std::is_fundamental<T>::value, "use GetAsObject<>()");
    return const_cast<const T*>(reinterpret_cast<const volatile T*>(
        GetBlockData(ref, type_id, count * sizeof(T))));
  }
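  // For example (an editor's sketch; kMyStringType is a hypothetical type-id
  // constant for a character-array allocation):
  //
  //   const char* chars = allocator->GetAsArray<char>(
  //       ref, kMyStringType, PersistentMemoryAllocator::kSizeAny);
  //   size_t alloc_size = allocator->GetAllocSize(ref);  // upper bound only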

  // Get the corresponding reference for an object held in persistent memory.
  // If the |memory| is not valid or the type does not match, a kReferenceNull
  // result will be returned.
  Reference GetAsReference(const void* memory, uint32_t type_id) const;

  // As above but works with objects allocated from persistent memory.
  template <typename T>
  Reference GetAsReference(const T* obj) const {
    return GetAsReference(obj, T::kPersistentTypeId);
  }

  // Get the number of bytes allocated to a block. This is useful when storing
  // arrays in order to validate the ending boundary. The returned value will
  // include any padding added to achieve the required alignment and so could
  // be larger than given in the original Allocate() request.
  size_t GetAllocSize(Reference ref) const;

  // Access the internal "type" of an object. This generally isn't necessary
  // but can be used to "clear" the type and so effectively mark it as deleted
  // even though the memory stays valid and allocated. Changing the type is
  // an atomic compare/exchange and so requires knowing the existing value.
  // It will return false if the existing type is not what is expected.
  // Changing the type doesn't mean the data is compatible with the new type.
  // It will likely be necessary to clear or reconstruct the data before it
  // can be used. Changing the type WILL NOT invalidate existing pointers to
  // the data, either in this process or others, so changing the data structure
  // could have unpredictable results. USE WITH CARE!
  uint32_t GetType(Reference ref) const;
  bool ChangeType(Reference ref, uint32_t to_type_id, uint32_t from_type_id);
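  // For example, retiring an object without freeing its memory (an editor's
  // sketch; kTypeIdUnused is a hypothetical "no longer in use" type-id):
  //
  //   if (allocator->ChangeType(ref, kTypeIdUnused,
  //                             MyData::kPersistentTypeId)) {
  //     ...GetAsObject<MyData>(ref) will now return null...
  //   }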

  // Like ChangeType() but gets the "to" type from the object type, clears
  // the memory, and constructs a new object of the desired type just as
  // though it was fresh from AllocateObject<>(). The old type simply ceases
  // to exist; no destructor is called for it. Calling this will not invalidate
  // existing pointers to the object, either in this process or others, so
  // changing the object could have unpredictable results. USE WITH CARE!
  template <typename T>
  T* ChangeObject(Reference ref, uint32_t from_type_id) {
    DCHECK_LE(sizeof(T), GetAllocSize(ref)) << "alloc not big enough for obj";
    // Make sure the memory is appropriate. This won't be used until after
    // the type is changed but checking first avoids the possibility of having
    // to change the type back.
    void* mem = const_cast<void*>(GetBlockData(ref, 0, sizeof(T)));
    if (!mem)
      return nullptr;
    // Ensure the allocator's internal alignment is sufficient for this object.
    // This protects against coding errors in the allocator.
    DCHECK_EQ(0U, reinterpret_cast<uintptr_t>(mem) & (ALIGNOF(T) - 1));
    // First change the type to "transitioning" so that there is no race
    // condition with the clearing and construction of the object should
    // another thread be simultaneously iterating over data. This will
    // "acquire" the memory so no changes get reordered before it.
    if (!ChangeType(ref, kReferenceTransitioning, from_type_id))
      return nullptr;
    // Clear the memory so that the property of all memory being zero after an
    // allocation also applies here.
    memset(mem, 0, GetAllocSize(ref));
    // Construct an object of the desired type on this memory, just as if
    // AllocateObject had been called to create it.
    T* obj = new (mem) T();
    // Finally change the type to the desired one. This will "release" all of
    // the changes above and so provide a consistent view to other threads.
    bool success =
        ChangeType(ref, T::kPersistentTypeId, kReferenceTransitioning);
    DCHECK(success);
    return obj;
  }
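  // For example, repurposing an allocation in place (an editor's sketch;
  // MyOldData and MyNewData are hypothetical types following the documented
  // convention, with sizeof(MyNewData) no larger than the allocation):
  //
  //   MyNewData* new_data =
  //       allocator->ChangeObject<MyNewData>(ref, MyOldData::kPersistentTypeId);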

  // Reserve space in the memory segment of the desired |size| and |type_id|.
  // A return value of zero indicates the allocation failed, otherwise the
  // returned reference can be used by any process to get a real pointer via
  // the GetAsObject() call.
  Reference Allocate(size_t size, uint32_t type_id);

  // Allocate and construct an object in persistent memory. The type must have
  // both (size_t) kExpectedInstanceSize and (uint32_t) kPersistentTypeId
  // static constexpr fields that are used to ensure compatibility between
  // software versions. An optional size parameter can be specified to force
  // the allocation to be bigger than the size of the object; this is useful
  // when the last field is actually variable length.
  template <typename T>
  T* AllocateObject(size_t size) {
    if (size < sizeof(T))
      size = sizeof(T);
    Reference ref = Allocate(size, T::kPersistentTypeId);
    void* mem =
        const_cast<void*>(GetBlockData(ref, T::kPersistentTypeId, size));
    if (!mem)
      return nullptr;
    DCHECK_EQ(0U, reinterpret_cast<uintptr_t>(mem) & (ALIGNOF(T) - 1));
    return new (mem) T();
  }
  template <typename T>
  T* AllocateObject() {
    return AllocateObject<T>(sizeof(T));
  }
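  // For example, allocating a record with a variable-length tail (an editor's
  // sketch; MyRecord is a hypothetical type whose trailing char array holds
  // the first byte of the name):
  //
  //   MyRecord* record =
  //       allocator->AllocateObject<MyRecord>(sizeof(MyRecord) + name_size);
  //   if (record)
  //     memcpy(record->name, name, name_size);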

  // Deletes an object by destructing it and then changing the type to a
  // different value (default 0).
  template <typename T>
  void DeleteObject(T* obj, uint32_t new_type) {
    // Get the reference for the object.
    Reference ref = GetAsReference<T>(obj);
    // First change the type to "transitioning" so there is no race condition
    // where another thread could find the object through iteration while it
    // is being destructed. This will "acquire" the memory so no changes get
    // reordered before it. It will fail if |ref| is invalid.
    if (!ChangeType(ref, kReferenceTransitioning, T::kPersistentTypeId))
      return;
    // Destruct the object.
    obj->~T();
    // Finally change the type to the desired value. This will "release" all
    // the changes above.
    bool success = ChangeType(ref, new_type, kReferenceTransitioning);
    DCHECK(success);
  }
  template <typename T>
  void DeleteObject(T* obj) {
    DeleteObject<T>(obj, 0);
  }
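  // For example (an editor's sketch): per the description above, deletion
  // destructs the object and changes its type; the underlying allocation
  // itself remains in the segment:
  //
  //   allocator->DeleteObject(data);  // |data| must not be used afterward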

  // Allocated objects can be added to an internal list that can then be
  // iterated over by other processes. If an allocated object can be found
  // another way, such as by having its reference within a different object
  // that will be made iterable, then this call is not necessary. This always
  // succeeds unless corruption is detected; check IsCorrupted() to find out.
  // Once an object is made iterable, its position in iteration can never
  // change; new iterable objects will always be added after it in the series.
  // Changing the type does not alter its "iterable" status.
  void MakeIterable(Reference ref);

  // As above but works with an object allocated from persistent memory.
  template <typename T>
  void MakeIterable(const T* obj) {
    MakeIterable(GetAsReference<T>(obj));
  }
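  // For example (an editor's sketch, continuing the AllocateObject example
  // from the file comment):
  //
  //   allocator->MakeIterable(data);  // other processes can now find |data|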

  // Get the information about the amount of free space in the allocator. The
  // amount of free space should be treated as approximate due to extras from
  // alignment and metadata. Concurrent allocations from other threads will
  // also make the true amount less than what is reported.
  void GetMemoryInfo(MemoryInfo* meminfo) const;

  // If there is some indication that the memory has become corrupted,
  // calling this will attempt to prevent further damage by indicating to
  // all processes that something is not as expected.
  void SetCorrupt() const;
(...skipping 168 matching lines...)
 private:
  std::unique_ptr<MemoryMappedFile> mapped_file_;

  DISALLOW_COPY_AND_ASSIGN(FilePersistentMemoryAllocator);
};
#endif  // !defined(OS_NACL)

}  // namespace base

#endif  // BASE_METRICS_PERSISTENT_MEMORY_ALLOCATOR_H_
