Chromium Code Reviews

Side by Side Diff: base/metrics/persistent_memory_allocator.h

Issue 2578323002: Improved support for objects inside persistent memory. (Closed)
Patch Set: rebased Created 3 years, 11 months ago
// Copyright (c) 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#ifndef BASE_METRICS_PERSISTENT_MEMORY_ALLOCATOR_H_
#define BASE_METRICS_PERSISTENT_MEMORY_ALLOCATOR_H_

#include <stdint.h>

#include <atomic>

(...skipping 31 matching lines...)
// Construction of this object can accept new, clean (i.e. zeroed) memory
// or previously initialized memory. In the first case, construction must
// be allowed to complete before letting other allocators attach to the same
// segment. In other words, don't share the segment until at least one
// allocator has been attached to it.
//
// Note that memory not in active use is not accessed so it is possible to
// use virtual memory, including memory-mapped files, as backing storage with
// the OS "pinning" new (zeroed) physical RAM pages only as they are needed.
//
// OBJECTS: Although the allocator can be used in a "malloc" sense, fetching
// character arrays and manipulating that memory manually, the better way is
// generally to use the "Object" methods to create and manage allocations. In
// this way the sizing, type-checking, and construction are all automatic. For
// this to work, however, every type of stored object must define two public
// "constexpr" values, kPersistentTypeId and kExpectedInstanceSize, as such:
//
// struct MyPersistentObjectType {
//     // SHA1(MyPersistentObjectType): Increment this if structure changes!
//     static constexpr uint32_t kPersistentTypeId = 0x3E15F6DE + 1;
//
//     // Expected size for 32/64-bit check. Update this if structure changes!
//     static constexpr size_t kExpectedInstanceSize = 20;
//
//     ...
// };
//
// kPersistentTypeId: This value is an arbitrary identifier that allows the
// identification of these objects in the allocator, including the ability
// to find them via iteration. The number is arbitrary but using the first
// four bytes of the SHA1 hash of the type name means that there shouldn't
// be any conflicts with other types that may also be stored in the memory.
// The fully qualified name (e.g. base::debug::MyPersistentObjectType) could
// be used to generate the hash if the type name seems common. Use a command
// like this to get the hash: echo -n "MyPersistentObjectType" | sha1sum
// If the structure layout changes, ALWAYS increment this number so that
// newer versions of the code don't try to interpret persistent data written
// by older versions with a different layout.
//
// kExpectedInstanceSize: This value is the hard-coded number that matches
// what sizeof(T) would return. By providing it explicitly, the allocator can
// verify that the structure is compatible between both 32-bit and 64-bit
// versions of the code.
//
// Using AllocateObject (and ChangeObject) will zero the memory and then call
// the default constructor for the object. Given that objects are persistent,
// no destructor is ever called automatically, though a caller can explicitly
// call DeleteObject to destruct it and change the type to something indicating
// it is no longer in use.
//
// Though persistent memory segments are transferable between programs built
// for different natural word widths, they CANNOT be exchanged between CPUs
// of different endianness. Attempts to do so will simply see the existing data
// as corrupt and refuse to access any of it.
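As a concrete illustration of the two required fields, here is a hypothetical conforming type. The name, field layout, and id value are invented for this sketch (a real type would take the first four bytes of `sha1sum` of its name as kPersistentTypeId):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical persistent type; the id below is a placeholder, not a real
// SHA1 prefix. A real value would come from: echo -n "MySample" | sha1sum
struct MySample {
  // Already incremented once, per the "increment on layout change" rule.
  static constexpr uint32_t kPersistentTypeId = 0x12345678 + 1;

  // Hard-coded sizeof(MySample) so 32-bit and 64-bit builds can verify
  // that they agree on the layout.
  static constexpr size_t kExpectedInstanceSize = 16;

  uint32_t id = 0;
  uint32_t flags = 0;
  uint64_t last_used_time = 0;
};

// The same check GetAsObject<>() performs at compile time.
static_assert(sizeof(MySample) == MySample::kExpectedInstanceSize,
              "inconsistent size");
```

Note that all fields are fixed-width integer types; pointers and types whose size varies by platform would defeat the 32/64-bit compatibility check.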
class BASE_EXPORT PersistentMemoryAllocator {
 public:
  typedef uint32_t Reference;

  // Iterator for going through all iterable memory records in an allocator.
  // Like the allocator itself, iterators are lock-free and thread-secure.
  // That means that multiple threads can share an iterator and the same
  // reference will not be returned twice.
  //
  // The order of the items returned by an iterator matches the order in which

(...skipping 37 matching lines...)
    // zero if there are no more. GetNext() may still be called again at a
    // later time to retrieve any new allocations that have been added.
    Reference GetNext(uint32_t* type_return);

    // Similar to above but gets the next iterable of a specific |type_match|.
    // This should not be mixed with calls to GetNext() because any allocations
    // skipped here due to a type mismatch will never be returned by later
    // calls to GetNext(), meaning it's possible to completely miss entries.
    Reference GetNextOfType(uint32_t type_match);

    // As above but works using object type.
    template <typename T>
    Reference GetNextOfType() {
      return GetNextOfType(T::kPersistentTypeId);
    }

    // As above but works using objects and returns null if not found.
    template <typename T>
    const T* GetNextOfObject() {
      return GetAsObject<T>(GetNextOfType<T>());
    }

    // Converts references to objects. This is a convenience method so that
    // users of the iterator don't need to also have their own pointer to the
    // allocator over which the iterator runs in order to retrieve objects.
    // Because the iterator is not read/write, only "const" objects can be
    // fetched. Non-const objects can be fetched using the reference on a
    // non-const (external) pointer to the same allocator (or use const_cast
    // to remove the qualifier).
    template <typename T>
    const T* GetAsObject(Reference ref) const {
      return allocator_->GetAsObject<T>(ref);
    }

    // Similar to GetAsObject() but converts references to arrays of things.
    template <typename T>
    const T* GetAsArray(Reference ref, uint32_t type_id, size_t count) const {
      return allocator_->GetAsArray<T>(ref, type_id, count);
    }

    // Helper function to convert a generic pointer back into a reference.

Alexei Svitkine (slow) 2017/01/06 16:29:33: Nit: "Helper function to" is redundant. Just "Converts a generic pointer back into a reference."
bcwhite 2017/01/06 17:27:07: Done.

    Reference GetAsReference(const void* memory, uint32_t type_id) const {
      return allocator_->GetAsReference(memory, type_id);
    }

    // Helper function to convert an object back into a reference.
    template <typename T>
    Reference GetAsReference(const T* obj) const {
      return allocator_->GetAsReference(obj);
    }

   private:
    // Weak-pointer to memory allocator being iterated over.
    const PersistentMemoryAllocator* allocator_;

    // The last record that was returned.
    std::atomic<Reference> last_record_;

    // The number of records found; used for detecting loops.
    std::atomic<uint32_t> record_count_;

    DISALLOW_COPY_AND_ASSIGN(Iterator);
  };
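The shared-iterator guarantee described above (the same reference is never returned twice, even when several threads pull from one iterator) can be modeled with a toy record list. `Record`, index-based references, and the vector backing are all inventions of this sketch; the real iterator walks linked allocations in the shared segment:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy stand-in for an iterable allocation: a reference plus its type id.
struct Record {
  uint32_t ref;
  uint32_t type;
};

class ToyIterator {
 public:
  explicit ToyIterator(const std::vector<Record>* records)
      : records_(records), next_index_(0) {}

  // Returns 0 when iteration is exhausted, like the real GetNext(). The
  // atomic fetch_add is what keeps two threads from claiming one record.
  uint32_t GetNext(uint32_t* type_return) {
    size_t i = next_index_.fetch_add(1);
    if (i >= records_->size()) return 0;
    *type_return = (*records_)[i].type;
    return (*records_)[i].ref;
  }

  // Skips records of other types. Note the caveat from the header comment:
  // records skipped here are not revisited by later GetNext() calls.
  uint32_t GetNextOfType(uint32_t type_match) {
    uint32_t type;
    for (uint32_t ref = GetNext(&type); ref != 0; ref = GetNext(&type)) {
      if (type == type_match) return ref;
    }
    return 0;
  }

 private:
  const std::vector<Record>* records_;
  std::atomic<size_t> next_index_;
};
```

The mixing caveat is visible in the model: a `GetNextOfType()` call advances `next_index_` past mismatching records, so a later `GetNext()` never sees them.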

  // Returned information about the internal state of the heap.
  struct MemoryInfo {
    size_t total;
    size_t free;
  };

  enum : Reference {
    // A common "null" reference value.
    kReferenceNull = 0,

    // A value indicating that the type is in transition. Work is being done
    // on the contents to prepare it for a new type to come.
    kReferenceTransitioning = 0xFFFFFFFF,
  };

  enum : size_t {
    kSizeAny = 1  // Constant indicating that any array size is acceptable.
  };

  // This is the standard file extension (suitable for being passed to the
  // AddExtension() method of base::FilePath) for dumps of persistent memory.
  static const base::FilePath::CharType kFileExtension[];

(...skipping 100 matching lines...)
  // segment, it makes no guarantees of the validity of the data within the
  // object itself. If it is expected that the contents of the segment could
  // be compromised with malicious intent, the object must be hardened as well.
  //
  // Though the persistent data may be "volatile" if it is shared with
  // other processes, such is not necessarily the case. The internal
  // "volatile" designation is discarded so as to not propagate the viral
  // nature of that keyword to the caller. It can add it back, if necessary,
  // based on knowledge of how the allocator is being used.
  template <typename T>
  T* GetAsObject(Reference ref) {
    static_assert(std::is_standard_layout<T>::value, "only standard objects");
    static_assert(!std::is_array<T>::value, "use GetAsArray<>()");
    static_assert(T::kExpectedInstanceSize == sizeof(T), "inconsistent size");
    return const_cast<T*>(reinterpret_cast<volatile T*>(
        GetBlockData(ref, T::kPersistentTypeId, sizeof(T))));
  }
  template <typename T>
  const T* GetAsObject(Reference ref) const {
    static_assert(std::is_standard_layout<T>::value, "only standard objects");
    static_assert(!std::is_array<T>::value, "use GetAsArray<>()");
    static_assert(T::kExpectedInstanceSize == sizeof(T), "inconsistent size");
    return const_cast<const T*>(reinterpret_cast<const volatile T*>(
        GetBlockData(ref, T::kPersistentTypeId, sizeof(T))));
  }
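The compile-time and run-time checks GetAsObject<T>() relies on can be sketched against a toy block header. `BlockHeader`, the `Sample` type, and the raw payload pointer are all invented for this sketch; the real method resolves a Reference to an offset in the shared segment via GetBlockData():

```cpp
#include <cstddef>
#include <cstdint>
#include <type_traits>

// Toy stand-in for the allocator's per-block metadata.
struct BlockHeader {
  uint32_t type_id;  // what kind of object lives here
  size_t size;       // bytes allocated to the block
  void* payload;     // the object's storage
};

template <typename T>
const T* ToyGetAsObject(const BlockHeader* block) {
  // Compile-time checks mirroring the real method.
  static_assert(std::is_standard_layout<T>::value, "only standard objects");
  static_assert(T::kExpectedInstanceSize == sizeof(T), "inconsistent size");
  // Run-time checks: the stored type and size must match the request.
  if (!block || block->type_id != T::kPersistentTypeId) return nullptr;
  if (block->size < sizeof(T)) return nullptr;
  return static_cast<const T*>(block->payload);
}

struct Sample {
  static constexpr uint32_t kPersistentTypeId = 0xABCD0001;
  static constexpr size_t kExpectedInstanceSize = 8;
  uint32_t a = 0;
  uint32_t b = 0;
};
```

A lookup with the wrong type id returns null rather than a misinterpreted pointer, which is what makes the kPersistentTypeId convention safe across versions.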

  // Like GetAsObject but get an array of simple, fixed-size types.
  //
  // Use a |count| of the required number of array elements, or kSizeAny.
  // GetAllocSize() can be used to calculate the upper bound but isn't reliable
  // because padding can make space for extra elements that were not written.
  //
  // Remember that an array of char is a string but may not be NUL terminated.
  //
(...skipping 11 matching lines...)
    static_assert(std::is_fundamental<T>::value, "use GetAsObject<>()");
    return const_cast<const T*>(reinterpret_cast<const volatile T*>(
        GetBlockData(ref, type_id, count * sizeof(T))));
  }

  // Get the corresponding reference for an object held in persistent memory.
  // If the |memory| is not valid or the type does not match, a kReferenceNull
  // result will be returned.
  Reference GetAsReference(const void* memory, uint32_t type_id) const;

  // As above but works with objects allocated from persistent memory.
  template <typename T>
  Reference GetAsReference(const T* obj) const {
    return GetAsReference(obj, T::kPersistentTypeId);
  }

  // Get the number of bytes allocated to a block. This is useful when storing
  // arrays in order to validate the ending boundary. The returned value will
  // include any padding added to achieve the required alignment and so could
  // be larger than given in the original Allocate() request.
  size_t GetAllocSize(Reference ref) const;

  // Access the internal "type" of an object. This generally isn't necessary
  // but can be used to "clear" the type and so effectively mark it as deleted
  // even though the memory stays valid and allocated. Changing the type is
  // an atomic compare/exchange and so requires knowing the existing value.
  // It will return false if the existing type is not what is expected.
  // Changing the type doesn't mean the data is compatible with the new type.
  // It will likely be necessary to clear or reconstruct the type before it
  // can be used.

Alexei Svitkine (slow) 2017/01/06 16:29:33: I'm curious what the use case for ChangeType and ChangeObject is.
bcwhite 2017/01/06 17:27:06: The nature of persistent allocation is such that …

  uint32_t GetType(Reference ref) const;
  bool ChangeType(Reference ref, uint32_t to_type_id, uint32_t from_type_id);

  // Like ChangeType() but gets the "to" type from the object type, clears
  // the memory, and constructs a new object of the desired type just as
  // though it was fresh from AllocateObject<>(). The old type simply ceases
  // to exist; no destructor is called for it.

Alexei Svitkine (slow) 2017/01/06 16:29:34: Is it worth adding a warning here that this may re…
bcwhite 2017/01/06 17:27:06: Done.

  template <typename T>
  T* ChangeObject(Reference ref, uint32_t from_type_id) {
    DCHECK_LE(sizeof(T), GetAllocSize(ref)) << "alloc not big enough for obj";
    // Make sure the memory is appropriate. This won't be used until after
    // the type is changed but checking first avoids the possibility of having
    // to change the type back.
    void* mem = const_cast<void*>(GetBlockData(ref, 0, sizeof(T)));
    if (!mem)
      return nullptr;
    DCHECK_EQ(0U, reinterpret_cast<uintptr_t>(mem) & (ALIGNOF(T) - 1));

Alexei Svitkine (slow) 2017/01/06 16:29:34: Nit: Add a comment about this - why is it expected…
bcwhite 2017/01/06 17:27:07: Done.

    // First change the type to "transitioning" so that there is no race
    // condition with the clearing and construction of the object should
    // another thread be simultaneously iterating over data. This will
    // "acquire" the memory so no changes get reordered before it.
    if (!ChangeType(ref, kReferenceTransitioning, from_type_id))
      return nullptr;
    // Clear the memory so that the property of all memory being zero after an
    // allocation also applies here.
    memset(mem, 0, GetAllocSize(ref));
    // Construct an object of the desired type on this memory, just as if
    // AllocateObject had been called to create it.
    T* obj = new (mem) T();
    // Finally change the type to the desired one. This will "release" all of
    // the changes above and so provide a consistent view to other threads.
    bool success =
        ChangeType(ref, T::kPersistentTypeId, kReferenceTransitioning);
    DCHECK(success);
    return obj;
  }
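The transitioning protocol above (acquire by swapping to a sentinel type, mutate, release by publishing the final type) can be modeled standalone with std::atomic. `Slot`, `Counter`, and the fixed 32-byte storage are inventions of this sketch; the real code operates on references into the shared segment:

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <new>

constexpr uint32_t kToyTransitioning = 0xFFFFFFFF;

// Toy model of one allocated block: an atomic type id plus its storage.
struct Slot {
  std::atomic<uint32_t> type_id{0};
  alignas(8) unsigned char data[32];
};

// The compare/exchange at the heart of ChangeType(): succeeds only if the
// current type is exactly |from_id|, so concurrent callers cannot both win.
bool ToyChangeType(Slot* slot, uint32_t to_id, uint32_t from_id) {
  uint32_t expected = from_id;
  return slot->type_id.compare_exchange_strong(expected, to_id,
                                               std::memory_order_acq_rel);
}

// ChangeObject<T> pattern: park the block as "transitioning" (acquire),
// re-zero so the all-memory-starts-zeroed property holds, default-construct,
// then publish the new type (release).
template <typename T>
T* ToyChangeObject(Slot* slot, uint32_t from_id) {
  if (!ToyChangeType(slot, kToyTransitioning, from_id))
    return nullptr;  // wrong current type, or another thread got there first
  std::memset(slot->data, 0, sizeof(slot->data));
  T* obj = new (slot->data) T();
  // Cannot fail: this thread owns the transitioning state.
  ToyChangeType(slot, T::kPersistentTypeId, kToyTransitioning);
  return obj;
}

struct Counter {
  static constexpr uint32_t kPersistentTypeId = 0x1111;
  uint32_t value = 0;
};
```

An iterator observing the block mid-change sees only the sentinel type, never a half-constructed Counter, which is the race the header comment describes.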
441
338 // Reserve space in the memory segment of the desired |size| and |type_id|. 442 // Reserve space in the memory segment of the desired |size| and |type_id|.
339 // A return value of zero indicates the allocation failed, otherwise the 443 // A return value of zero indicates the allocation failed, otherwise the
340 // returned reference can be used by any process to get a real pointer via 444 // returned reference can be used by any process to get a real pointer via
341 // the GetAsObject() call. 445 // the GetAsObject() call.
342 Reference Allocate(size_t size, uint32_t type_id); 446 Reference Allocate(size_t size, uint32_t type_id);
343 447
448 // Allocate and construct an object in persistent memory. The type must have
449 // both (size_t) kExpectedInstanceSize and (uint32_t) kPersistentTypeId
450 // static constexpr fields that are used to ensure compatibility between
451 // software versions. An optional size parameter can be specified to force
452 // the allocation to be bigger than the size of the object; this is useful
453 // when the last field is actually variable length.
454 template <typename T>
455 T* AllocateObject(size_t size) {
456 if (size < sizeof(T))
457 size = sizeof(T);
458 Reference ref = Allocate(size, T::kPersistentTypeId);
459 void* mem =
460 const_cast<void*>(GetBlockData(ref, T::kPersistentTypeId, size));
461 if (!mem)
462 return nullptr;
463 DCHECK_EQ(0U, reinterpret_cast<uintptr_t>(mem) & (ALIGNOF(T) - 1));
464 return new (mem) T();
465 }
466 template <typename T>
467 T* AllocateObject() {
468 return AllocateObject<T>(sizeof(T));
469 }

  // Deletes an object by destructing it and then changing the type to a
  // different value (default 0).
  template <typename T>
  void DeleteObject(T* obj, uint32_t new_type) {
    // Get the reference for the object.
    Reference ref = GetAsReference<T>(obj);
    // First change the type to "transitioning" so there is no race condition
    // where another thread could find the object through iteration while it
    // is being destructed. This will "acquire" the memory so no changes get
    // reordered before it. It will fail if |ref| is invalid.
    if (!ChangeType(ref, kReferenceTransitioning, T::kPersistentTypeId))
      return;
    // Destruct the object.
    obj->~T();
    // Finally change the type to the desired value. This will "release" all
    // the changes above.
    bool success = ChangeType(ref, new_type, kReferenceTransitioning);
    DCHECK(success);
  }
  template <typename T>
  void DeleteObject(T* obj) {
    DeleteObject<T>(obj, 0);
  }
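The full lifecycle (zeroed placement-construction via AllocateObject, explicit destruction via DeleteObject, memory remaining allocated afterwards) can be sketched the same way. `Block`, `Session`, and the destructor counter are invented here to make the "no destructor is ever called automatically" behavior observable:

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <new>

constexpr uint32_t kToyTransitioningId = 0xFFFFFFFF;

// Invented stand-in for one allocator block.
struct Block {
  std::atomic<uint32_t> type_id{0};
  alignas(8) unsigned char data[32];
};

// Persistent type with an observable destructor for the sketch.
struct Session {
  static constexpr uint32_t kPersistentTypeId = 0xC0FFEE01;
  uint32_t hits = 0;
  static int destructor_calls;
  ~Session() { ++destructor_calls; }
};
int Session::destructor_calls = 0;

// AllocateObject<T> pattern: zero the storage, default-construct in place,
// then publish the type id so iterators can recognize the object.
template <typename T>
T* ToyAllocateObject(Block* block) {
  std::memset(block->data, 0, sizeof(block->data));
  T* obj = new (block->data) T();
  block->type_id.store(T::kPersistentTypeId, std::memory_order_release);
  return obj;
}

// DeleteObject<T> pattern: park as "transitioning", run the destructor
// explicitly (nothing else ever will), then retag. The memory itself stays
// allocated; only the type marks it as no longer in use.
template <typename T>
void ToyDeleteObject(Block* block, T* obj, uint32_t new_type) {
  uint32_t expected = T::kPersistentTypeId;
  if (!block->type_id.compare_exchange_strong(expected, kToyTransitioningId,
                                              std::memory_order_acq_rel))
    return;
  obj->~T();
  block->type_id.store(new_type, std::memory_order_release);
}
```

Retagging to 0 after destruction mirrors DeleteObject's default: the slot reads as untyped but the bytes remain, which is exactly what "persistent" deletion means here.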

  // Allocated objects can be added to an internal list that can then be
  // iterated over by other processes. If an allocated object can be found
  // another way, such as by having its reference within a different object
  // that will be made iterable, then this call is not necessary. This always
  // succeeds unless corruption is detected; check IsCorrupted() to find out.
  // Once an object is made iterable, its position in iteration can never
  // change; new iterable objects will always be added after it in the series.
  // Changing the type does not alter its "iterable" status.
  void MakeIterable(Reference ref);

  // As above but works with an object allocated from persistent memory.
  template <typename T>
  void MakeIterable(const T* obj) {
    MakeIterable(GetAsReference<T>(obj));
  }
510
353 // Get the information about the amount of free space in the allocator. The 511 // Get the information about the amount of free space in the allocator. The
354 // amount of free space should be treated as approximate due to extras from 512 // amount of free space should be treated as approximate due to extras from
355 // alignment and metadata. Concurrent allocations from other threads will 513 // alignment and metadata. Concurrent allocations from other threads will
356 // also make the true amount less than what is reported. 514 // also make the true amount less than what is reported.
357 void GetMemoryInfo(MemoryInfo* meminfo) const; 515 void GetMemoryInfo(MemoryInfo* meminfo) const;
358 516
359 // If there is some indication that the memory has become corrupted, 517 // If there is some indication that the memory has become corrupted,
360 // calling this will attempt to prevent further damage by indicating to 518 // calling this will attempt to prevent further damage by indicating to
361 // all processes that something is not as expected. 519 // all processes that something is not as expected.
362 void SetCorrupt() const; 520 void SetCorrupt() const;
(...skipping 168 matching lines...) Expand 10 before | Expand all | Expand 10 after
 private:
  std::unique_ptr<MemoryMappedFile> mapped_file_;

  DISALLOW_COPY_AND_ASSIGN(FilePersistentMemoryAllocator);
};
#endif  // !defined(OS_NACL)

}  // namespace base

#endif  // BASE_METRICS_PERSISTENT_MEMORY_ALLOCATOR_H_