Chromium Code Reviews

Unified Diff: base/memory/shared_memory_allocator.h

Issue 1410213004: Create "persistent memory allocator" for persisting and sharing objects. (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: moved flags to Atomic32 Created 5 years, 1 month ago
Index: base/memory/shared_memory_allocator.h
diff --git a/base/memory/shared_memory_allocator.h b/base/memory/shared_memory_allocator.h
new file mode 100644
index 0000000000000000000000000000000000000000..60eec1271c0658960003bee98ebd470cf1c569d7
--- /dev/null
+++ b/base/memory/shared_memory_allocator.h
@@ -0,0 +1,135 @@
+// Copyright (c) 2015 The Chromium Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style license that can be
+// found in the LICENSE file.
+
+#ifndef BASE_MEMORY_SHARED_MEMORY_ALLOCATOR_H_
+#define BASE_MEMORY_SHARED_MEMORY_ALLOCATOR_H_
+
+#include <stdint.h>
+
+#include "base/atomicops.h"
+#include "base/base_export.h"
+#include "base/macros.h"
+
+namespace base {
+
+// Simple allocator for pieces of a memory block that may be shared across
+// multiple processes.
+//
+// This class provides thread-secure (i.e. safe against other threads or
+// processes that may be compromised and thus have malicious intent)
+// allocation of memory within a designated block, and also a mechanism by
+// which other threads can learn of those allocations along with any
+// additional shared information.
+//
+// There is (currently) no way to release an allocated block of data because
+// doing so would risk invalidating pointers held by other processes and
+// greatly complicate the allocation algorithm.
+//
+// Construction of this object can accept new, clean (i.e. zeroed) memory
+// or previously initialized memory. In the first case, construction must
Alexander Potapenko 2015/11/03 08:12:54 Please run clang-format in order to fix the extra
bcwhite 2015/11/03 16:28:20 It's on my to-do list.
+// be allowed to complete before letting other allocators attach to the same
+// segment. In other words, don't share the segment until at least one
+// allocator has been attached to it.
+//
+// It should be noted that memory doesn't need to actually have zeros written
+// throughout; it just needs to read as zero until something different is
+// written to a location. This is an important distinction as it supports the
Dmitry Vyukov 2015/11/03 14:06:46 Is it really important? Looks more confusing than
bcwhite 2015/11/03 16:28:20 It's important because the histogram data is likel
+// use-case of non-pinned memory, such as from a demand-allocated region by
+// the OS or a memory-mapped file that auto-grows from a starting size of zero.
+class BASE_EXPORT SharedMemoryAllocator {
+ public:
+ struct Iterator {
+ int32_t last;
+ int32_t loop_detector;
+ };
+
+ struct MemoryInfo {
+ int32_t total;
+ int32_t free;
+ };
+
+ // The allocator operates on any arbitrary block of memory. Creation and
+ // sharing of that block with another process is the responsibility of the
+ // caller. The allocator needs to know only the block's |base| address, the
+ // total |size| of the block, and any internal |page| size (zero if not
+ // paged) across which allocations should not span.
+ //
+ // SharedMemoryAllocator does NOT take ownership of this memory block. The
+ // caller must manage it and ensure it stays available throughout the lifetime
+ // of this object.
+ SharedMemoryAllocator(void* base, int32_t size, int32_t page_size);
+ ~SharedMemoryAllocator();
+
+ // Get an object referenced by an |offset|. For safety reasons, the |type|
+ // code and size-of |unused| are compared to ensure the reference is valid
Dmitry Vyukov 2015/11/03 14:06:46 I guess it is s/unused/T/
bcwhite 2015/11/03 16:28:20 Done.
+// and cannot return an object outside of the memory segment. A |type| of
+// zero will match any type, though the size is still checked. NULL is
+// returned if any problem is detected, such as corrupted storage or
+// incorrect parameters. Callers MUST check that the returned value is
+// non-null EVERY TIME before accessing it or risk crashing! Once
+// dereferenced, the pointer is safe to reuse forever.
+ //
+ // NOTE: Though this method will guarantee that an object of the specified
+ // type can be accessed without going outside the bounds of the memory
+// segment, it makes no guarantees about the validity of the data within the
+ // object itself. If it is expected that the contents of the segment could
+ // be compromised with malicious intent, the object must be hardened as well.
+ template<typename T> T* GetType(int32_t offset, int32_t type) {
+ return static_cast<T*>(GetBlockData(offset, type, sizeof(T), false));
+ }
+
+ // Reserve space in the memory segment of the desired |size| and |type|.
+ // A return value of zero indicates the allocation failed, otherwise the
+ // returned offset can be used by any process to get a real pointer via
+ // the GetObject() call.
Dmitry Vyukov 2015/11/03 14:06:46 s/GetObject/GetType/
bcwhite 2015/11/03 16:28:20 Done.
+ int32_t Allocate(int32_t size, int32_t type);
+
+ // Get information about the amount of free space in the allocator. The
+ // amount of free space should be treated as approximate due to overhead
+ // from alignment and metadata, but the value reported will never be less
+ // than what could actually be allocated.
+ void GetMemoryInfo(MemoryInfo* meminfo);
+
+ // Allocated objects can be added to an internal list that can then be
+ // iterated over by other processes.
+ void MakeIterable(int32_t offset);
+
+ // Iterating uses a |state| structure (initialized by CreateIterator) and
+ // returns both the offset reference to the object as well as the |type| of
+ // that object. A zero return value indicates there are currently no more
+ // objects to be found but future attempts can be made without having to
+ // reset the iterator to "first".
+ void CreateIterator(Iterator* state);
+ int32_t GetNextIterable(Iterator* state, int32_t* type);
+
+ // If there is some indication that the shared memory has become corrupted,
+ // calling this will attempt to prevent further damage by indicating to
+ // all processes that something is not as expected.
+ void SetCorrupted();
Alexander Potapenko 2015/11/03 08:12:54 Shall we move these two to the private section?
bcwhite 2015/11/03 16:28:20 IsCorrupted definitely needs to be public. SetCor
+ bool IsCorrupted();
+
+ // Flag set if an allocation has failed because memory was full.
+ bool IsFull();
+
+ private:
+ struct SharedMetadata;
+ struct BlockHeader;
+
+ BlockHeader* GetBlock(int32_t offset, int32_t type, int32_t size,
+ bool special);
+ void* GetBlockData(int32_t offset, int32_t type, int32_t size, bool special);
+
+ SharedMetadata* shared_meta_;
+ char* mem_base_; // char because sizeof(char) is guaranteed 1 -- easy offset calc
+ int32_t mem_size_;
+ int32_t mem_page_;
+ int32_t last_seen_;
+ subtle::Atomic32 corrupted_; // TODO(bcwhite): Use std::atomic<char> when ok.
+
+ DISALLOW_COPY_AND_ASSIGN(SharedMemoryAllocator);
+};
+
+} // namespace base
+
+#endif // BASE_MEMORY_SHARED_MEMORY_ALLOCATOR_H_
