Chromium Code Reviews

Side by Side Diff: Source/wtf/PartitionAlloc.h

Issue 21666003: Enhancements to PartitionAlloc to support threading and arbitrary allocation sizes. (Closed) Base URL: svn://svn.chromium.org/blink/trunk
Patch Set: Created 7 years, 4 months ago
 /*
  * Copyright (C) 2013 Google Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions are
  * met:
  *
  *     * Redistributions of source code must retain the above copyright
  * notice, this list of conditions and the following disclaimer.
  *     * Redistributions in binary form must reproduce the above
 (...skipping 23 matching lines...)
 // DESCRIPTION
 // partitionAlloc() and partitionFree() are approximately analogous
 // to malloc() and free().
 // The main difference is that a PartitionRoot object must be supplied to
 // partitionAlloc(), representing a specific "heap partition" that will
 // be used to satisfy the allocation. Different partitions are guaranteed to
 // exist in separate address spaces, including being separate from the main
 // system heap. If the contained objects are all freed, physical memory is
 // returned to the system but the address space remains reserved.
 //
-// Allocations using this API are only supported from the Blink main thread.
-// This is ASSERT()ed in Debug builds.
+// Allocations and frees against a single partition must be single threaded.
+// Allocations must not exceed a max size, typically 4088 bytes at this time.
+// Allocation sizes must be aligned to the system pointer size.
+// The separate APIs partitionAllocGeneric and partitionFreeGeneric are
+// provided, and they do not have the above three restrictions. In return, you
+// take a small performance hit and are also obliged to keep track of
+// allocation sizes, and pass them to partitionFreeGeneric.
 //
 // This allocator is designed to be extremely fast, thanks to the following
 // properties and design:
 // - Just a single (reasonably predictable) branch in the hot / fast path for
 //   both allocating and (significantly) freeing.
 // - A minimal number of operations in the hot / fast path, with the slow paths
 //   in separate functions, leading to the possibility of inlining.
 // - Each partition page (which is usually multiple physical pages) has a header
 //   structure which allows fast mapping of free() address to an underlying
 //   bucket.
-// - No support for threading yet, leading to simpler design.
+// - Supports a lock-free API for fast performance in single-threaded cases.
 // - The freelist for a given bucket is split across a number of partition
 //   pages, enabling various simple tricks to try and minimize fragmentation.
 // - Fine-grained bucket sizes leading to less waste and better packing.
 //
 // The following security properties are provided at this time:
 // - Linear overflows cannot corrupt into the partition.
 // - Linear overflows cannot corrupt out of the partition.
 // - Freed pages will only be re-used within the partition.
 // - Freed pages will only hold same-sized objects when re-used.
 // - Dereference of freelist pointer will fault.
 //
 // The following security properties could be investigated in the future:
 // - No double-free detection (tcmalloc has some but it may be only a detection
 //   and not a defense).
 // - No randomness in freelist pointers.
 // - Per-object bucketing (instead of per-size) is mostly available at the API,
 //   but not used yet.
 // - No randomness of freelist entries or bucket position.
+// - No specific protection against corruption of page header metadata.

 #include "wtf/Assertions.h"
-#include "wtf/MainThread.h"
+#include "wtf/Atomics.h"
+#include "wtf/FastMalloc.h"

 #include <stdlib.h>

 namespace WTF {

 // Allocation granularity of sizeof(void*) bytes.
-static const size_t kBucketShift = (sizeof(void*) == 8) ? 3 : 2;
-// Support allocations up to kMaxAllocation bytes.
-static const size_t kMaxAllocation = 4096;
-static const size_t kNumBuckets = kMaxAllocation / (1 << kBucketShift);
+static const size_t kAllocationGranularity = sizeof(void*);
+static const size_t kAllocationGranularityMask = kAllocationGranularity - 1;
+static const size_t kBucketShift = (kAllocationGranularity == 8) ? 3 : 2;
+// Supports allocations up to 4088 (one bucket is used for metadata).
+static const size_t kMaxAllocationOrder = 12;
+static const size_t kMaxAllocation = (1 << kMaxAllocationOrder) - kAllocationGranularity;
+static const size_t kNumBuckets = (1 << kMaxAllocationOrder) / (1 << kBucketShift);
 // Underlying partition storage pages are a power-of-two size. It is typical
 // for a partition page to be based on multiple system pages. We rarely deal
 // with system pages. Most references to "page" refer to partition pages. We
 // do also have the concept of "super pages" -- these are the underlying
 // system allocations we make. Super pages can typically fit multiple
 // partition pages inside them. See PageAllocator.h for more details on
 // super pages.
 static const size_t kPartitionPageSize = 1 << 14; // 16KB
 static const size_t kPartitionPageOffsetMask = kPartitionPageSize - 1;
 static const size_t kPartitionPageBaseMask = ~kPartitionPageOffsetMask;
 (...skipping 21 matching lines...)
 };

 struct PartitionBucket {
     PartitionRoot* root;
     PartitionPageHeader* currPage;
     PartitionFreepagelistEntry* freePages;
     size_t numFullPages;
 };

 struct PartitionRoot {
+    int lock;
     PartitionPageHeader seedPage;
     PartitionBucket seedBucket;
     PartitionBucket buckets[kNumBuckets];
     char* nextSuperPage;
     char* nextPartitionPage;
     char* nextPartitionPageEnd;
     bool initialized;
 };

 WTF_EXPORT void partitionAllocInit(PartitionRoot*);
 WTF_EXPORT bool partitionAllocShutdown(PartitionRoot*);

 WTF_EXPORT NEVER_INLINE void* partitionAllocSlowPath(PartitionBucket*);
 WTF_EXPORT NEVER_INLINE void partitionFreeSlowPath(PartitionPageHeader*);
+WTF_EXPORT NEVER_INLINE void* partitionReallocGeneric(PartitionRoot*, void*, size_t, size_t);

 ALWAYS_INLINE PartitionFreelistEntry* partitionFreelistMask(PartitionFreelistEntry* ptr)
 {
     // For now, use a simple / fast mask that guarantees an invalid pointer in
     // case it gets used as a vtable pointer.
     // The one attack we're fully mitigating is where an object is freed and its
     // vtable used where the attacker doesn't get the chance to run allocations
     // between the free and use.
     // We're deliberately not trying to defend against OOB reads or writes.
     uintptr_t masked = ~reinterpret_cast<uintptr_t>(ptr);
 (...skipping 19 matching lines...)
     if (LIKELY(ret != 0)) {
         page->freelistHead = partitionFreelistMask(ret->next);
         page->numAllocatedSlots++;
         return ret;
     }
     return partitionAllocSlowPath(bucket);
 }

 ALWAYS_INLINE void* partitionAlloc(PartitionRoot* root, size_t size)
 {
-    ASSERT(isMainThread());
-    ASSERT(size);
     size_t index = size >> kBucketShift;
     ASSERT(index < kNumBuckets);
     ASSERT(size == index << kBucketShift);
 #if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
     return malloc(size);
 #else
     PartitionBucket* bucket = &root->buckets[index];
     return partitionBucketAlloc(bucket);
 #endif
 }

-ALWAYS_INLINE void partitionFree(void* ptr)
+ALWAYS_INLINE PartitionPageHeader* partitionPointerToPage(void* ptr)
 {
-    ASSERT(isMainThread());
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-    free(ptr);
-#else
     uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(ptr);
     // Checks that the pointer is after the page header. You can't free the
     // page header!
     ASSERT((pointerAsUint & kPartitionPageOffsetMask) >= sizeof(PartitionPageHeader));
     PartitionPageHeader* page = reinterpret_cast<PartitionPageHeader*>(pointerAsUint & kPartitionPageBaseMask);
     // Checks that the pointer is a multiple of bucket size.
     ASSERT(!(((pointerAsUint & kPartitionPageOffsetMask) - sizeof(PartitionPageHeader)) % partitionBucketSize(page->bucket)));
+    return page;
+}
+
+ALWAYS_INLINE void partitionFreeWithPage(void* ptr, PartitionPageHeader* page)
+{
     PartitionFreelistEntry* entry = static_cast<PartitionFreelistEntry*>(ptr);
     entry->next = partitionFreelistMask(page->freelistHead);
     page->freelistHead = entry;
     --page->numAllocatedSlots;
     if (UNLIKELY(page->numAllocatedSlots <= 0))
         partitionFreeSlowPath(page);
+}
+
+ALWAYS_INLINE void partitionFree(void* ptr)
+{
+#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
+    free(ptr);
+#else
+    PartitionPageHeader* page = partitionPointerToPage(ptr);
+    partitionFreeWithPage(ptr, page);
 #endif
 }

+ALWAYS_INLINE void* partitionAllocGeneric(PartitionRoot* root, size_t size)
+{
+#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
+    return malloc(size);
+#else
+    if (LIKELY(size <= kMaxAllocation)) {
+        size += kAllocationGranularityMask;
+        size &= ~kAllocationGranularityMask;
+        while (atomicIncrement(&root->lock) != 1)
+            atomicDecrement(&root->lock);

abarth-chromium 2013/08/02 23:58:04: Is this better than TCMalloc_SpinLock? In any cas

+        void* ret = partitionAlloc(root, size);
+        atomicDecrement(&root->lock);
+        return ret;
+    }
+    return WTF::fastMalloc(size);
+#endif
+}

+ALWAYS_INLINE void partitionFreeGeneric(void* ptr, size_t size)
+{
+#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
+    free(ptr);
+#else
+    if (LIKELY(size <= kMaxAllocation)) {
+        PartitionPageHeader* page = partitionPointerToPage(ptr);
+        PartitionRoot* root = page->bucket->root;
+        while (atomicIncrement(&root->lock) != 1)
+            atomicDecrement(&root->lock);
+        partitionFreeWithPage(ptr, page);
+        atomicDecrement(&root->lock);
+        return;
+    }
+    return WTF::fastFree(ptr);
+#endif
+}

 } // namespace WTF

 using WTF::PartitionRoot;
 using WTF::partitionAllocInit;
 using WTF::partitionAllocShutdown;
 using WTF::partitionAlloc;
 using WTF::partitionFree;
+using WTF::partitionAllocGeneric;
+using WTF::partitionFreeGeneric;
+using WTF::partitionReallocGeneric;

 #endif // WTF_PartitionAlloc_h