Chromium Code Reviews

Side by Side Diff: third_party/WebKit/Source/wtf/PartitionAlloc.h

Issue 1436153002: Apply clang-format with Chromium-style without column limit. (Closed) Base URL: https://chromium.googlesource.com/chromium/src.git@master
Patch Set: Created 5 years, 1 month ago
/*
 * Copyright (C) 2013 Google Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
(...skipping 115 matching lines...)
// We also have the concept of "super pages" -- these are the underlying system
// allocations we make. Super pages contain multiple partition pages inside them
// and include space for a small amount of metadata per partition page.
// Inside super pages, we store "slot spans". A slot span is a contiguous range
// of one or more partition pages that stores allocations of the same size.
// Slot span sizes are adjusted depending on the allocation size, to make sure
// the packing does not lead to unused (wasted) space at the end of the last
// system page of the span. For our current max slot span size of 64KB and other
// constant values, we pack _all_ partitionAllocGeneric() sizes perfectly up
// against the end of a system page.
static const size_t kPartitionPageShift = 14;  // 16KB
static const size_t kPartitionPageSize = 1 << kPartitionPageShift;
static const size_t kPartitionPageOffsetMask = kPartitionPageSize - 1;
static const size_t kPartitionPageBaseMask = ~kPartitionPageOffsetMask;
static const size_t kMaxPartitionPagesPerSlotSpan = 4;

// To avoid fragmentation via never-used freelist entries, we hand out partition
// freelist sections gradually, in units of the dominant system page size.
// What we're actually doing is avoiding filling the full partition page (16 KB)
// with freelist pointers right away. Writing freelist pointers will fault and
// dirty a private page, which is very wasteful if we never actually store
(...skipping 21 matching lines...)
//
// | SuperPageExtentEntry (32B) | PartitionPage of slot span 1 (32B, used) | PartitionPage of slot span 1 (32B, unused) | PartitionPage of slot span 1 (32B, unused) | PartitionPage of slot span 2 (32B, used) | PartitionPage of slot span 3 (32B, used) | ... | PartitionPage of slot span N (32B, unused) |
//
// A direct mapped page has a similar layout so that it looks like a super page:
//
//     | Guard page (4KB) | Metadata page (4KB) | Guard pages (8KB) | Direct mapped object | Guard page (4KB) |
//
// - The metadata page has the following layout:
//
//     | SuperPageExtentEntry (32B) | PartitionPage (32B) | PartitionBucket (32B) | PartitionDirectMapExtent (8B) |
static const size_t kSuperPageShift = 21;  // 2MB
static const size_t kSuperPageSize = 1 << kSuperPageShift;
static const size_t kSuperPageOffsetMask = kSuperPageSize - 1;
static const size_t kSuperPageBaseMask = ~kSuperPageOffsetMask;
static const size_t kNumPartitionPagesPerSuperPage = kSuperPageSize / kPartitionPageSize;

static const size_t kPageMetadataShift = 5;  // 32 bytes per partition page.
static const size_t kPageMetadataSize = 1 << kPageMetadataShift;

// The following kGeneric* constants apply to the generic variants of the API.
// The "order" of an allocation is closely related to the power-of-two size of
// the allocation. More precisely, the order is the bit index of the
// most-significant-bit in the allocation size, where the bit numbering starts
// at index 1 for the least-significant-bit.
// In terms of allocation sizes, order 0 covers 0, order 1 covers 1, order 2
// covers 2->3, order 3 covers 4->7, order 4 covers 8->15.
static const size_t kGenericMinBucketedOrder = 4;  // 8 bytes.
static const size_t kGenericMaxBucketedOrder = 20;  // Largest bucketed order is 1<<(20-1) (storing 512KB -> almost 1MB)
static const size_t kGenericNumBucketedOrders = (kGenericMaxBucketedOrder - kGenericMinBucketedOrder) + 1;
static const size_t kGenericNumBucketsPerOrderBits = 3;  // Eight buckets per order (for the higher orders), e.g. order 8 is 128, 144, 160, ..., 240
static const size_t kGenericNumBucketsPerOrder = 1 << kGenericNumBucketsPerOrderBits;
static const size_t kGenericNumBuckets = kGenericNumBucketedOrders * kGenericNumBucketsPerOrder;
static const size_t kGenericSmallestBucket = 1 << (kGenericMinBucketedOrder - 1);
static const size_t kGenericMaxBucketSpacing = 1 << ((kGenericMaxBucketedOrder - 1) - kGenericNumBucketsPerOrderBits);
static const size_t kGenericMaxBucketed = (1 << (kGenericMaxBucketedOrder - 1)) + ((kGenericNumBucketsPerOrder - 1) * kGenericMaxBucketSpacing);
static const size_t kGenericMinDirectMappedDownsize = kGenericMaxBucketed + 1;  // Limit when downsizing a direct mapping using realloc().
static const size_t kGenericMaxDirectMapped = INT_MAX - kSystemPageSize;
static const size_t kBitsPerSizet = sizeof(void*) * CHAR_BIT;

// Constants for the memory reclaim logic.
static const size_t kMaxFreeableSpans = 16;

// If the total size in bytes of allocated but not committed pages exceeds this
// value (which probably indicates an "out of virtual address space" crash),
// a special crash stack trace is generated at |partitionOutOfMemory|.
// This is to distinguish "out of virtual address space" from
// "out of physical memory" in crash reports.
static const size_t kReasonableSizeOfUnusedPages = 1024 * 1024 * 1024;  // 1GiB

#if ENABLE(ASSERT)
// These two byte values match tcmalloc.
static const unsigned char kUninitializedByte = 0xAB;
static const unsigned char kFreedByte = 0xCD;
static const size_t kCookieSize = 16;  // Handles alignment up to XMM instructions on Intel.
static const unsigned char kCookieValue[kCookieSize] = {0xDE, 0xAD, 0xBE, 0xEF, 0xCA, 0xFE, 0xD0, 0x0D, 0x13, 0x37, 0xF0, 0x05, 0xBA, 0x11, 0xAB, 0x1E};
#endif

struct PartitionBucket;
struct PartitionRootBase;

struct PartitionFreelistEntry {
  PartitionFreelistEntry* next;
};

// Some notes on page states. A page can be in one of four major states:
// 1) Active.
// 2) Full.
// 3) Empty.
// 4) Decommitted.
// An active page has available free slots. A full page has no free slots. An
// empty page has no allocated slots, and a decommitted page is an empty page
// that had its backing memory released back to the system.
// There are two linked lists tracking the pages. The "active page" list is an
// approximation of a list of active pages. It is an approximation because
// full, empty and decommitted pages may briefly be present in the list until
// we next do a scan over it.
// The "empty page" list is an accurate list of pages which are either empty
// or decommitted.
//
// The significant page transitions are:
// - free() will detect when a full page has a slot free()'d and immediately
// return the page to the head of the active list.
// - free() will detect when a page is fully emptied. It _may_ add it to the
// empty list or it _may_ leave it on the active list until a future list scan.
// - malloc() _may_ scan the active page list in order to fulfil the request.
// If it does this, full, empty and decommitted pages encountered will be
// booted out of the active list. If there are no suitable active pages found,
// an empty or decommitted page (if one exists) will be pulled from the empty
// list on to the active list.
struct PartitionPage {
  PartitionFreelistEntry* freelistHead;
  PartitionPage* nextPage;
  PartitionBucket* bucket;
  int16_t numAllocatedSlots;  // Deliberately signed, 0 for empty or decommitted page, -n for full pages.
  uint16_t numUnprovisionedSlots;
  uint16_t pageOffset;
  int16_t emptyCacheIndex;  // -1 if not in the empty cache.
};

struct PartitionBucket {
  PartitionPage* activePagesHead;  // Accessed most in hot path => goes first.
  PartitionPage* emptyPagesHead;
  PartitionPage* decommittedPagesHead;
  uint32_t slotSize;
  uint16_t numSystemPagesPerSlotSpan;
  uint16_t numFullPages;
};

// An "extent" is a span of consecutive superpages. We link to the partition's
// next extent (if there is one) at the very start of a superpage's metadata
// area.
struct PartitionSuperPageExtentEntry {
  PartitionRootBase* root;
  char* superPageBase;
  char* superPagesEnd;
  PartitionSuperPageExtentEntry* next;
};

struct PartitionDirectMapExtent {
  PartitionDirectMapExtent* nextExtent;
  PartitionDirectMapExtent* prevExtent;
  PartitionBucket* bucket;
  size_t mapSize;  // Mapped size, not including guard pages and meta-data.
};

struct WTF_EXPORT PartitionRootBase {
  size_t totalSizeOfCommittedPages;
  size_t totalSizeOfSuperPages;
  size_t totalSizeOfDirectMappedPages;
  // Invariant: totalSizeOfCommittedPages <= totalSizeOfSuperPages + totalSizeOfDirectMappedPages.
  unsigned numBuckets;
  unsigned maxAllocation;
  bool initialized;
  char* nextSuperPage;
  char* nextPartitionPage;
  char* nextPartitionPageEnd;
  PartitionSuperPageExtentEntry* currentExtent;
  PartitionSuperPageExtentEntry* firstExtent;
  PartitionDirectMapExtent* directMapList;
  PartitionPage* globalEmptyPageRing[kMaxFreeableSpans];
  int16_t globalEmptyPageRingIndex;
  uintptr_t invertedSelf;

  static int gInitializedLock;
  static bool gInitialized;
  // gSeedPage is used as a sentinel to indicate that there is no page
  // in the active page list. We could use nullptr, but then we would need
  // to add a null-check branch to the hot allocation path. We want to avoid
  // that.
  static PartitionPage gSeedPage;
  static PartitionBucket gPagedBucket;
  // gOomHandlingFunction is invoked when PartitionAlloc hits OutOfMemory.
  static void (*gOomHandlingFunction)();
};

// Never instantiate a PartitionRoot directly, instead use PartitionAlloc.
struct PartitionRoot : public PartitionRootBase {
  // The PartitionAlloc templated class ensures the following is correct.
  ALWAYS_INLINE PartitionBucket* buckets() { return reinterpret_cast<PartitionBucket*>(this + 1); }
  ALWAYS_INLINE const PartitionBucket* buckets() const { return reinterpret_cast<const PartitionBucket*>(this + 1); }
};

// Never instantiate a PartitionRootGeneric directly, instead use PartitionAllocatorGeneric.
struct PartitionRootGeneric : public PartitionRootBase {
  int lock;
  // Some pre-computed constants.
  size_t orderIndexShifts[kBitsPerSizet + 1];
  size_t orderSubIndexMasks[kBitsPerSizet + 1];
  // The bucket lookup table lets us map a size_t to a bucket quickly.
  // The trailing +1 caters for the overflow case for very large allocation sizes.
  // It is one flat array instead of a 2D array because in the 2D world, we'd
  // need to index array[blah][max+1] which risks undefined behavior.
  PartitionBucket* bucketLookups[((kBitsPerSizet + 1) * kGenericNumBucketsPerOrder) + 1];
  PartitionBucket buckets[kGenericNumBuckets];
};

// Flags for partitionAllocGenericFlags.
enum PartitionAllocFlags {
  PartitionAllocReturnNull = 1 << 0,
};

// Struct used to retrieve total memory usage of a partition. Used by
// PartitionStatsDumper implementation.
struct PartitionMemoryStats {
  size_t totalMmappedBytes;  // Total bytes mmapped from the system.
  size_t totalCommittedBytes;  // Total size of committed pages.
  size_t totalResidentBytes;  // Total bytes provisioned by the partition.
  size_t totalActiveBytes;  // Total active bytes in the partition.
  size_t totalDecommittableBytes;  // Total bytes that could be decommitted.
  size_t totalDiscardableBytes;  // Total bytes that could be discarded.
};

// Struct used to retrieve memory statistics about a partition bucket. Used by
// PartitionStatsDumper implementation.
struct PartitionBucketMemoryStats {
  bool isValid;  // Used to check if the stats are valid.
  bool isDirectMap;  // True if this is a direct mapping; size will not be unique.
  uint32_t bucketSlotSize;  // The size of the slot in bytes.
  uint32_t allocatedPageSize;  // Total size the partition page allocated from the system.
  uint32_t activeBytes;  // Total active bytes used in the bucket.
  uint32_t residentBytes;  // Total bytes provisioned in the bucket.
  uint32_t decommittableBytes;  // Total bytes that could be decommitted.
  uint32_t discardableBytes;  // Total bytes that could be discarded.
  uint32_t numFullPages;  // Number of pages with all slots allocated.
  uint32_t numActivePages;  // Number of pages that have at least one provisioned slot.
  uint32_t numEmptyPages;  // Number of pages that are empty but not decommitted.
  uint32_t numDecommittedPages;  // Number of pages that are empty and decommitted.
};

// Interface that is passed to partitionDumpStats and
// partitionDumpStatsGeneric for using the memory statistics.
class WTF_EXPORT PartitionStatsDumper {
 public:
  // Called to dump total memory used by partition, once per partition.
  virtual void partitionDumpTotals(const char* partitionName, const PartitionMemoryStats*) = 0;

  // Called to dump stats about buckets, for each bucket.
  virtual void partitionsDumpBucketStats(const char* partitionName, const PartitionBucketMemoryStats*) = 0;
};

WTF_EXPORT void partitionAllocGlobalInit(void (*oomHandlingFunction)());
WTF_EXPORT void partitionAllocInit(PartitionRoot*, size_t numBuckets, size_t maxAllocation);
WTF_EXPORT bool partitionAllocShutdown(PartitionRoot*);
WTF_EXPORT void partitionAllocGenericInit(PartitionRootGeneric*);
WTF_EXPORT bool partitionAllocGenericShutdown(PartitionRootGeneric*);

enum PartitionPurgeFlags {
  // Decommitting the ring list of empty pages is reasonably fast.
  PartitionPurgeDecommitEmptyPages = 1 << 0,
  // Discarding unused system pages is slower, because it involves walking all
  // freelists in all active partition pages of all buckets >= system page
  // size. It often frees a similar amount of memory to decommitting the empty
  // pages, though.
  PartitionPurgeDiscardUnusedSystemPages = 1 << 1,
};

WTF_EXPORT void partitionPurgeMemory(PartitionRoot*, int);
WTF_EXPORT void partitionPurgeMemoryGeneric(PartitionRootGeneric*, int);

WTF_EXPORT NEVER_INLINE void* partitionAllocSlowPath(PartitionRootBase*, int, size_t, PartitionBucket*);
WTF_EXPORT NEVER_INLINE void partitionFreeSlowPath(PartitionPage*);
WTF_EXPORT NEVER_INLINE void* partitionReallocGeneric(PartitionRootGeneric*, void*, size_t);

WTF_EXPORT void partitionDumpStats(PartitionRoot*, const char* partitionName, bool isLightDump, PartitionStatsDumper*);
WTF_EXPORT void partitionDumpStatsGeneric(PartitionRootGeneric*, const char* partitionName, bool isLightDump, PartitionStatsDumper*);

class WTF_EXPORT PartitionAllocHooks {
 public:
  typedef void AllocationHook(void* address, size_t);
  typedef void FreeHook(void* address);

  static void setAllocationHook(AllocationHook* hook) { m_allocationHook = hook; }
  static void setFreeHook(FreeHook* hook) { m_freeHook = hook; }

  static void allocationHookIfEnabled(void* address, size_t size) {
    AllocationHook* allocationHook = m_allocationHook;
    if (UNLIKELY(allocationHook != nullptr))
      allocationHook(address, size);
  }

  static void freeHookIfEnabled(void* address) {
    FreeHook* freeHook = m_freeHook;
    if (UNLIKELY(freeHook != nullptr))
      freeHook(address);
  }

  static void reallocHookIfEnabled(void* oldAddress, void* newAddress, size_t size) {
    // Report a reallocation as a free followed by an allocation.
    AllocationHook* allocationHook = m_allocationHook;
    FreeHook* freeHook = m_freeHook;
    if (UNLIKELY(allocationHook && freeHook)) {
      freeHook(oldAddress);
      allocationHook(newAddress, size);
    }
  }

 private:
  // Pointers to hook functions that PartitionAlloc will call on allocation and
  // free if the pointers are non-null.
  static AllocationHook* m_allocationHook;
  static FreeHook* m_freeHook;
};

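For reference, the hook mechanism above is just a pair of optional function pointers consulted on each allocation and free. A standalone sketch of how a client might install a counting hook (stand-in names, not the WTF types):

```cpp
#include <cstddef>

// Stand-ins for PartitionAllocHooks' typedefs.
typedef void AllocationHook(void* address, size_t size);
typedef void FreeHook(void* address);

static AllocationHook* g_allocationHook = nullptr;
static FreeHook* g_freeHook = nullptr;

static size_t g_allocCount = 0;
static size_t g_freeCount = 0;

void countingAllocHook(void* address, size_t size) {
  (void)address;
  (void)size;
  ++g_allocCount;
}
void countingFreeHook(void* address) {
  (void)address;
  ++g_freeCount;
}

// Mirrors allocationHookIfEnabled / freeHookIfEnabled: load the pointer once,
// call only if non-null.
void allocationHookIfEnabled(void* address, size_t size) {
  if (AllocationHook* hook = g_allocationHook)
    hook(address, size);
}
void freeHookIfEnabled(void* address) {
  if (FreeHook* hook = g_freeHook)
    hook(address);
}

size_t demoHookCounts() {
  g_allocationHook = countingAllocHook;
  g_freeHook = countingFreeHook;
  int dummy = 0;
  allocationHookIfEnabled(&dummy, 16);
  allocationHookIfEnabled(&dummy, 32);
  freeHookIfEnabled(&dummy);
  return g_allocCount * 10 + g_freeCount;  // 2 allocations, 1 free
}
```

A memory profiler would install hooks exactly this way, observing every allocation without touching the fast path when no hook is set.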
ALWAYS_INLINE PartitionFreelistEntry* partitionFreelistMask(PartitionFreelistEntry* ptr) {
  // We use bswap on little endian as a fast mask for two reasons:
  // 1) If an object is freed and its vtable used where the attacker doesn't
  // get the chance to run allocations between the free and use, the vtable
  // dereference is likely to fault.
  // 2) If the attacker has a linear buffer overflow and elects to try and
  // corrupt a freelist pointer, partial pointer overwrite attacks are
  // thwarted.
  // For big endian, similar guarantees are arrived at with a negation.
#if CPU(BIG_ENDIAN)
  uintptr_t masked = ~reinterpret_cast<uintptr_t>(ptr);
#else
  uintptr_t masked = bswapuintptrt(reinterpret_cast<uintptr_t>(ptr));
#endif
  return reinterpret_cast<PartitionFreelistEntry*>(masked);
}

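The little-endian mask above is a byte swap, which is its own inverse, so the same function both encodes and decodes freelist pointers. A standalone sketch demonstrating the round-trip property (a portable byte swap stands in for bswapuintptrt):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Byte-swap a uintptr_t (stand-in for the header's bswapuintptrt).
uintptr_t bswapUintptr(uintptr_t x) {
  unsigned char bytes[sizeof(uintptr_t)];
  std::memcpy(bytes, &x, sizeof(x));
  for (size_t i = 0; i < sizeof(x) / 2; ++i) {
    unsigned char tmp = bytes[i];
    bytes[i] = bytes[sizeof(x) - 1 - i];
    bytes[sizeof(x) - 1 - i] = tmp;
  }
  std::memcpy(&x, bytes, sizeof(x));
  return x;
}

// The little-endian freelist mask: applying it twice is the identity, which
// is why one function serves as both encoder and decoder.
uintptr_t freelistMask(uintptr_t ptr) {
  return bswapUintptr(ptr);
}

bool maskRoundTrips(uintptr_t p) {
  return freelistMask(freelistMask(p)) == p;
}
```

A masked pointer puts the low (densely varying) address bits in the high bytes, so dereferencing it without unmasking is very likely to fault, which is the security property the comment describes.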
ALWAYS_INLINE size_t partitionCookieSizeAdjustAdd(size_t size) {
#if ENABLE(ASSERT)
  // Add space for cookies, checking for integer overflow.
  ASSERT(size + (2 * kCookieSize) > size);
  size += 2 * kCookieSize;
#endif
  return size;
}

ALWAYS_INLINE size_t partitionCookieSizeAdjustSubtract(size_t size) {
#if ENABLE(ASSERT)
  // Remove space for cookies.
  ASSERT(size >= 2 * kCookieSize);
  size -= 2 * kCookieSize;
#endif
  return size;
}

ALWAYS_INLINE void* partitionCookieFreePointerAdjust(void* ptr) {
#if ENABLE(ASSERT)
  // The value given to the application is actually just after the cookie.
  ptr = static_cast<char*>(ptr) - kCookieSize;
#endif
  return ptr;
}

ALWAYS_INLINE void partitionCookieWriteValue(void* ptr) {
#if ENABLE(ASSERT)
  unsigned char* cookiePtr = reinterpret_cast<unsigned char*>(ptr);
  for (size_t i = 0; i < kCookieSize; ++i, ++cookiePtr)
    *cookiePtr = kCookieValue[i];
#endif
}

ALWAYS_INLINE void partitionCookieCheckValue(void* ptr) {
#if ENABLE(ASSERT)
  unsigned char* cookiePtr = reinterpret_cast<unsigned char*>(ptr);
  for (size_t i = 0; i < kCookieSize; ++i, ++cookiePtr)
    ASSERT(*cookiePtr == kCookieValue[i]);
#endif
}

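Taken together, the cookie helpers pad each debug allocation with a known byte pattern on both sides and hand the application a pointer just past the leading cookie. A standalone sketch of that layout (illustrative kCookieSize and pattern, not WTF's actual values):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative constants; the real kCookieSize/kCookieValue live in the header.
const size_t kCookieSize = 16;
const unsigned char kCookieValue[kCookieSize] = {
    0xDE, 0xAD, 0xBE, 0xEF, 0xCA, 0xFE, 0xD0, 0x0D,
    0x13, 0x37, 0xF0, 0x0D, 0xC0, 0xFF, 0xEE, 0x42};

size_t cookieSizeAdjustAdd(size_t size) { return size + 2 * kCookieSize; }

void cookieWriteValue(unsigned char* ptr) {
  for (size_t i = 0; i < kCookieSize; ++i)
    ptr[i] = kCookieValue[i];
}

bool cookieCheckValue(const unsigned char* ptr) {
  for (size_t i = 0; i < kCookieSize; ++i)
    if (ptr[i] != kCookieValue[i])
      return false;
  return true;
}

// Lay out [cookie | payload | cookie] in a raw slot and verify both cookies,
// the way the debug build brackets every allocation.
bool demoCookieLayout() {
  const size_t payload = 32;
  unsigned char slot[payload + 2 * kCookieSize];
  assert(sizeof(slot) == cookieSizeAdjustAdd(payload));
  cookieWriteValue(slot);                          // leading cookie
  cookieWriteValue(slot + kCookieSize + payload);  // trailing cookie
  return cookieCheckValue(slot) &&
         cookieCheckValue(slot + kCookieSize + payload);
}
```

A linear overflow of the payload corrupts the trailing cookie first, so partitionCookieCheckValue at free time catches the overwrite before the metadata is trusted.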
518 514 // super page.
519 ALWAYS_INLINE char* partitionSuperPageToMetadataArea(char* ptr) 515 return reinterpret_cast<char*>(pointerAsUint + kSystemPageSize);
520 { 516 }
521 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(ptr); 517
522 ASSERT(!(pointerAsUint & kSuperPageOffsetMask)); 518 ALWAYS_INLINE PartitionPage* partitionPointerToPageNoAlignmentCheck(void* ptr) {
523 // The metadata area is exactly one system page (the guard page) into the 519 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(ptr);
524 // super page. 520 char* superPagePtr = reinterpret_cast<char*>(pointerAsUint & kSuperPageBaseMas k);
525 return reinterpret_cast<char*>(pointerAsUint + kSystemPageSize); 521 uintptr_t partitionPageIndex = (pointerAsUint & kSuperPageOffsetMask) >> kPart itionPageShift;
526 } 522 // Index 0 is invalid because it is the metadata and guard area and
527 523 // the last index is invalid because it is a guard page.
528 ALWAYS_INLINE PartitionPage* partitionPointerToPageNoAlignmentCheck(void* ptr) 524 ASSERT(partitionPageIndex);
529 { 525 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1);
530 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(ptr); 526 PartitionPage* page = reinterpret_cast<PartitionPage*>(partitionSuperPageToMet adataArea(superPagePtr) + (partitionPageIndex << kPageMetadataShift));
531 char* superPagePtr = reinterpret_cast<char*>(pointerAsUint & kSuperPageBaseM ask); 527 // Partition pages in the same slot span can share the same page object. Adjus t for that.
532 uintptr_t partitionPageIndex = (pointerAsUint & kSuperPageOffsetMask) >> kPa rtitionPageShift; 528 size_t delta = page->pageOffset << kPageMetadataShift;
533 // Index 0 is invalid because it is the metadata and guard area and 529 page = reinterpret_cast<PartitionPage*>(reinterpret_cast<char*>(page) - delta) ;
534 // the last index is invalid because it is a guard page. 530 return page;
535 ASSERT(partitionPageIndex); 531 }
536 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); 532
537 PartitionPage* page = reinterpret_cast<PartitionPage*>(partitionSuperPageToM etadataArea(superPagePtr) + (partitionPageIndex << kPageMetadataShift)); 533 ALWAYS_INLINE void* partitionPageToPointer(const PartitionPage* page) {
538 // Partition pages in the same slot span can share the same page object. Adj ust for that. 534 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(page);
539 size_t delta = page->pageOffset << kPageMetadataShift; 535 uintptr_t superPageOffset = (pointerAsUint & kSuperPageOffsetMask);
540 page = reinterpret_cast<PartitionPage*>(reinterpret_cast<char*>(page) - delt a); 536 ASSERT(superPageOffset > kSystemPageSize);
541 return page; 537 ASSERT(superPageOffset < kSystemPageSize + (kNumPartitionPagesPerSuperPage * k PageMetadataSize));
542 } 538 uintptr_t partitionPageIndex = (superPageOffset - kSystemPageSize) >> kPageMet adataShift;
543 539 // Index 0 is invalid because it is the metadata area and the last index is in valid because it is a guard page.
544 ALWAYS_INLINE void* partitionPageToPointer(const PartitionPage* page) 540 ASSERT(partitionPageIndex);
545 { 541 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1);
546 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(page); 542 uintptr_t superPageBase = (pointerAsUint & kSuperPageBaseMask);
547 uintptr_t superPageOffset = (pointerAsUint & kSuperPageOffsetMask); 543 void* ret = reinterpret_cast<void*>(superPageBase + (partitionPageIndex << kPa rtitionPageShift));
548 ASSERT(superPageOffset > kSystemPageSize); 544 return ret;
549 ASSERT(superPageOffset < kSystemPageSize + (kNumPartitionPagesPerSuperPage * kPageMetadataSize)); 545 }
550 uintptr_t partitionPageIndex = (superPageOffset - kSystemPageSize) >> kPageM etadataShift; 546
551 // Index 0 is invalid because it is the metadata area and the last index is invalid because it is a guard page. 547 ALWAYS_INLINE PartitionPage* partitionPointerToPage(void* ptr) {
552 ASSERT(partitionPageIndex); 548 PartitionPage* page = partitionPointerToPageNoAlignmentCheck(ptr);
553 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); 549 // Checks that the pointer is a multiple of bucket size.
554 uintptr_t superPageBase = (pointerAsUint & kSuperPageBaseMask); 550 ASSERT(!((reinterpret_cast<uintptr_t>(ptr) - reinterpret_cast<uintptr_t>(parti tionPageToPointer(page))) % page->bucket->slotSize));
555 void* ret = reinterpret_cast<void*>(superPageBase + (partitionPageIndex << k PartitionPageShift)); 551 return page;
556 return ret; 552 }
557 } 553
558 554 ALWAYS_INLINE bool partitionBucketIsDirectMapped(const PartitionBucket* bucket) {
559 ALWAYS_INLINE PartitionPage* partitionPointerToPage(void* ptr) 555 return !bucket->numSystemPagesPerSlotSpan;
560 { 556 }
561 PartitionPage* page = partitionPointerToPageNoAlignmentCheck(ptr); 557
562 // Checks that the pointer is a multiple of bucket size. 558 ALWAYS_INLINE size_t partitionBucketBytes(const PartitionBucket* bucket) {
563 ASSERT(!((reinterpret_cast<uintptr_t>(ptr) - reinterpret_cast<uintptr_t>(par titionPageToPointer(page))) % page->bucket->slotSize)); 559 return bucket->numSystemPagesPerSlotSpan * kSystemPageSize;
564 return page; 560 }
565 } 561
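The pointer-to-page mapping above is pure bit arithmetic on the super-page layout. A standalone sketch using PartitionAlloc's usual constants (2 MiB super pages, 16 KiB partition pages; treat these values as assumptions for this sketch):

```cpp
#include <cstdint>

// Illustrative layout constants: 2 MiB super pages, 16 KiB partition pages.
const uintptr_t kSuperPageSize = 1 << 21;
const uintptr_t kSuperPageOffsetMask = kSuperPageSize - 1;
const uintptr_t kSuperPageBaseMask = ~kSuperPageOffsetMask;
const uintptr_t kPartitionPageShift = 14;  // 16 KiB partition pages.

// Which partition page within its super page does an address fall in?
uintptr_t partitionPageIndexOf(uintptr_t address) {
  return (address & kSuperPageOffsetMask) >> kPartitionPageShift;
}

// Base address of the super page containing the address.
uintptr_t superPageBaseOf(uintptr_t address) {
  return address & kSuperPageBaseMask;
}
```

Because super pages are aligned to their own size, both mappings are single mask-and-shift operations, which is what keeps partitionPointerToPage cheap enough for the free fast path.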
ALWAYS_INLINE bool partitionBucketIsDirectMapped(const PartitionBucket* bucket) {
  return !bucket->numSystemPagesPerSlotSpan;
}

ALWAYS_INLINE size_t partitionBucketBytes(const PartitionBucket* bucket) {
  return bucket->numSystemPagesPerSlotSpan * kSystemPageSize;
}

ALWAYS_INLINE uint16_t partitionBucketSlots(const PartitionBucket* bucket) {
  return static_cast<uint16_t>(partitionBucketBytes(bucket) / bucket->slotSize);
}

ALWAYS_INLINE size_t* partitionPageGetRawSizePtr(PartitionPage* page) {
  // For single-slot buckets which span more than one partition page, we
  // have some spare metadata space to store the raw allocation size. We
  // can use this to report better statistics.
  PartitionBucket* bucket = page->bucket;
  if (bucket->slotSize <= kMaxSystemPagesPerSlotSpan * kSystemPageSize)
    return nullptr;

  ASSERT((bucket->slotSize % kSystemPageSize) == 0);
  ASSERT(partitionBucketIsDirectMapped(bucket) || partitionBucketSlots(bucket) == 1);
  page++;
  return reinterpret_cast<size_t*>(&page->freelistHead);
}

ALWAYS_INLINE size_t partitionPageGetRawSize(PartitionPage* page) {
  size_t* rawSizePtr = partitionPageGetRawSizePtr(page);
  if (UNLIKELY(rawSizePtr != nullptr))
    return *rawSizePtr;
  return 0;
}

ALWAYS_INLINE PartitionRootBase* partitionPageToRoot(PartitionPage* page) {
  PartitionSuperPageExtentEntry* extentEntry = reinterpret_cast<PartitionSuperPageExtentEntry*>(reinterpret_cast<uintptr_t>(page) & kSystemPageBaseMask);
  return extentEntry->root;
}

ALWAYS_INLINE bool partitionPointerIsValid(void* ptr) {
  PartitionPage* page = partitionPointerToPage(ptr);
  PartitionRootBase* root = partitionPageToRoot(page);
  return root->invertedSelf == ~reinterpret_cast<uintptr_t>(root);
}

ALWAYS_INLINE void* partitionBucketAlloc(PartitionRootBase* root, int flags, size_t size, PartitionBucket* bucket) {
  PartitionPage* page = bucket->activePagesHead;
  // Check that this page is neither full nor freed.
  ASSERT(page->numAllocatedSlots >= 0);
  void* ret = page->freelistHead;
  if (LIKELY(ret != 0)) {
    // If these asserts fire, you probably corrupted memory.
    ASSERT(partitionPointerIsValid(ret));
    // All large allocations must go through the slow path to correctly
    // update the size metadata.
    ASSERT(partitionPageGetRawSize(page) == 0);
    PartitionFreelistEntry* newHead = partitionFreelistMask(static_cast<PartitionFreelistEntry*>(ret)->next);
    page->freelistHead = newHead;
    page->numAllocatedSlots++;
  } else {
    ret = partitionAllocSlowPath(root, flags, size, bucket);
    ASSERT(!ret || partitionPointerIsValid(ret));
  }
#if ENABLE(ASSERT)
  if (!ret)
    return 0;
  // Fill the uninitialized pattern, and write the cookies.
  page = partitionPointerToPage(ret);
  size_t slotSize = page->bucket->slotSize;
  size_t rawSize = partitionPageGetRawSize(page);
  if (rawSize) {
    ASSERT(rawSize == size);
    slotSize = rawSize;
  }
  size_t noCookieSize = partitionCookieSizeAdjustSubtract(slotSize);
  char* charRet = static_cast<char*>(ret);
  // The value given to the application is actually just after the cookie.
  ret = charRet + kCookieSize;
  memset(ret, kUninitializedByte, noCookieSize);
  partitionCookieWriteValue(charRet);
  partitionCookieWriteValue(charRet + kCookieSize + noCookieSize);
#endif
  return ret;
}

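The fast path above pops the head of a per-page freelist whose stored next pointers are masked. A standalone sketch of that pop (here the mask is a bit inversion, as in the big-endian variant; names are illustrative, and the slow-path fallback is elided):

```cpp
#include <cstddef>
#include <cstdint>

// Minimal stand-in for PartitionFreelistEntry.
struct FreelistEntry {
  FreelistEntry* next;  // stored masked in the real allocator
};

// Negation mask (the big-endian variant); like the byte swap, it is an
// involution, so the same function encodes and decodes.
FreelistEntry* mask(FreelistEntry* p) {
  return reinterpret_cast<FreelistEntry*>(~reinterpret_cast<uintptr_t>(p));
}

// Pop the head slot: the stored next pointer is unmasked to become the new head.
void* freelistPop(FreelistEntry** head) {
  FreelistEntry* entry = *head;
  if (!entry)
    return nullptr;  // the real allocator takes the slow path here
  *head = mask(entry->next);
  return entry;
}

bool demoFreelistPop() {
  FreelistEntry slots[2];
  slots[0].next = mask(&slots[1]);  // head slot chains to slots[1], masked
  slots[1].next = mask(nullptr);    // end of list, masked
  FreelistEntry* head = &slots[0];
  void* first = freelistPop(&head);
  void* second = freelistPop(&head);
  return first == &slots[0] && second == &slots[1] && head == nullptr;
}
```

The hot path is thus one load, one mask, and one store, with every branch to metadata updates pushed into partitionAllocSlowPath.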
ALWAYS_INLINE void* partitionAlloc(PartitionRoot* root, size_t size) {
#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
  void* result = malloc(size);
  RELEASE_ASSERT(result);
  return result;
#else
  size_t requestedSize = size;
  size = partitionCookieSizeAdjustAdd(size);
  ASSERT(root->initialized);
  size_t index = size >> kBucketShift;
  ASSERT(index < root->numBuckets);
  ASSERT(size == index << kBucketShift);
  PartitionBucket* bucket = &root->buckets()[index];
  void* result = partitionBucketAlloc(root, 0, size, bucket);
  PartitionAllocHooks::allocationHookIfEnabled(result, requestedSize);
  return result;
#endif  // defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
}

ALWAYS_INLINE void partitionFreeWithPage(void* ptr, PartitionPage* page) {
// If these asserts fire, you probably corrupted memory.
#if ENABLE(ASSERT)
  size_t slotSize = page->bucket->slotSize;
  size_t rawSize = partitionPageGetRawSize(page);
  if (rawSize)
    slotSize = rawSize;
  partitionCookieCheckValue(ptr);
  partitionCookieCheckValue(reinterpret_cast<char*>(ptr) + slotSize - kCookieSize);
  memset(ptr, kFreedByte, slotSize);
#endif
  ASSERT(page->numAllocatedSlots);
  PartitionFreelistEntry* freelistHead = page->freelistHead;
  ASSERT(!freelistHead || partitionPointerIsValid(freelistHead));
  RELEASE_ASSERT_WITH_SECURITY_IMPLICATION(ptr != freelistHead);  // Catches an immediate double free.
  ASSERT_WITH_SECURITY_IMPLICATION(!freelistHead || ptr != partitionFreelistMask(freelistHead->next));  // Look for double free one level deeper in debug.
  PartitionFreelistEntry* entry = static_cast<PartitionFreelistEntry*>(ptr);
  entry->next = partitionFreelistMask(freelistHead);
  page->freelistHead = entry;
  --page->numAllocatedSlots;
  if (UNLIKELY(page->numAllocatedSlots <= 0)) {
    partitionFreeSlowPath(page);
  } else {
    // All single-slot allocations must go through the slow path to
    // correctly update the size metadata.
    ASSERT(partitionPageGetRawSize(page) == 0);
  }
}

ALWAYS_INLINE void partitionFree(void* ptr) {
#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
  free(ptr);
#else
  PartitionAllocHooks::freeHookIfEnabled(ptr);
  ptr = partitionCookieFreePointerAdjust(ptr);
  ASSERT(partitionPointerIsValid(ptr));
  PartitionPage* page = partitionPointerToPage(ptr);
  partitionFreeWithPage(ptr, page);
#endif
}

ALWAYS_INLINE PartitionBucket* partitionGenericSizeToBucket(PartitionRootGeneric* root, size_t size) {
  size_t order = kBitsPerSizet - countLeadingZerosSizet(size);
  // The order index is simply the next few bits after the most significant bit.
  size_t orderIndex = (size >> root->orderIndexShifts[order]) & (kGenericNumBucketsPerOrder - 1);
  // And if the remaining bits are non-zero we must bump the bucket up.
  size_t subOrderIndex = size & root->orderSubIndexMasks[order];
  PartitionBucket* bucket = root->bucketLookups[(order << kGenericNumBucketsPerOrderBits) + orderIndex + !!subOrderIndex];
  ASSERT(!bucket->slotSize || bucket->slotSize >= size);
  ASSERT(!(bucket->slotSize % kGenericSmallestBucket));
  return bucket;
}

ALWAYS_INLINE void* partitionAllocGenericFlags(PartitionRootGeneric* root, int flags, size_t size) {
#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
  void* result = malloc(size);
  RELEASE_ASSERT(result);
  return result;
#else
  ASSERT(root->initialized);
  size_t requestedSize = size;
  size = partitionCookieSizeAdjustAdd(size);
  PartitionBucket* bucket = partitionGenericSizeToBucket(root, size);
  // TODO(bashi): Remove the following RELEASE_ASSERT()s once we find the
  // cause of http://crbug.com/514141
#if OS(ANDROID)
  RELEASE_ASSERT(bucket >= &root->buckets[0] || bucket == &PartitionRootGeneric::gPagedBucket);
  RELEASE_ASSERT(bucket <= &root->buckets[kGenericNumBuckets - 1] || bucket == &PartitionRootGeneric::gPagedBucket);
#endif
  spinLockLock(&root->lock);
  void* ret = partitionBucketAlloc(root, flags, size, bucket);
  spinLockUnlock(&root->lock);
  PartitionAllocHooks::allocationHookIfEnabled(ret, requestedSize);
  return ret;
#endif
}


ALWAYS_INLINE void* partitionAllocGeneric(PartitionRootGeneric* root, size_t size) {
  return partitionAllocGenericFlags(root, 0, size);
}

ALWAYS_INLINE void partitionFreeGeneric(PartitionRootGeneric* root, void* ptr) {
#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
  free(ptr);
#else
  ASSERT(root->initialized);

  if (UNLIKELY(!ptr))
    return;

  PartitionAllocHooks::freeHookIfEnabled(ptr);
  ptr = partitionCookieFreePointerAdjust(ptr);
  ASSERT(partitionPointerIsValid(ptr));
  PartitionPage* page = partitionPointerToPage(ptr);
  spinLockLock(&root->lock);
  partitionFreeWithPage(ptr, page);
  spinLockUnlock(&root->lock);
#endif
}


ALWAYS_INLINE size_t partitionDirectMapSize(size_t size) {
  // Caller must check that the size is not above the kGenericMaxDirectMapped
  // limit before calling. This also guards against integer overflow in the
  // calculation here.
  ASSERT(size <= kGenericMaxDirectMapped);
  return (size + kSystemPageOffsetMask) & kSystemPageBaseMask;
}

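The rounding above is the standard mask trick for aligning a request up to a page boundary. A standalone sketch with an assumed 4 KiB system page:

```cpp
#include <cstddef>

// Assumed page geometry for this sketch: 4 KiB system pages.
const size_t kSystemPageSize = 4096;
const size_t kSystemPageOffsetMask = kSystemPageSize - 1;
const size_t kSystemPageBaseMask = ~kSystemPageOffsetMask;

// Round a size up to a whole number of system pages, as
// partitionDirectMapSize does. Caller must ensure size + mask cannot overflow.
size_t roundUpToSystemPage(size_t size) {
  return (size + kSystemPageOffsetMask) & kSystemPageBaseMask;
}
```

Adding the offset mask before masking off the low bits rounds any non-multiple up and leaves exact multiples unchanged, in one add and one AND.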
793 767 return size;
794 ALWAYS_INLINE size_t partitionAllocActualSize(PartitionRootGeneric* root, size_t size) 768 #else
795 { 769 ASSERT(root->initialized);
796 #if defined(MEMORY_TOOL_REPLACES_ALLOCATOR) 770 size = partitionCookieSizeAdjustAdd(size);
797 return size; 771 PartitionBucket* bucket = partitionGenericSizeToBucket(root, size);
798 #else 772 if (LIKELY(!partitionBucketIsDirectMapped(bucket))) {
799 ASSERT(root->initialized); 773 size = bucket->slotSize;
800 size = partitionCookieSizeAdjustAdd(size); 774 } else if (size > kGenericMaxDirectMapped) {
801 PartitionBucket* bucket = partitionGenericSizeToBucket(root, size); 775 // Too large to allocate => return the size unchanged.
802 if (LIKELY(!partitionBucketIsDirectMapped(bucket))) { 776 } else {
803 size = bucket->slotSize; 777 ASSERT(bucket == &PartitionRootBase::gPagedBucket);
804 } else if (size > kGenericMaxDirectMapped) { 778 size = partitionDirectMapSize(size);
805 // Too large to allocate => return the size unchanged. 779 }
806 } else { 780 return partitionCookieSizeAdjustSubtract(size);
807 ASSERT(bucket == &PartitionRootBase::gPagedBucket); 781 #endif
808 size = partitionDirectMapSize(size); 782 }
809 } 783
810 return partitionCookieSizeAdjustSubtract(size); 784 ALWAYS_INLINE bool partitionAllocSupportsGetSize() {
811 #endif 785 #if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
812 } 786 return false;
ALWAYS_INLINE bool partitionAllocSupportsGetSize() {
#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
  return false;
#else
  return true;
#endif
}

ALWAYS_INLINE size_t partitionAllocGetSize(void* ptr) {
  // No need to lock here. Only 'ptr' being freed by another thread could
  // cause trouble, and the caller is responsible for that not happening.
  ASSERT(partitionAllocSupportsGetSize());
  ptr = partitionCookieFreePointerAdjust(ptr);
  ASSERT(partitionPointerIsValid(ptr));
  PartitionPage* page = partitionPointerToPage(ptr);
  size_t size = page->bucket->slotSize;
  return partitionCookieSizeAdjustSubtract(size);
}

// N (or more accurately, N - sizeof(void*)) represents the largest size in
// bytes that will be handled by a SizeSpecificPartitionAllocator.
// Attempts to partitionAlloc() more than this amount will fail.
template <size_t N>
class SizeSpecificPartitionAllocator {
 public:
  static const size_t kMaxAllocation = N - kAllocationGranularity;
  static const size_t kNumBuckets = N / kAllocationGranularity;
  void init() { partitionAllocInit(&m_partitionRoot, kNumBuckets, kMaxAllocation); }
  bool shutdown() { return partitionAllocShutdown(&m_partitionRoot); }
  ALWAYS_INLINE PartitionRoot* root() { return &m_partitionRoot; }

 private:
  PartitionRoot m_partitionRoot;
  PartitionBucket m_actualBuckets[kNumBuckets];
};


class PartitionAllocatorGeneric {
 public:
  void init() { partitionAllocGenericInit(&m_partitionRoot); }
  bool shutdown() { return partitionAllocGenericShutdown(&m_partitionRoot); }
  ALWAYS_INLINE PartitionRootGeneric* root() { return &m_partitionRoot; }

 private:
  PartitionRootGeneric m_partitionRoot;
};

}  // namespace WTF

using WTF::SizeSpecificPartitionAllocator;
using WTF::PartitionAllocatorGeneric;
using WTF::PartitionRoot;
using WTF::partitionAllocInit;
using WTF::partitionAllocShutdown;
using WTF::partitionAlloc;
using WTF::partitionFree;
using WTF::partitionAllocGeneric;
using WTF::partitionFreeGeneric;
using WTF::partitionReallocGeneric;
using WTF::partitionAllocActualSize;
using WTF::partitionAllocSupportsGetSize;
using WTF::partitionAllocGetSize;

#endif  // WTF_PartitionAlloc_h