| OLD | NEW |
| 1 /* | 1 /* |
| 2 * Copyright (C) 2013 Google Inc. All rights reserved. | 2 * Copyright (C) 2013 Google Inc. All rights reserved. |
| 3 * | 3 * |
| 4 * Redistribution and use in source and binary forms, with or without | 4 * Redistribution and use in source and binary forms, with or without |
| 5 * modification, are permitted provided that the following conditions are | 5 * modification, are permitted provided that the following conditions are |
| 6 * met: | 6 * met: |
| 7 * | 7 * |
| 8 * * Redistributions of source code must retain the above copyright | 8 * * Redistributions of source code must retain the above copyright |
| 9 * notice, this list of conditions and the following disclaimer. | 9 * notice, this list of conditions and the following disclaimer. |
| 10 * * Redistributions in binary form must reproduce the above | 10 * * Redistributions in binary form must reproduce the above |
| (...skipping 48 matching lines...) |
| 59 // And for partitionAllocGeneric(): | 59 // And for partitionAllocGeneric(): |
| 60 // - Multi-threaded use against a single partition is ok; locking is handled. | 60 // - Multi-threaded use against a single partition is ok; locking is handled. |
| 61 // - Allocations of any arbitrary size can be handled (subject to a limit of | 61 // - Allocations of any arbitrary size can be handled (subject to a limit of |
| 62 // INT_MAX bytes for security reasons). | 62 // INT_MAX bytes for security reasons). |
| 63 // - Bucketing is by approximate size, for example an allocation of 4000 bytes | 63 // - Bucketing is by approximate size, for example an allocation of 4000 bytes |
| 64 // might be placed into a 4096-byte bucket. Bucket sizes are chosen to try and | 64 // might be placed into a 4096-byte bucket. Bucket sizes are chosen to try and |
| 65 // keep worst-case waste to ~10%. | 65 // keep worst-case waste to ~10%. |
| 66 // | 66 // |
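As a quick orientation for reviewers, a minimal usage sketch of the generic API described above. It assumes the PartitionAllocatorGeneric wrapper mentioned later in this file, with init()/root() members and partitionAllocGeneric/partitionFreeGeneric taking a root pointer and a size; those exact signatures are not shown in this hunk, so treat the details as illustrative only.

    // Illustrative only -- exact signatures are defined elsewhere in this file.
    static PartitionAllocatorGeneric allocator;
    allocator.init();  // assumed one-time initialization
    void* p = partitionAllocGeneric(allocator.root(), 4000);  // ~4096-byte bucket
    partitionFreeGeneric(allocator.root(), p);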
| 67 // The allocators are designed to be extremely fast, thanks to the following | 67 // The allocators are designed to be extremely fast, thanks to the following |
| 68 // properties and design: | 68 // properties and design: |
| 69 // - Just two single (reasonably predicatable) branches in the hot / fast path for | 69 // - Just two single (reasonably predicatable) branches in the hot / fast path |
| 70 // both allocating and (significantly) freeing. | 70 // for both allocating and (significantly) freeing. |
| 71 // - A minimal number of operations in the hot / fast path, with the slow paths | 71 // - A minimal number of operations in the hot / fast path, with the slow paths |
| 72 // in separate functions, leading to the possibility of inlining. | 72 // in separate functions, leading to the possibility of inlining. |
| 73 // - Each partition page (which is usually multiple physical pages) has a | 73 // - Each partition page (which is usually multiple physical pages) has a |
| 74 // metadata structure which allows fast mapping of free() address to an | 74 // metadata structure which allows fast mapping of free() address to an |
| 75 // underlying bucket. | 75 // underlying bucket. |
| 76 // - Supports a lock-free API for fast performance in single-threaded cases. | 76 // - Supports a lock-free API for fast performance in single-threaded cases. |
| 77 // - The freelist for a given bucket is split across a number of partition | 77 // - The freelist for a given bucket is split across a number of partition |
| 78 // pages, enabling various simple tricks to try and minimize fragmentation. | 78 // pages, enabling various simple tricks to try and minimize fragmentation. |
| 79 // - Fine-grained bucket sizes leading to less waste and better packing. | 79 // - Fine-grained bucket sizes leading to less waste and better packing. |
| 80 // | 80 // |
| 81 // The following security properties could be investigated in the future: | 81 // The following security properties could be investigated in the future: |
| 82 // - Per-object bucketing (instead of per-size) is mostly available at the API, | 82 // - Per-object bucketing (instead of per-size) is mostly available at the API, |
| 83 // but not used yet. | 83 // but not used yet. |
| 84 // - No randomness of freelist entries or bucket position. | 84 // - No randomness of freelist entries or bucket position. |
| 85 // - Better checking for wild pointers in free(). | 85 // - Better checking for wild pointers in free(). |
| 86 // - Better freelist masking function to guarantee fault on 32-bit. | 86 // - Better freelist masking function to guarantee fault on 32-bit. |
| 87 | 87 |
| 88 #include "wtf/Assertions.h" | 88 #include "wtf/Assertions.h" |
| (...skipping 53 matching lines...) |
| 142 | 142 |
| 143 // We reserve virtual address space in 2MB chunks (aligned to 2MB as well). | 143 // We reserve virtual address space in 2MB chunks (aligned to 2MB as well). |
| 144 // These chunks are called "super pages". We do this so that we can store | 144 // These chunks are called "super pages". We do this so that we can store |
| 145 // metadata in the first few pages of each 2MB aligned section. This leads to | 145 // metadata in the first few pages of each 2MB aligned section. This leads to |
| 146 // a very fast free(). We specifically choose 2MB because this virtual address | 146 // a very fast free(). We specifically choose 2MB because this virtual address |
| 147 // block represents a full but single PTE allocation on ARM, ia32 and x64. | 147 // block represents a full but single PTE allocation on ARM, ia32 and x64. |
| 148 // | 148 // |
| 149 // The layout of the super page is as follows. The sizes below are the same | 149 // The layout of the super page is as follows. The sizes below are the same |
| 150 // for 32 bit and 64 bit. | 150 // for 32 bit and 64 bit. |
| 151 // | 151 // |
| 152 // | Guard page (4KB) | Metadata page (4KB) | Guard pages (8KB) | Slot span | Slot span | ... | Slot span | Guard page (4KB) | | 152 // | Guard page (4KB) | |
| 153 // | Metadata page (4KB) | |
| 154 // | Guard pages (8KB) | |
| 155 // | Slot span | |
| 156 // | Slot span | |
| 157 // | ... | |
| 158 // | Slot span | |
| 159 // | Guard page (4KB) | |
| 153 // | 160 // |
| 154 // - Each slot span is a contiguous range of one or more PartitionPages. | 161 // - Each slot span is a contiguous range of one or more PartitionPages. |
| 155 // - The metadata page has the following format. Note that the PartitionPage | 162 // - The metadata page has the following format. Note that the PartitionPage |
| 156 // that is not at the head of a slot span is "unused". In other words, | 163 // that is not at the head of a slot span is "unused". In other words, |
| 157 // the metadata for the slot span is stored only in the first PartitionPage | 164 // the metadata for the slot span is stored only in the first PartitionPage |
| 158 // of the slot span. Metadata accesses to other PartitionPages are | 165 // of the slot span. Metadata accesses to other PartitionPages are |
| 159 // redirected to the first PartitionPage. | 166 // redirected to the first PartitionPage. |
| 160 // | 167 // |
| 161 // | SuperPageExtentEntry (32B) | PartitionPage of slot span 1 (32B, used) | PartitionPage of slot span 1 (32B, unused) | PartitionPage of slot span 1 (32B, unused) | PartitionPage of slot span 2 (32B, used) | PartitionPage of slot span 3 (32B, used) | ... | PartitionPage of slot span N (32B, unused) | | 168 // | SuperPageExtentEntry (32B) | |
| 169 // | PartitionPage of slot span 1 (32B, used) | |
| 170 // | PartitionPage of slot span 1 (32B, unused) | |
| 171 // | PartitionPage of slot span 1 (32B, unused) | |
| 172 // | PartitionPage of slot span 2 (32B, used) | |
| 173 // | PartitionPage of slot span 3 (32B, used) | |
| 174 // | ... | |
| 175 // | PartitionPage of slot span N (32B, unused) | |
| 162 // | 176 // |
| 163 // A direct mapped page has a similar layout to fake it looking like a super page: | 177 // A direct mapped page has a similar layout to fake it looking like a super |
| 178 // page: |
| 164 // | 179 // |
| 165 // | Guard page (4KB) | Metadata page (4KB) | Guard pages (8KB) | Direct mapped object | Guard page (4KB) | | 180 // | Guard page (4KB) | |
| 181 // | Metadata page (4KB) | |
| 182 // | Guard pages (8KB) | |
| 183 // | Direct mapped object | |
| 184 // | Guard page (4KB) | |
| 166 // | 185 // |
| 167 // - The metadata page has the following layout: | 186 // - The metadata page has the following layout: |
| 168 // | 187 // |
| 169 // | SuperPageExtentEntry (32B) | PartitionPage (32B) | PartitionBucket (32B) | PartitionDirectMapExtent (8B) | | 188 // | SuperPageExtentEntry (32B) | |
| 189 // | PartitionPage (32B) | |
| 190 // | PartitionBucket (32B) | |
| 191 // | PartitionDirectMapExtent (8B) | |
| 170 static const size_t kSuperPageShift = 21; // 2MB | 192 static const size_t kSuperPageShift = 21; // 2MB |
| 171 static const size_t kSuperPageSize = 1 << kSuperPageShift; | 193 static const size_t kSuperPageSize = 1 << kSuperPageShift; |
| 172 static const size_t kSuperPageOffsetMask = kSuperPageSize - 1; | 194 static const size_t kSuperPageOffsetMask = kSuperPageSize - 1; |
| 173 static const size_t kSuperPageBaseMask = ~kSuperPageOffsetMask; | 195 static const size_t kSuperPageBaseMask = ~kSuperPageOffsetMask; |
| 174 static const size_t kNumPartitionPagesPerSuperPage = | 196 static const size_t kNumPartitionPagesPerSuperPage = |
| 175 kSuperPageSize / kPartitionPageSize; | 197 kSuperPageSize / kPartitionPageSize; |
| 176 | 198 |
| 177 static const size_t kPageMetadataShift = 5; // 32 bytes per partition page. | 199 static const size_t kPageMetadataShift = 5; // 32 bytes per partition page. |
| 178 static const size_t kPageMetadataSize = 1 << kPageMetadataShift; | 200 static const size_t kPageMetadataSize = 1 << kPageMetadataShift; |
| 179 | 201 |
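A small sketch of the pointer arithmetic these constants enable. The helper names are hypothetical (the file's own partitionSuperPageToMetadataArea() lives in elided lines), and kSystemPageSize is defined outside this hunk and assumed to be the 4KB guard/metadata page size:

    // Given any pointer inside a super page, mask down to the 2MB-aligned base,
    // then skip the leading guard page to reach the metadata page.
    ALWAYS_INLINE char* superPageBaseOf(void* ptr) {
      uintptr_t p = reinterpret_cast<uintptr_t>(ptr);
      return reinterpret_cast<char*>(p & kSuperPageBaseMask);
    }
    ALWAYS_INLINE char* metadataAreaOf(void* ptr) {
      return superPageBaseOf(ptr) + kSystemPageSize;  // metadata follows the guard page
    }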
| 180 // The following kGeneric* constants apply to the generic variants of the API. | 202 // The following kGeneric* constants apply to the generic variants of the API. |
| 181 // The "order" of an allocation is closely related to the power-of-two size of | 203 // The "order" of an allocation is closely related to the power-of-two size of |
| 182 // the allocation. More precisely, the order is the bit index of the | 204 // the allocation. More precisely, the order is the bit index of the |
| 183 // most-significant-bit in the allocation size, where the bit numbers starts | 205 // most-significant-bit in the allocation size, where the bit numbers starts |
| 184 // at index 1 for the least-significant-bit. | 206 // at index 1 for the least-significant-bit. |
| 185 // In terms of allocation sizes, order 0 covers 0, order 1 covers 1, order 2 | 207 // In terms of allocation sizes, order 0 covers 0, order 1 covers 1, order 2 |
| 186 // covers 2->3, order 3 covers 4->7, order 4 covers 8->15. | 208 // covers 2->3, order 3 covers 4->7, order 4 covers 8->15. |
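For concreteness, a worked sketch of the order definition above (illustrative helper, not part of this header): an allocation of 4000 bytes has its most significant bit at index 12, so its order is 12.

    // Order = 1-based index of the most significant bit of the size; order(0) == 0.
    size_t orderOfSize(size_t size) {
      size_t order = 0;
      for (; size; size >>= 1)
        ++order;
      return order;
    }
    // orderOfSize(1) == 1, orderOfSize(7) == 3, orderOfSize(8) == 4,
    // orderOfSize(4000) == 12, since 2048 = 2^11 <= 4000 < 2^12 = 4096.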
| 187 static const size_t kGenericMinBucketedOrder = 4; // 8 bytes. | 209 static const size_t kGenericMinBucketedOrder = 4; // 8 bytes. |
| 188 static const size_t kGenericMaxBucketedOrder = | 210 static const size_t kGenericMaxBucketedOrder = |
| 189 20; // Largest bucketed order is 1<<(20-1) (storing 512KB -> almost 1MB) | 211 20; // Largest bucketed order is 1<<(20-1) (storing 512KB -> almost 1MB) |
| 190 static const size_t kGenericNumBucketedOrders = | 212 static const size_t kGenericNumBucketedOrders = |
| 191 (kGenericMaxBucketedOrder - kGenericMinBucketedOrder) + 1; | 213 (kGenericMaxBucketedOrder - kGenericMinBucketedOrder) + 1; |
| 192 static const size_t kGenericNumBucketsPerOrderBits = | 214 // Eight buckets per order (for the higher orders), e.g. order 8 is 128, 144, |
| 193 3; // Eight buckets per order (for the higher orders), e.g. order 8 is 128, 144, 160, ..., 240 | 215 // 160, ..., 240: |
| 216 static const size_t kGenericNumBucketsPerOrderBits = 3; |
| 194 static const size_t kGenericNumBucketsPerOrder = | 217 static const size_t kGenericNumBucketsPerOrder = |
| 195 1 << kGenericNumBucketsPerOrderBits; | 218 1 << kGenericNumBucketsPerOrderBits; |
| 196 static const size_t kGenericNumBuckets = | 219 static const size_t kGenericNumBuckets = |
| 197 kGenericNumBucketedOrders * kGenericNumBucketsPerOrder; | 220 kGenericNumBucketedOrders * kGenericNumBucketsPerOrder; |
| 198 static const size_t kGenericSmallestBucket = 1 | 221 static const size_t kGenericSmallestBucket = 1 |
| 199 << (kGenericMinBucketedOrder - 1); | 222 << (kGenericMinBucketedOrder - 1); |
| 200 static const size_t kGenericMaxBucketSpacing = | 223 static const size_t kGenericMaxBucketSpacing = |
| 201 1 << ((kGenericMaxBucketedOrder - 1) - kGenericNumBucketsPerOrderBits); | 224 1 << ((kGenericMaxBucketedOrder - 1) - kGenericNumBucketsPerOrderBits); |
| 202 static const size_t kGenericMaxBucketed = | 225 static const size_t kGenericMaxBucketed = |
| 203 (1 << (kGenericMaxBucketedOrder - 1)) + | 226 (1 << (kGenericMaxBucketedOrder - 1)) + |
| (...skipping 54 matching lines...) |
| 258 // empty list or it _may_ leave it on the active list until a future list scan. | 281 // empty list or it _may_ leave it on the active list until a future list scan. |
| 259 // - malloc() _may_ scan the active page list in order to fulfil the request. | 282 // - malloc() _may_ scan the active page list in order to fulfil the request. |
| 260 // If it does this, full, empty and decommitted pages encountered will be | 283 // If it does this, full, empty and decommitted pages encountered will be |
| 261 // booted out of the active list. If there are no suitable active pages found, | 284 // booted out of the active list. If there are no suitable active pages found, |
| 262 // an empty or decommitted page (if one exists) will be pulled from the empty | 285 // an empty or decommitted page (if one exists) will be pulled from the empty |
| 263 // list on to the active list. | 286 // list on to the active list. |
| 264 struct PartitionPage { | 287 struct PartitionPage { |
| 265 PartitionFreelistEntry* freelistHead; | 288 PartitionFreelistEntry* freelistHead; |
| 266 PartitionPage* nextPage; | 289 PartitionPage* nextPage; |
| 267 PartitionBucket* bucket; | 290 PartitionBucket* bucket; |
| 268 int16_t | 291 // Deliberately signed, 0 for empty or decommitted page, -n for full pages: |
| 269 numAllocatedSlots; // Deliberately signed, 0 for empty or decommitted page, -n for full pages. | 292 int16_t numAllocatedSlots; |
| 270 uint16_t numUnprovisionedSlots; | 293 uint16_t numUnprovisionedSlots; |
| 271 uint16_t pageOffset; | 294 uint16_t pageOffset; |
| 272 int16_t emptyCacheIndex; // -1 if not in the empty cache. | 295 int16_t emptyCacheIndex; // -1 if not in the empty cache. |
| 273 }; | 296 }; |
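Hypothetical helpers (not part of this header) spelling out the numAllocatedSlots encoding described in the struct above:

    // Positive: partially used active page; 0: empty or decommitted; negative: full.
    ALWAYS_INLINE bool pageIsFull(const PartitionPage* page) {
      return page->numAllocatedSlots < 0;
    }
    ALWAYS_INLINE bool pageIsEmptyOrDecommitted(const PartitionPage* page) {
      return page->numAllocatedSlots == 0;
    }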
| 274 | 297 |
| 275 struct PartitionBucket { | 298 struct PartitionBucket { |
| 276 PartitionPage* activePagesHead; // Accessed most in hot path => goes first. | 299 PartitionPage* activePagesHead; // Accessed most in hot path => goes first. |
| 277 PartitionPage* emptyPagesHead; | 300 PartitionPage* emptyPagesHead; |
| 278 PartitionPage* decommittedPagesHead; | 301 PartitionPage* decommittedPagesHead; |
| 279 uint32_t slotSize; | 302 uint32_t slotSize; |
| (...skipping 15 matching lines...) |
| 295 PartitionDirectMapExtent* nextExtent; | 318 PartitionDirectMapExtent* nextExtent; |
| 296 PartitionDirectMapExtent* prevExtent; | 319 PartitionDirectMapExtent* prevExtent; |
| 297 PartitionBucket* bucket; | 320 PartitionBucket* bucket; |
| 298 size_t mapSize; // Mapped size, not including guard pages and meta-data. | 321 size_t mapSize; // Mapped size, not including guard pages and meta-data. |
| 299 }; | 322 }; |
| 300 | 323 |
| 301 struct WTF_EXPORT PartitionRootBase { | 324 struct WTF_EXPORT PartitionRootBase { |
| 302 size_t totalSizeOfCommittedPages; | 325 size_t totalSizeOfCommittedPages; |
| 303 size_t totalSizeOfSuperPages; | 326 size_t totalSizeOfSuperPages; |
| 304 size_t totalSizeOfDirectMappedPages; | 327 size_t totalSizeOfDirectMappedPages; |
| 305 // Invariant: totalSizeOfCommittedPages <= totalSizeOfSuperPages + totalSizeOfDirectMappedPages. | 328 // Invariant: totalSizeOfCommittedPages <= |
| 329 // totalSizeOfSuperPages + totalSizeOfDirectMappedPages. |
| 306 unsigned numBuckets; | 330 unsigned numBuckets; |
| 307 unsigned maxAllocation; | 331 unsigned maxAllocation; |
| 308 bool initialized; | 332 bool initialized; |
| 309 char* nextSuperPage; | 333 char* nextSuperPage; |
| 310 char* nextPartitionPage; | 334 char* nextPartitionPage; |
| 311 char* nextPartitionPageEnd; | 335 char* nextPartitionPageEnd; |
| 312 PartitionSuperPageExtentEntry* currentExtent; | 336 PartitionSuperPageExtentEntry* currentExtent; |
| 313 PartitionSuperPageExtentEntry* firstExtent; | 337 PartitionSuperPageExtentEntry* firstExtent; |
| 314 PartitionDirectMapExtent* directMapList; | 338 PartitionDirectMapExtent* directMapList; |
| 315 PartitionPage* globalEmptyPageRing[kMaxFreeableSpans]; | 339 PartitionPage* globalEmptyPageRing[kMaxFreeableSpans]; |
| (...skipping 16 matching lines...) |
| 332 struct PartitionRoot : public PartitionRootBase { | 356 struct PartitionRoot : public PartitionRootBase { |
| 333 // The PartitionAlloc templated class ensures the following is correct. | 357 // The PartitionAlloc templated class ensures the following is correct. |
| 334 ALWAYS_INLINE PartitionBucket* buckets() { | 358 ALWAYS_INLINE PartitionBucket* buckets() { |
| 335 return reinterpret_cast<PartitionBucket*>(this + 1); | 359 return reinterpret_cast<PartitionBucket*>(this + 1); |
| 336 } | 360 } |
| 337 ALWAYS_INLINE const PartitionBucket* buckets() const { | 361 ALWAYS_INLINE const PartitionBucket* buckets() const { |
| 338 return reinterpret_cast<const PartitionBucket*>(this + 1); | 362 return reinterpret_cast<const PartitionBucket*>(this + 1); |
| 339 } | 363 } |
| 340 }; | 364 }; |
| 341 | 365 |
| 342 // Never instantiate a PartitionRootGeneric directly, instead use PartitionAllocatorGeneric. | 366 // Never instantiate a PartitionRootGeneric directly, instead use |
| 367 // PartitionAllocatorGeneric. |
| 343 struct PartitionRootGeneric : public PartitionRootBase { | 368 struct PartitionRootGeneric : public PartitionRootBase { |
| 344 SpinLock lock; | 369 SpinLock lock; |
| 345 // Some pre-computed constants. | 370 // Some pre-computed constants. |
| 346 size_t orderIndexShifts[kBitsPerSizet + 1]; | 371 size_t orderIndexShifts[kBitsPerSizet + 1]; |
| 347 size_t orderSubIndexMasks[kBitsPerSizet + 1]; | 372 size_t orderSubIndexMasks[kBitsPerSizet + 1]; |
| 348 // The bucket lookup table lets us map a size_t to a bucket quickly. | 373 // The bucket lookup table lets us map a size_t to a bucket quickly. |
| 349 // The trailing +1 caters for the overflow case for very large allocation sizes. | 374 // The trailing +1 caters for the overflow case for very large allocation |
| 350 // It is one flat array instead of a 2D array because in the 2D world, we'd | 375 // sizes. It is one flat array instead of a 2D array because in the 2D |
| 351 // need to index array[blah][max+1] which risks undefined behavior. | 376 // world, we'd need to index array[blah][max+1] which risks undefined |
| 377 // behavior. |
| 352 PartitionBucket* | 378 PartitionBucket* |
| 353 bucketLookups[((kBitsPerSizet + 1) * kGenericNumBucketsPerOrder) + 1]; | 379 bucketLookups[((kBitsPerSizet + 1) * kGenericNumBucketsPerOrder) + 1]; |
| 354 PartitionBucket buckets[kGenericNumBuckets]; | 380 PartitionBucket buckets[kGenericNumBuckets]; |
| 355 }; | 381 }; |
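A sketch of the flat indexing the bucketLookups comment above alludes to; the real lookup also goes through orderIndexShifts/orderSubIndexMasks, so treat this as an approximation of the idea, not the actual code:

    // One row of kGenericNumBucketsPerOrder entries per order, laid out
    // contiguously, so no 2D indexing (and no out-of-bounds [order][max+1]
    // row) is ever needed.
    size_t bucketLookupIndex(size_t order, size_t subIndex) {
      return (order << kGenericNumBucketsPerOrderBits) + subIndex;
    }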
| 356 | 382 |
| 357 // Flags for partitionAllocGenericFlags. | 383 // Flags for partitionAllocGenericFlags. |
| 358 enum PartitionAllocFlags { | 384 enum PartitionAllocFlags { |
| 359 PartitionAllocReturnNull = 1 << 0, | 385 PartitionAllocReturnNull = 1 << 0, |
| 360 }; | 386 }; |
| 361 | 387 |
| 362 // Struct used to retrieve total memory usage of a partition. Used by | 388 // Struct used to retrieve total memory usage of a partition. Used by |
| 363 // PartitionStatsDumper implementation. | 389 // PartitionStatsDumper implementation. |
| 364 struct PartitionMemoryStats { | 390 struct PartitionMemoryStats { |
| 365 size_t totalMmappedBytes; // Total bytes mmaped from the system. | 391 size_t totalMmappedBytes; // Total bytes mmaped from the system. |
| 366 size_t totalCommittedBytes; // Total size of commmitted pages. | 392 size_t totalCommittedBytes; // Total size of commmitted pages. |
| 367 size_t totalResidentBytes; // Total bytes provisioned by the partition. | 393 size_t totalResidentBytes; // Total bytes provisioned by the partition. |
| 368 size_t totalActiveBytes; // Total active bytes in the partition. | 394 size_t totalActiveBytes; // Total active bytes in the partition. |
| 369 size_t totalDecommittableBytes; // Total bytes that could be decommitted. | 395 size_t totalDecommittableBytes; // Total bytes that could be decommitted. |
| 370 size_t totalDiscardableBytes; // Total bytes that could be discarded. | 396 size_t totalDiscardableBytes; // Total bytes that could be discarded. |
| 371 }; | 397 }; |
| 372 | 398 |
| 373 // Struct used to retrieve memory statistics about a partition bucket. Used by | 399 // Struct used to retrieve memory statistics about a partition bucket. Used by |
| 374 // PartitionStatsDumper implementation. | 400 // PartitionStatsDumper implementation. |
| 375 struct PartitionBucketMemoryStats { | 401 struct PartitionBucketMemoryStats { |
| 376 bool isValid; // Used to check if the stats is valid. | 402 bool isValid; // Used to check if the stats is valid. |
| 377 bool | 403 bool isDirectMap; // True if this is a direct mapping; size will not be |
| 378 isDirectMap; // True if this is a direct mapping; size will not be unique. | 404 // unique. |
| 379 uint32_t bucketSlotSize; // The size of the slot in bytes. | 405 uint32_t bucketSlotSize; // The size of the slot in bytes. |
| 380 uint32_t | 406 uint32_t allocatedPageSize; // Total size the partition page allocated from |
| 381 allocatedPageSize; // Total size the partition page allocated from the system. | 407 // the system. |
| 382 uint32_t activeBytes; // Total active bytes used in the bucket. | 408 uint32_t activeBytes; // Total active bytes used in the bucket. |
| 383 uint32_t residentBytes; // Total bytes provisioned in the bucket. | 409 uint32_t residentBytes; // Total bytes provisioned in the bucket. |
| 384 uint32_t decommittableBytes; // Total bytes that could be decommitted. | 410 uint32_t decommittableBytes; // Total bytes that could be decommitted. |
| 385 uint32_t discardableBytes; // Total bytes that could be discarded. | 411 uint32_t discardableBytes; // Total bytes that could be discarded. |
| 386 uint32_t numFullPages; // Number of pages with all slots allocated. | 412 uint32_t numFullPages; // Number of pages with all slots allocated. |
| 387 uint32_t | 413 uint32_t numActivePages; // Number of pages that have at least one |
| 388 numActivePages; // Number of pages that have at least one provisioned slot. | 414 // provisioned slot. |
| 389 uint32_t | 415 uint32_t numEmptyPages; // Number of pages that are empty |
| 390 numEmptyPages; // Number of pages that are empty but not decommitted. | 416 // but not decommitted. |
| 391 uint32_t | 417 uint32_t numDecommittedPages; // Number of pages that are empty |
| 392 numDecommittedPages; // Number of pages that are empty and decommitted. | 418 // and decommitted. |
| 393 }; | 419 }; |
| 394 | 420 |
| 395 // Interface that is passed to partitionDumpStats and | 421 // Interface that is passed to partitionDumpStats and |
| 396 // partitionDumpStatsGeneric for using the memory statistics. | 422 // partitionDumpStatsGeneric for using the memory statistics. |
| 397 class WTF_EXPORT PartitionStatsDumper { | 423 class WTF_EXPORT PartitionStatsDumper { |
| 398 public: | 424 public: |
| 399 // Called to dump total memory used by partition, once per partition. | 425 // Called to dump total memory used by partition, once per partition. |
| 400 virtual void partitionDumpTotals(const char* partitionName, | 426 virtual void partitionDumpTotals(const char* partitionName, |
| 401 const PartitionMemoryStats*) = 0; | 427 const PartitionMemoryStats*) = 0; |
| 402 | 428 |
| (...skipping 168 matching lines...) |
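For reviewers unfamiliar with the dumper interface above, a minimal sketch of an implementation. It assumes the interface's per-bucket callback, partitionDumpBucketStats(const char*, const PartitionBucketMemoryStats*), which sits in the lines elided here; the bookkeeping is purely illustrative.

    class TotalsCapturingDumper final : public PartitionStatsDumper {
     public:
      void partitionDumpTotals(const char* partitionName,
                               const PartitionMemoryStats* stats) override {
        lastCommittedBytes = stats->totalCommittedBytes;
        lastActiveBytes = stats->totalActiveBytes;
      }
      void partitionDumpBucketStats(const char* partitionName,
                                    const PartitionBucketMemoryStats* stats) override {
        if (!stats->isValid)
          return;  // never-used buckets report invalid stats
        bucketActiveBytes += stats->activeBytes;
      }
      size_t lastCommittedBytes = 0;
      size_t lastActiveBytes = 0;
      size_t bucketActiveBytes = 0;
    };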
| 571 reinterpret_cast<char*>(pointerAsUint & kSuperPageBaseMask); | 597 reinterpret_cast<char*>(pointerAsUint & kSuperPageBaseMask); |
| 572 uintptr_t partitionPageIndex = | 598 uintptr_t partitionPageIndex = |
| 573 (pointerAsUint & kSuperPageOffsetMask) >> kPartitionPageShift; | 599 (pointerAsUint & kSuperPageOffsetMask) >> kPartitionPageShift; |
| 574 // Index 0 is invalid because it is the metadata and guard area and | 600 // Index 0 is invalid because it is the metadata and guard area and |
| 575 // the last index is invalid because it is a guard page. | 601 // the last index is invalid because it is a guard page. |
| 576 ASSERT(partitionPageIndex); | 602 ASSERT(partitionPageIndex); |
| 577 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); | 603 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); |
| 578 PartitionPage* page = reinterpret_cast<PartitionPage*>( | 604 PartitionPage* page = reinterpret_cast<PartitionPage*>( |
| 579 partitionSuperPageToMetadataArea(superPagePtr) + | 605 partitionSuperPageToMetadataArea(superPagePtr) + |
| 580 (partitionPageIndex << kPageMetadataShift)); | 606 (partitionPageIndex << kPageMetadataShift)); |
| 581 // Partition pages in the same slot span can share the same page object. Adjust for that. | 607 // Partition pages in the same slot span can share the same page object. |
| 608 // Adjust for that. |
| 582 size_t delta = page->pageOffset << kPageMetadataShift; | 609 size_t delta = page->pageOffset << kPageMetadataShift; |
| 583 page = | 610 page = |
| 584 reinterpret_cast<PartitionPage*>(reinterpret_cast<char*>(page) - delta); | 611 reinterpret_cast<PartitionPage*>(reinterpret_cast<char*>(page) - delta); |
| 585 return page; | 612 return page; |
| 586 } | 613 } |
| 587 | 614 |
| 588 ALWAYS_INLINE void* partitionPageToPointer(const PartitionPage* page) { | 615 ALWAYS_INLINE void* partitionPageToPointer(const PartitionPage* page) { |
| 589 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(page); | 616 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(page); |
| 590 uintptr_t superPageOffset = (pointerAsUint & kSuperPageOffsetMask); | 617 uintptr_t superPageOffset = (pointerAsUint & kSuperPageOffsetMask); |
| 591 ASSERT(superPageOffset > kSystemPageSize); | 618 ASSERT(superPageOffset > kSystemPageSize); |
| 592 ASSERT(superPageOffset < kSystemPageSize + (kNumPartitionPagesPerSuperPage * | 619 ASSERT(superPageOffset < kSystemPageSize + (kNumPartitionPagesPerSuperPage * |
| 593 kPageMetadataSize)); | 620 kPageMetadataSize)); |
| 594 uintptr_t partitionPageIndex = | 621 uintptr_t partitionPageIndex = |
| 595 (superPageOffset - kSystemPageSize) >> kPageMetadataShift; | 622 (superPageOffset - kSystemPageSize) >> kPageMetadataShift; |
| 596 // Index 0 is invalid because it is the metadata area and the last index is invalid because it is a guard page. | 623 // Index 0 is invalid because it is the metadata area and the last index is |
| 624 // invalid because it is a guard page. |
| 597 ASSERT(partitionPageIndex); | 625 ASSERT(partitionPageIndex); |
| 598 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); | 626 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); |
| 599 uintptr_t superPageBase = (pointerAsUint & kSuperPageBaseMask); | 627 uintptr_t superPageBase = (pointerAsUint & kSuperPageBaseMask); |
| 600 void* ret = reinterpret_cast<void*>( | 628 void* ret = reinterpret_cast<void*>( |
| 601 superPageBase + (partitionPageIndex << kPartitionPageShift)); | 629 superPageBase + (partitionPageIndex << kPartitionPageShift)); |
| 602 return ret; | 630 return ret; |
| 603 } | 631 } |
| 604 | 632 |
| 605 ALWAYS_INLINE PartitionPage* partitionPointerToPage(void* ptr) { | 633 ALWAYS_INLINE PartitionPage* partitionPointerToPage(void* ptr) { |
| 606 PartitionPage* page = partitionPointerToPageNoAlignmentCheck(ptr); | 634 PartitionPage* page = partitionPointerToPageNoAlignmentCheck(ptr); |
| (...skipping 126 matching lines...) |
| 733 slotSize = rawSize; | 761 slotSize = rawSize; |
| 734 partitionCookieCheckValue(ptr); | 762 partitionCookieCheckValue(ptr); |
| 735 partitionCookieCheckValue(reinterpret_cast<char*>(ptr) + slotSize - | 763 partitionCookieCheckValue(reinterpret_cast<char*>(ptr) + slotSize - |
| 736 kCookieSize); | 764 kCookieSize); |
| 737 memset(ptr, kFreedByte, slotSize); | 765 memset(ptr, kFreedByte, slotSize); |
| 738 #endif | 766 #endif |
| 739 ASSERT(page->numAllocatedSlots); | 767 ASSERT(page->numAllocatedSlots); |
| 740 PartitionFreelistEntry* freelistHead = page->freelistHead; | 768 PartitionFreelistEntry* freelistHead = page->freelistHead; |
| 741 ASSERT(!freelistHead || partitionPointerIsValid(freelistHead)); | 769 ASSERT(!freelistHead || partitionPointerIsValid(freelistHead)); |
| 742 SECURITY_CHECK(ptr != freelistHead); // Catches an immediate double free. | 770 SECURITY_CHECK(ptr != freelistHead); // Catches an immediate double free. |
| 771 // Look for double free one level deeper in debug. |
| 743 ASSERT_WITH_SECURITY_IMPLICATION( | 772 ASSERT_WITH_SECURITY_IMPLICATION( |
| 744 !freelistHead || | 773 !freelistHead || ptr != partitionFreelistMask(freelistHead->next)); |
| 745 ptr != | |
| 746 partitionFreelistMask( | |
| 747 freelistHead | |
| 748 ->next)); // Look for double free one level deeper in debug. | |
| 749 PartitionFreelistEntry* entry = static_cast<PartitionFreelistEntry*>(ptr); | 774 PartitionFreelistEntry* entry = static_cast<PartitionFreelistEntry*>(ptr); |
| 750 entry->next = partitionFreelistMask(freelistHead); | 775 entry->next = partitionFreelistMask(freelistHead); |
| 751 page->freelistHead = entry; | 776 page->freelistHead = entry; |
| 752 --page->numAllocatedSlots; | 777 --page->numAllocatedSlots; |
| 753 if (UNLIKELY(page->numAllocatedSlots <= 0)) { | 778 if (UNLIKELY(page->numAllocatedSlots <= 0)) { |
| 754 partitionFreeSlowPath(page); | 779 partitionFreeSlowPath(page); |
| 755 } else { | 780 } else { |
| 756 // All single-slot allocations must go through the slow path to | 781 // All single-slot allocations must go through the slow path to |
| 757 // correctly update the size metadata. | 782 // correctly update the size metadata. |
| 758 ASSERT(partitionPageGetRawSize(page) == 0); | 783 ASSERT(partitionPageGetRawSize(page) == 0); |
| (...skipping 173 matching lines...) |
| 932 using WTF::partitionAlloc; | 957 using WTF::partitionAlloc; |
| 933 using WTF::partitionFree; | 958 using WTF::partitionFree; |
| 934 using WTF::partitionAllocGeneric; | 959 using WTF::partitionAllocGeneric; |
| 935 using WTF::partitionFreeGeneric; | 960 using WTF::partitionFreeGeneric; |
| 936 using WTF::partitionReallocGeneric; | 961 using WTF::partitionReallocGeneric; |
| 937 using WTF::partitionAllocActualSize; | 962 using WTF::partitionAllocActualSize; |
| 938 using WTF::partitionAllocSupportsGetSize; | 963 using WTF::partitionAllocSupportsGetSize; |
| 939 using WTF::partitionAllocGetSize; | 964 using WTF::partitionAllocGetSize; |
| 940 | 965 |
| 941 #endif // WTF_PartitionAlloc_h | 966 #endif // WTF_PartitionAlloc_h |