Chromium Code Reviews
| OLD | NEW |
|---|---|
| 1 /* | 1 /* |
| 2 * Copyright (C) 2013 Google Inc. All rights reserved. | 2 * Copyright (C) 2013 Google Inc. All rights reserved. |
| 3 * | 3 * |
| 4 * Redistribution and use in source and binary forms, with or without | 4 * Redistribution and use in source and binary forms, with or without |
| 5 * modification, are permitted provided that the following conditions are | 5 * modification, are permitted provided that the following conditions are |
| 6 * met: | 6 * met: |
| 7 * | 7 * |
| 8 * * Redistributions of source code must retain the above copyright | 8 * * Redistributions of source code must retain the above copyright |
| 9 * notice, this list of conditions and the following disclaimer. | 9 * notice, this list of conditions and the following disclaimer. |
| 10 * * Redistributions in binary form must reproduce the above | 10 * * Redistributions in binary form must reproduce the above |
| (...skipping 48 matching lines...) | |
| 59 // And for partitionAllocGeneric(): | 59 // And for partitionAllocGeneric(): |
| 60 // - Multi-threaded use against a single partition is ok; locking is handled. | 60 // - Multi-threaded use against a single partition is ok; locking is handled. |
| 61 // - Allocations of any arbitrary size can be handled (subject to a limit of | 61 // - Allocations of any arbitrary size can be handled (subject to a limit of |
| 62 // INT_MAX bytes for security reasons). | 62 // INT_MAX bytes for security reasons). |
| 63 // - Bucketing is by approximate size, for example an allocation of 4000 bytes | 63 // - Bucketing is by approximate size, for example an allocation of 4000 bytes |
| 64 // might be placed into a 4096-byte bucket. Bucket sizes are chosen to try and | 64 // might be placed into a 4096-byte bucket. Bucket sizes are chosen to try and |
| 65 // keep worst-case waste to ~10%. | 65 // keep worst-case waste to ~10%. |
| 66 // | 66 // |
| 67 // The allocators are designed to be extremely fast, thanks to the following | 67 // The allocators are designed to be extremely fast, thanks to the following |
| 68 // properties and design: | 68 // properties and design: |
| 69 // - Just two single (reasonably predictable) branches in the hot / fast path for | 69 // - Just two single (reasonably predictable) branches in the hot / fast path |
| 70 // both allocating and (significantly) freeing. | 70 // for both allocating and (significantly) freeing. |
| 71 // - A minimal number of operations in the hot / fast path, with the slow paths | 71 // - A minimal number of operations in the hot / fast path, with the slow paths |
| 72 // in separate functions, leading to the possibility of inlining. | 72 // in separate functions, leading to the possibility of inlining. |
| 73 // - Each partition page (which is usually multiple physical pages) has a | 73 // - Each partition page (which is usually multiple physical pages) has a |
| 74 // metadata structure which allows fast mapping of free() address to an | 74 // metadata structure which allows fast mapping of free() address to an |
| 75 // underlying bucket. | 75 // underlying bucket. |
| 76 // - Supports a lock-free API for fast performance in single-threaded cases. | 76 // - Supports a lock-free API for fast performance in single-threaded cases. |
| 77 // - The freelist for a given bucket is split across a number of partition | 77 // - The freelist for a given bucket is split across a number of partition |
| 78 // pages, enabling various simple tricks to try and minimize fragmentation. | 78 // pages, enabling various simple tricks to try and minimize fragmentation. |
| 79 // - Fine-grained bucket sizes leading to less waste and better packing. | 79 // - Fine-grained bucket sizes leading to less waste and better packing. |
| 80 // | 80 // |
| 81 // The following security properties could be investigated in the future: | 81 // The following security properties could be investigated in the future: |
| 82 // - Per-object bucketing (instead of per-size) is mostly available at the API, | 82 // - Per-object bucketing (instead of per-size) is mostly available at the API, |
| 83 // but not used yet. | 83 // but not used yet. |
| 84 // - No randomness of freelist entries or bucket position. | 84 // - No randomness of freelist entries or bucket position. |
| 85 // - Better checking for wild pointers in free(). | 85 // - Better checking for wild pointers in free(). |
| 86 // - Better freelist masking function to guarantee fault on 32-bit. | 86 // - Better freelist masking function to guarantee fault on 32-bit. |
| 87 | 87 |
| 88 #include "wtf/Assertions.h" | 88 #include "wtf/Assertions.h" |
| (...skipping 53 matching lines...) | |
| 142 | 142 |
| 143 // We reserve virtual address space in 2MB chunks (aligned to 2MB as well). | 143 // We reserve virtual address space in 2MB chunks (aligned to 2MB as well). |
| 144 // These chunks are called "super pages". We do this so that we can store | 144 // These chunks are called "super pages". We do this so that we can store |
| 145 // metadata in the first few pages of each 2MB aligned section. This leads to | 145 // metadata in the first few pages of each 2MB aligned section. This leads to |
| 146 // a very fast free(). We specifically choose 2MB because this virtual address | 146 // a very fast free(). We specifically choose 2MB because this virtual address |
| 147 // block represents a full but single PTE allocation on ARM, ia32 and x64. | 147 // block represents a full but single PTE allocation on ARM, ia32 and x64. |
| 148 // | 148 // |
| 149 // The layout of the super page is as follows. The sizes below are the same | 149 // The layout of the super page is as follows. The sizes below are the same |
| 150 // for 32 bit and 64 bit. | 150 // for 32 bit and 64 bit. |
| 151 // | 151 // |
| 152 // | Guard page (4KB) | Metadata page (4KB) | Guard pages (8KB) | Slot span | Slot span | ... | Slot span | Guard page (4KB) | | 152 // | Guard page (4KB) | Metadata page (4KB) | Guard pages (8KB) | Slot span | |
| 153 // Slot span | ... | Slot span | Guard page (4KB) | | |
| 153 // | 154 // |
| 154 // - Each slot span is a contiguous range of one or more PartitionPages. | 155 // - Each slot span is a contiguous range of one or more PartitionPages. |
| 155 // - The metadata page has the following format. Note that the PartitionPage | 156 // - The metadata page has the following format. Note that the PartitionPage |
| 156 // that is not at the head of a slot span is "unused". In other words, | 157 // that is not at the head of a slot span is "unused". In other words, |
| 157 // the metadata for the slot span is stored only in the first PartitionPage | 158 // the metadata for the slot span is stored only in the first PartitionPage |
| 158 // of the slot span. Metadata accesses to other PartitionPages are | 159 // of the slot span. Metadata accesses to other PartitionPages are |
| 159 // redirected to the first PartitionPage. | 160 // redirected to the first PartitionPage. |
| 160 // | 161 // |
| 161 // | SuperPageExtentEntry (32B) | PartitionPage of slot span 1 (32B, used) | PartitionPage of slot span 1 (32B, unused) | PartitionPage of slot span 1 (32B, unused) | PartitionPage of slot span 2 (32B, used) | PartitionPage of slot span 3 (32B, used) | ... | PartitionPage of slot span N (32B, unused) | | 162 // | SuperPageExtentEntry (32B) | |
| 163 // PartitionPage of slot span 1 (32B, used) | | |
| 164 // PartitionPage of slot span 1 (32B, unused) | | |
| 165 // PartitionPage of slot span 1 (32B, unused) | | |
| 166 // PartitionPage of slot span 2 (32B, used) | | |
| 167 // PartitionPage of slot span 3 (32B, used) | ... | | |
| 168 // PartitionPage of slot span N (32B, unused) | | |
| 162 // | 169 // |
| 163 // A direct mapped page has a similar layout to fake it looking like a super page: | 170 // A direct mapped page has a similar layout to fake it looking like a super |
| 171 // page: | |
| 164 // | 172 // |
| 165 // | Guard page (4KB) | Metadata page (4KB) | Guard pages (8KB) | Direct mapped object | Guard page (4KB) | | 173 // | Guard page (4KB) | Metadata page (4KB) | Guard pages (8KB) | Direct |
| 174 // mapped object | Guard page (4KB) | | |
| 166 // | 175 // |
| 167 // - The metadata page has the following layout: | 176 // - The metadata page has the following layout: |
| 168 // | 177 // |
| 169 // | SuperPageExtentEntry (32B) | PartitionPage (32B) | PartitionBucket (32B) | PartitionDirectMapExtent (8B) | | 178 // | SuperPageExtentEntry (32B) | PartitionPage (32B) | |
| 179 // PartitionBucket (32B) | PartitionDirectMapExtent (8B) | | |
|
dcheng
2016/10/01 17:55:23
Nit: I feel like this is kind of hard to read once
Nico
2016/10/01 19:47:01
Did that but without the ----- lines. wdty?
| |
| 170 static const size_t kSuperPageShift = 21; // 2MB | 180 static const size_t kSuperPageShift = 21; // 2MB |
| 171 static const size_t kSuperPageSize = 1 << kSuperPageShift; | 181 static const size_t kSuperPageSize = 1 << kSuperPageShift; |
| 172 static const size_t kSuperPageOffsetMask = kSuperPageSize - 1; | 182 static const size_t kSuperPageOffsetMask = kSuperPageSize - 1; |
| 173 static const size_t kSuperPageBaseMask = ~kSuperPageOffsetMask; | 183 static const size_t kSuperPageBaseMask = ~kSuperPageOffsetMask; |
| 174 static const size_t kNumPartitionPagesPerSuperPage = | 184 static const size_t kNumPartitionPagesPerSuperPage = |
| 175 kSuperPageSize / kPartitionPageSize; | 185 kSuperPageSize / kPartitionPageSize; |
| 176 | 186 |
| 177 static const size_t kPageMetadataShift = 5; // 32 bytes per partition page. | 187 static const size_t kPageMetadataShift = 5; // 32 bytes per partition page. |
| 178 static const size_t kPageMetadataSize = 1 << kPageMetadataShift; | 188 static const size_t kPageMetadataSize = 1 << kPageMetadataShift; |
| 179 | 189 |
| 180 // The following kGeneric* constants apply to the generic variants of the API. | 190 // The following kGeneric* constants apply to the generic variants of the API. |
| 181 // The "order" of an allocation is closely related to the power-of-two size of | 191 // The "order" of an allocation is closely related to the power-of-two size of |
| 182 // the allocation. More precisely, the order is the bit index of the | 192 // the allocation. More precisely, the order is the bit index of the |
| 183 // most-significant-bit in the allocation size, where the bit numbering starts | 193 // most-significant-bit in the allocation size, where the bit numbering starts |
| 184 // at index 1 for the least-significant-bit. | 194 // at index 1 for the least-significant-bit. |
| 185 // In terms of allocation sizes, order 0 covers 0, order 1 covers 1, order 2 | 195 // In terms of allocation sizes, order 0 covers 0, order 1 covers 1, order 2 |
| 186 // covers 2->3, order 3 covers 4->7, order 4 covers 8->15. | 196 // covers 2->3, order 3 covers 4->7, order 4 covers 8->15. |
| 187 static const size_t kGenericMinBucketedOrder = 4; // 8 bytes. | 197 static const size_t kGenericMinBucketedOrder = 4; // 8 bytes. |
| 188 static const size_t kGenericMaxBucketedOrder = | 198 static const size_t kGenericMaxBucketedOrder = |
| 189 20; // Largest bucketed order is 1<<(20-1) (storing 512KB -> almost 1MB) | 199 20; // Largest bucketed order is 1<<(20-1) (storing 512KB -> almost 1MB) |
| 190 static const size_t kGenericNumBucketedOrders = | 200 static const size_t kGenericNumBucketedOrders = |
| 191 (kGenericMaxBucketedOrder - kGenericMinBucketedOrder) + 1; | 201 (kGenericMaxBucketedOrder - kGenericMinBucketedOrder) + 1; |
| 192 static const size_t kGenericNumBucketsPerOrderBits = | 202 // Eight buckets per order (for the higher orders), e.g. order 8 is 128, 144, |
| 193 3; // Eight buckets per order (for the higher orders), e.g. order 8 is 128, 144, 160, ..., 240 | 203 // 160, ..., 240: |
| 204 static const size_t kGenericNumBucketsPerOrderBits = 3; | |
| 194 static const size_t kGenericNumBucketsPerOrder = | 205 static const size_t kGenericNumBucketsPerOrder = |
| 195 1 << kGenericNumBucketsPerOrderBits; | 206 1 << kGenericNumBucketsPerOrderBits; |
| 196 static const size_t kGenericNumBuckets = | 207 static const size_t kGenericNumBuckets = |
| 197 kGenericNumBucketedOrders * kGenericNumBucketsPerOrder; | 208 kGenericNumBucketedOrders * kGenericNumBucketsPerOrder; |
| 198 static const size_t kGenericSmallestBucket = 1 | 209 static const size_t kGenericSmallestBucket = 1 |
| 199 << (kGenericMinBucketedOrder - 1); | 210 << (kGenericMinBucketedOrder - 1); |
| 200 static const size_t kGenericMaxBucketSpacing = | 211 static const size_t kGenericMaxBucketSpacing = |
| 201 1 << ((kGenericMaxBucketedOrder - 1) - kGenericNumBucketsPerOrderBits); | 212 1 << ((kGenericMaxBucketedOrder - 1) - kGenericNumBucketsPerOrderBits); |
| 202 static const size_t kGenericMaxBucketed = | 213 static const size_t kGenericMaxBucketed = |
| 203 (1 << (kGenericMaxBucketedOrder - 1)) + | 214 (1 << (kGenericMaxBucketedOrder - 1)) + |
| (...skipping 54 matching lines...) | |
| 258 // empty list or it _may_ leave it on the active list until a future list scan. | 269 // empty list or it _may_ leave it on the active list until a future list scan. |
| 259 // - malloc() _may_ scan the active page list in order to fulfil the request. | 270 // - malloc() _may_ scan the active page list in order to fulfil the request. |
| 260 // If it does this, full, empty and decommitted pages encountered will be | 271 // If it does this, full, empty and decommitted pages encountered will be |
| 261 // booted out of the active list. If there are no suitable active pages found, | 272 // booted out of the active list. If there are no suitable active pages found, |
| 262 // an empty or decommitted page (if one exists) will be pulled from the empty | 273 // an empty or decommitted page (if one exists) will be pulled from the empty |
| 263 // list on to the active list. | 274 // list on to the active list. |
| 264 struct PartitionPage { | 275 struct PartitionPage { |
| 265 PartitionFreelistEntry* freelistHead; | 276 PartitionFreelistEntry* freelistHead; |
| 266 PartitionPage* nextPage; | 277 PartitionPage* nextPage; |
| 267 PartitionBucket* bucket; | 278 PartitionBucket* bucket; |
| 268 int16_t | 279 int16_t numAllocatedSlots; // Deliberately signed, 0 for empty or decommitted |
| 269 numAllocatedSlots; // Deliberately signed, 0 for empty or decommitted page, -n for full pages. | 280 // page, -n for full pages. |
|
dcheng
2016/10/01 17:55:23
Nit: just place this on the prior line?
Nico
2016/10/01 19:47:00
Done.
| |
| 270 uint16_t numUnprovisionedSlots; | 281 uint16_t numUnprovisionedSlots; |
| 271 uint16_t pageOffset; | 282 uint16_t pageOffset; |
| 272 int16_t emptyCacheIndex; // -1 if not in the empty cache. | 283 int16_t emptyCacheIndex; // -1 if not in the empty cache. |
| 273 }; | 284 }; |
| 274 | 285 |
| 275 struct PartitionBucket { | 286 struct PartitionBucket { |
| 276 PartitionPage* activePagesHead; // Accessed most in hot path => goes first. | 287 PartitionPage* activePagesHead; // Accessed most in hot path => goes first. |
| 277 PartitionPage* emptyPagesHead; | 288 PartitionPage* emptyPagesHead; |
| 278 PartitionPage* decommittedPagesHead; | 289 PartitionPage* decommittedPagesHead; |
| 279 uint32_t slotSize; | 290 uint32_t slotSize; |
| (...skipping 15 matching lines...) | |
| 295 PartitionDirectMapExtent* nextExtent; | 306 PartitionDirectMapExtent* nextExtent; |
| 296 PartitionDirectMapExtent* prevExtent; | 307 PartitionDirectMapExtent* prevExtent; |
| 297 PartitionBucket* bucket; | 308 PartitionBucket* bucket; |
| 298 size_t mapSize; // Mapped size, not including guard pages and meta-data. | 309 size_t mapSize; // Mapped size, not including guard pages and meta-data. |
| 299 }; | 310 }; |
| 300 | 311 |
| 301 struct WTF_EXPORT PartitionRootBase { | 312 struct WTF_EXPORT PartitionRootBase { |
| 302 size_t totalSizeOfCommittedPages; | 313 size_t totalSizeOfCommittedPages; |
| 303 size_t totalSizeOfSuperPages; | 314 size_t totalSizeOfSuperPages; |
| 304 size_t totalSizeOfDirectMappedPages; | 315 size_t totalSizeOfDirectMappedPages; |
| 305 // Invariant: totalSizeOfCommittedPages <= totalSizeOfSuperPages + totalSizeOfDirectMappedPages. | 316 // Invariant: totalSizeOfCommittedPages <= |
| 317 // totalSizeOfSuperPages + totalSizeOfDirectMappedPages. | |
| 306 unsigned numBuckets; | 318 unsigned numBuckets; |
| 307 unsigned maxAllocation; | 319 unsigned maxAllocation; |
| 308 bool initialized; | 320 bool initialized; |
| 309 char* nextSuperPage; | 321 char* nextSuperPage; |
| 310 char* nextPartitionPage; | 322 char* nextPartitionPage; |
| 311 char* nextPartitionPageEnd; | 323 char* nextPartitionPageEnd; |
| 312 PartitionSuperPageExtentEntry* currentExtent; | 324 PartitionSuperPageExtentEntry* currentExtent; |
| 313 PartitionSuperPageExtentEntry* firstExtent; | 325 PartitionSuperPageExtentEntry* firstExtent; |
| 314 PartitionDirectMapExtent* directMapList; | 326 PartitionDirectMapExtent* directMapList; |
| 315 PartitionPage* globalEmptyPageRing[kMaxFreeableSpans]; | 327 PartitionPage* globalEmptyPageRing[kMaxFreeableSpans]; |
| (...skipping 16 matching lines...) | |
| 332 struct PartitionRoot : public PartitionRootBase { | 344 struct PartitionRoot : public PartitionRootBase { |
| 333 // The PartitionAlloc templated class ensures the following is correct. | 345 // The PartitionAlloc templated class ensures the following is correct. |
| 334 ALWAYS_INLINE PartitionBucket* buckets() { | 346 ALWAYS_INLINE PartitionBucket* buckets() { |
| 335 return reinterpret_cast<PartitionBucket*>(this + 1); | 347 return reinterpret_cast<PartitionBucket*>(this + 1); |
| 336 } | 348 } |
| 337 ALWAYS_INLINE const PartitionBucket* buckets() const { | 349 ALWAYS_INLINE const PartitionBucket* buckets() const { |
| 338 return reinterpret_cast<const PartitionBucket*>(this + 1); | 350 return reinterpret_cast<const PartitionBucket*>(this + 1); |
| 339 } | 351 } |
| 340 }; | 352 }; |
| 341 | 353 |
| 342 // Never instantiate a PartitionRootGeneric directly, instead use PartitionAllocatorGeneric. | 354 // Never instantiate a PartitionRootGeneric directly, instead use |
| 355 // PartitionAllocatorGeneric. | |
| 343 struct PartitionRootGeneric : public PartitionRootBase { | 356 struct PartitionRootGeneric : public PartitionRootBase { |
| 344 SpinLock lock; | 357 SpinLock lock; |
| 345 // Some pre-computed constants. | 358 // Some pre-computed constants. |
| 346 size_t orderIndexShifts[kBitsPerSizet + 1]; | 359 size_t orderIndexShifts[kBitsPerSizet + 1]; |
| 347 size_t orderSubIndexMasks[kBitsPerSizet + 1]; | 360 size_t orderSubIndexMasks[kBitsPerSizet + 1]; |
| 348 // The bucket lookup table lets us map a size_t to a bucket quickly. | 361 // The bucket lookup table lets us map a size_t to a bucket quickly. |
| 349 // The trailing +1 caters for the overflow case for very large allocation sizes. | 362 // The trailing +1 caters for the overflow case for very large allocation |
| 363 // sizes. | |
|
dcheng
2016/10/01 17:55:23
Nit: Merge the next line
Nico
2016/10/01 19:47:00
Done.
| |
| 350 // It is one flat array instead of a 2D array because in the 2D world, we'd | 364 // It is one flat array instead of a 2D array because in the 2D world, we'd |
| 351 // need to index array[blah][max+1] which risks undefined behavior. | 365 // need to index array[blah][max+1] which risks undefined behavior. |
| 352 PartitionBucket* | 366 PartitionBucket* |
| 353 bucketLookups[((kBitsPerSizet + 1) * kGenericNumBucketsPerOrder) + 1]; | 367 bucketLookups[((kBitsPerSizet + 1) * kGenericNumBucketsPerOrder) + 1]; |
| 354 PartitionBucket buckets[kGenericNumBuckets]; | 368 PartitionBucket buckets[kGenericNumBuckets]; |
| 355 }; | 369 }; |
| 356 | 370 |
| 357 // Flags for partitionAllocGenericFlags. | 371 // Flags for partitionAllocGenericFlags. |
| 358 enum PartitionAllocFlags { | 372 enum PartitionAllocFlags { |
| 359 PartitionAllocReturnNull = 1 << 0, | 373 PartitionAllocReturnNull = 1 << 0, |
| 360 }; | 374 }; |
| 361 | 375 |
| 362 // Struct used to retrieve total memory usage of a partition. Used by | 376 // Struct used to retrieve total memory usage of a partition. Used by |
| 363 // PartitionStatsDumper implementation. | 377 // PartitionStatsDumper implementation. |
| 364 struct PartitionMemoryStats { | 378 struct PartitionMemoryStats { |
| 365 size_t totalMmappedBytes; // Total bytes mmapped from the system. | 379 size_t totalMmappedBytes; // Total bytes mmapped from the system. |
| 366 size_t totalCommittedBytes; // Total size of committed pages. | 380 size_t totalCommittedBytes; // Total size of committed pages. |
| 367 size_t totalResidentBytes; // Total bytes provisioned by the partition. | 381 size_t totalResidentBytes; // Total bytes provisioned by the partition. |
| 368 size_t totalActiveBytes; // Total active bytes in the partition. | 382 size_t totalActiveBytes; // Total active bytes in the partition. |
| 369 size_t totalDecommittableBytes; // Total bytes that could be decommitted. | 383 size_t totalDecommittableBytes; // Total bytes that could be decommitted. |
| 370 size_t totalDiscardableBytes; // Total bytes that could be discarded. | 384 size_t totalDiscardableBytes; // Total bytes that could be discarded. |
| 371 }; | 385 }; |
| 372 | 386 |
| 373 // Struct used to retrieve memory statistics about a partition bucket. Used by | 387 // Struct used to retrieve memory statistics about a partition bucket. Used by |
| 374 // PartitionStatsDumper implementation. | 388 // PartitionStatsDumper implementation. |
| 375 struct PartitionBucketMemoryStats { | 389 struct PartitionBucketMemoryStats { |
| 376 bool isValid; // Used to check if the stats are valid. | 390 bool isValid; // Used to check if the stats are valid. |
| 377 bool | 391 bool isDirectMap; // True if this is a direct mapping; size will not be |
| 378 isDirectMap; // True if this is a direct mapping; size will not be unique. | 392 // unique. |
| 379 uint32_t bucketSlotSize; // The size of the slot in bytes. | 393 uint32_t bucketSlotSize; // The size of the slot in bytes. |
| 380 uint32_t | 394 uint32_t allocatedPageSize; // Total size the partition page allocated from |
| 381 allocatedPageSize; // Total size the partition page allocated from the system. | 395 // the system. |
| 382 uint32_t activeBytes; // Total active bytes used in the bucket. | 396 uint32_t activeBytes; // Total active bytes used in the bucket. |
| 383 uint32_t residentBytes; // Total bytes provisioned in the bucket. | 397 uint32_t residentBytes; // Total bytes provisioned in the bucket. |
| 384 uint32_t decommittableBytes; // Total bytes that could be decommitted. | 398 uint32_t decommittableBytes; // Total bytes that could be decommitted. |
| 385 uint32_t discardableBytes; // Total bytes that could be discarded. | 399 uint32_t discardableBytes; // Total bytes that could be discarded. |
| 386 uint32_t numFullPages; // Number of pages with all slots allocated. | 400 uint32_t numFullPages; // Number of pages with all slots allocated. |
| 387 uint32_t | 401 uint32_t numActivePages; // Number of pages that have at least one |
| 388 numActivePages; // Number of pages that have at least one provisioned slot. | 402 // provisioned slot. |
| 389 uint32_t | 403 uint32_t numEmptyPages; // Number of pages that are empty |
| 390 numEmptyPages; // Number of pages that are empty but not decommitted. | 404 // but not decommitted. |
| 391 uint32_t | 405 uint32_t numDecommittedPages; // Number of pages that are empty |
| 392 numDecommittedPages; // Number of pages that are empty and decommitted. | 406 // and decommitted. |
| 393 }; | 407 }; |
| 394 | 408 |
| 395 // Interface that is passed to partitionDumpStats and | 409 // Interface that is passed to partitionDumpStats and |
| 396 // partitionDumpStatsGeneric for using the memory statistics. | 410 // partitionDumpStatsGeneric for using the memory statistics. |
| 397 class WTF_EXPORT PartitionStatsDumper { | 411 class WTF_EXPORT PartitionStatsDumper { |
| 398 public: | 412 public: |
| 399 // Called to dump total memory used by partition, once per partition. | 413 // Called to dump total memory used by partition, once per partition. |
| 400 virtual void partitionDumpTotals(const char* partitionName, | 414 virtual void partitionDumpTotals(const char* partitionName, |
| 401 const PartitionMemoryStats*) = 0; | 415 const PartitionMemoryStats*) = 0; |
| 402 | 416 |
| (...skipping 168 matching lines...) | |
| 571 reinterpret_cast<char*>(pointerAsUint & kSuperPageBaseMask); | 585 reinterpret_cast<char*>(pointerAsUint & kSuperPageBaseMask); |
| 572 uintptr_t partitionPageIndex = | 586 uintptr_t partitionPageIndex = |
| 573 (pointerAsUint & kSuperPageOffsetMask) >> kPartitionPageShift; | 587 (pointerAsUint & kSuperPageOffsetMask) >> kPartitionPageShift; |
| 574 // Index 0 is invalid because it is the metadata and guard area and | 588 // Index 0 is invalid because it is the metadata and guard area and |
| 575 // the last index is invalid because it is a guard page. | 589 // the last index is invalid because it is a guard page. |
| 576 ASSERT(partitionPageIndex); | 590 ASSERT(partitionPageIndex); |
| 577 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); | 591 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); |
| 578 PartitionPage* page = reinterpret_cast<PartitionPage*>( | 592 PartitionPage* page = reinterpret_cast<PartitionPage*>( |
| 579 partitionSuperPageToMetadataArea(superPagePtr) + | 593 partitionSuperPageToMetadataArea(superPagePtr) + |
| 580 (partitionPageIndex << kPageMetadataShift)); | 594 (partitionPageIndex << kPageMetadataShift)); |
| 581 // Partition pages in the same slot span can share the same page object. Adjust for that. | 595 // Partition pages in the same slot span can share the same page object. |
| 596 // Adjust for that. | |
| 582 size_t delta = page->pageOffset << kPageMetadataShift; | 597 size_t delta = page->pageOffset << kPageMetadataShift; |
| 583 page = | 598 page = |
| 584 reinterpret_cast<PartitionPage*>(reinterpret_cast<char*>(page) - delta); | 599 reinterpret_cast<PartitionPage*>(reinterpret_cast<char*>(page) - delta); |
| 585 return page; | 600 return page; |
| 586 } | 601 } |
| 587 | 602 |
| 588 ALWAYS_INLINE void* partitionPageToPointer(const PartitionPage* page) { | 603 ALWAYS_INLINE void* partitionPageToPointer(const PartitionPage* page) { |
| 589 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(page); | 604 uintptr_t pointerAsUint = reinterpret_cast<uintptr_t>(page); |
| 590 uintptr_t superPageOffset = (pointerAsUint & kSuperPageOffsetMask); | 605 uintptr_t superPageOffset = (pointerAsUint & kSuperPageOffsetMask); |
| 591 ASSERT(superPageOffset > kSystemPageSize); | 606 ASSERT(superPageOffset > kSystemPageSize); |
| 592 ASSERT(superPageOffset < kSystemPageSize + (kNumPartitionPagesPerSuperPage * | 607 ASSERT(superPageOffset < kSystemPageSize + (kNumPartitionPagesPerSuperPage * |
| 593 kPageMetadataSize)); | 608 kPageMetadataSize)); |
| 594 uintptr_t partitionPageIndex = | 609 uintptr_t partitionPageIndex = |
| 595 (superPageOffset - kSystemPageSize) >> kPageMetadataShift; | 610 (superPageOffset - kSystemPageSize) >> kPageMetadataShift; |
| 596 // Index 0 is invalid because it is the metadata area and the last index is invalid because it is a guard page. | 611 // Index 0 is invalid because it is the metadata area and the last index is |
| 612 // invalid because it is a guard page. | |
| 597 ASSERT(partitionPageIndex); | 613 ASSERT(partitionPageIndex); |
| 598 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); | 614 ASSERT(partitionPageIndex < kNumPartitionPagesPerSuperPage - 1); |
| 599 uintptr_t superPageBase = (pointerAsUint & kSuperPageBaseMask); | 615 uintptr_t superPageBase = (pointerAsUint & kSuperPageBaseMask); |
| 600 void* ret = reinterpret_cast<void*>( | 616 void* ret = reinterpret_cast<void*>( |
| 601 superPageBase + (partitionPageIndex << kPartitionPageShift)); | 617 superPageBase + (partitionPageIndex << kPartitionPageShift)); |
| 602 return ret; | 618 return ret; |
| 603 } | 619 } |
| 604 | 620 |
| 605 ALWAYS_INLINE PartitionPage* partitionPointerToPage(void* ptr) { | 621 ALWAYS_INLINE PartitionPage* partitionPointerToPage(void* ptr) { |
| 606 PartitionPage* page = partitionPointerToPageNoAlignmentCheck(ptr); | 622 PartitionPage* page = partitionPointerToPageNoAlignmentCheck(ptr); |
| (...skipping 126 matching lines...) | |
| 733 slotSize = rawSize; | 749 slotSize = rawSize; |
| 734 partitionCookieCheckValue(ptr); | 750 partitionCookieCheckValue(ptr); |
| 735 partitionCookieCheckValue(reinterpret_cast<char*>(ptr) + slotSize - | 751 partitionCookieCheckValue(reinterpret_cast<char*>(ptr) + slotSize - |
| 736 kCookieSize); | 752 kCookieSize); |
| 737 memset(ptr, kFreedByte, slotSize); | 753 memset(ptr, kFreedByte, slotSize); |
| 738 #endif | 754 #endif |
| 739 ASSERT(page->numAllocatedSlots); | 755 ASSERT(page->numAllocatedSlots); |
| 740 PartitionFreelistEntry* freelistHead = page->freelistHead; | 756 PartitionFreelistEntry* freelistHead = page->freelistHead; |
| 741 ASSERT(!freelistHead || partitionPointerIsValid(freelistHead)); | 757 ASSERT(!freelistHead || partitionPointerIsValid(freelistHead)); |
| 742 SECURITY_CHECK(ptr != freelistHead); // Catches an immediate double free. | 758 SECURITY_CHECK(ptr != freelistHead); // Catches an immediate double free. |
| 759 // Look for double free one level deeper in debug. | |
| 743 ASSERT_WITH_SECURITY_IMPLICATION( | 760 ASSERT_WITH_SECURITY_IMPLICATION( |
| 744 !freelistHead || | 761 !freelistHead || ptr != partitionFreelistMask(freelistHead->next)); |
| 745 ptr != | |
| 746 partitionFreelistMask( | |
| 747 freelistHead | |
| 748 ->next)); // Look for double free one level deeper in debug. | |
| 749 PartitionFreelistEntry* entry = static_cast<PartitionFreelistEntry*>(ptr); | 762 PartitionFreelistEntry* entry = static_cast<PartitionFreelistEntry*>(ptr); |
| 750 entry->next = partitionFreelistMask(freelistHead); | 763 entry->next = partitionFreelistMask(freelistHead); |
| 751 page->freelistHead = entry; | 764 page->freelistHead = entry; |
| 752 --page->numAllocatedSlots; | 765 --page->numAllocatedSlots; |
| 753 if (UNLIKELY(page->numAllocatedSlots <= 0)) { | 766 if (UNLIKELY(page->numAllocatedSlots <= 0)) { |
| 754 partitionFreeSlowPath(page); | 767 partitionFreeSlowPath(page); |
| 755 } else { | 768 } else { |
| 756 // All single-slot allocations must go through the slow path to | 769 // All single-slot allocations must go through the slow path to |
| 757 // correctly update the size metadata. | 770 // correctly update the size metadata. |
| 758 ASSERT(partitionPageGetRawSize(page) == 0); | 771 ASSERT(partitionPageGetRawSize(page) == 0); |
| (...skipping 173 matching lines...) | |
| 932 using WTF::partitionAlloc; | 945 using WTF::partitionAlloc; |
| 933 using WTF::partitionFree; | 946 using WTF::partitionFree; |
| 934 using WTF::partitionAllocGeneric; | 947 using WTF::partitionAllocGeneric; |
| 935 using WTF::partitionFreeGeneric; | 948 using WTF::partitionFreeGeneric; |
| 936 using WTF::partitionReallocGeneric; | 949 using WTF::partitionReallocGeneric; |
| 937 using WTF::partitionAllocActualSize; | 950 using WTF::partitionAllocActualSize; |
| 938 using WTF::partitionAllocSupportsGetSize; | 951 using WTF::partitionAllocSupportsGetSize; |
| 939 using WTF::partitionAllocGetSize; | 952 using WTF::partitionAllocGetSize; |
| 940 | 953 |
| 941 #endif // WTF_PartitionAlloc_h | 954 #endif // WTF_PartitionAlloc_h |