| OLD | NEW |
| 1 // Copyright 2006-2008 the V8 project authors. All rights reserved. | 1 // Copyright 2006-2008 the V8 project authors. All rights reserved. |
| 2 // Redistribution and use in source and binary forms, with or without | 2 // Redistribution and use in source and binary forms, with or without |
| 3 // modification, are permitted provided that the following conditions are | 3 // modification, are permitted provided that the following conditions are |
| 4 // met: | 4 // met: |
| 5 // | 5 // |
| 6 // * Redistributions of source code must retain the above copyright | 6 // * Redistributions of source code must retain the above copyright |
| 7 // notice, this list of conditions and the following disclaimer. | 7 // notice, this list of conditions and the following disclaimer. |
| 8 // * Redistributions in binary form must reproduce the above | 8 // * Redistributions in binary form must reproduce the above |
| 9 // copyright notice, this list of conditions and the following | 9 // copyright notice, this list of conditions and the following |
| 10 // disclaimer in the documentation and/or other materials provided | 10 // disclaimer in the documentation and/or other materials provided |
| (...skipping 27 matching lines...) |
| 38 // Heap structures: | 38 // Heap structures: |
| 39 // | 39 // |
| 40 // A JS heap consists of a young generation, an old generation, and a large | 40 // A JS heap consists of a young generation, an old generation, and a large |
| 41 // object space. The young generation is divided into two semispaces. A | 41 // object space. The young generation is divided into two semispaces. A |
| 42 // scavenger implements Cheney's copying algorithm. The old generation is | 42 // scavenger implements Cheney's copying algorithm. The old generation is |
| 43 // separated into a map space and an old object space. The map space contains | 43 // separated into a map space and an old object space. The map space contains |
| 44 // all (and only) map objects, the rest of old objects go into the old space. | 44 // all (and only) map objects, the rest of old objects go into the old space. |
| 45 // The old generation is collected by a mark-sweep-compact collector. | 45 // The old generation is collected by a mark-sweep-compact collector. |
| 46 // | 46 // |
| 47 // The semispaces of the young generation are contiguous. The old and map | 47 // The semispaces of the young generation are contiguous. The old and map |
| 48 // spaces consists of a list of pages. A page has a page header, a remembered | 48 // spaces consist of a list of pages. A page has a page header and an object |
| 49 // set area, and an object area. A page size is deliberately chosen as 8K | 49 // area. The page size is deliberately chosen to be 8K bytes. |
| 50 // bytes. The first word of a page is an opaque page header that has the | 50 // The first word of a page is an opaque page header that has the |
| 51 // address of the next page and its ownership information. The second word may | 51 // address of the next page and its ownership information. The second word may |
| 52 // have the allocation top address of this page. The next 248 bytes are | 52 // have the allocation top address of this page. Heap objects are aligned to the |
| 53 // remembered sets. Heap objects are aligned to the pointer size (4 bytes). A | 53 // pointer size. |
| 54 // remembered set bit corresponds to a pointer in the object area. | |
| 55 // | 54 // |
| 56 // There is a separate large object space for objects larger than | 55 // There is a separate large object space for objects larger than |
| 57 // Page::kMaxHeapObjectSize, so that they do not have to move during | 56 // Page::kMaxHeapObjectSize, so that they do not have to move during |
| 58 // collection. The large object space is paged and uses the same remembered | 57 // collection. The large object space is paged. Pages in large object space |
| 59 // set implementation. Pages in large object space may be larger than 8K. | 58 // may be larger than 8K. |
| 60 // | 59 // |
| 61 // NOTE: The mark-compact collector rebuilds the remembered set after a | 60 // A card marking write barrier is used to keep track of intergenerational |
| 62 // collection. It reuses first a few words of the remembered set for | 61 // references. Old space pages are divided into regions of size |
| 63 // bookkeeping relocation information. | 62 // Page::kRegionSize. Each region has a corresponding dirty bit in the page |
| 64 | 63 // header, which is set if the region might contain pointers to new space. |
| 64 // For details of the dirty bit encoding, see the comments in the |
| 65 // Page::GetRegionNumberForAddress() method body. |
| 66 // |
| 67 // During scavenges and mark-sweep collections we iterate intergenerational |
| 68 // pointers without decoding heap object maps, so if the page belongs to old |
| 69 // pointer space or large object space it is essential to guarantee that |
| 70 // the page does not contain any garbage pointers to new space: every |
| 71 // pointer-aligned word which satisfies the Heap::InNewSpace() predicate must |
| 72 // be a pointer to a live heap object in new space. Thus objects in old |
| 73 // pointer and large object spaces should have a special layout (e.g. no bare |
| 74 // integer fields). This requirement does not apply to map space, which is |
| 75 // iterated in a special fashion. However, we still require pointer fields of |
| 76 // dead maps to be cleaned. |
| 77 // |
| 78 // To enable lazy cleaning of old space pages we use the notion of an |
| 79 // allocation watermark. Every pointer under the watermark is considered to |
| 80 // be well formed. The page allocation watermark is not necessarily equal to |
| 81 // the page allocation top, but all live objects on the page must reside |
| 82 // under the allocation watermark. During a scavenge the watermark might be |
| 83 // bumped and invalid pointers might appear below it. To avoid following |
| 84 // them we store a valid watermark in a special field in the page header and |
| 85 // set the page's WATERMARK_INVALIDATED flag. For details see the comments |
| 86 // in the Page::SetAllocationWatermark() method body. |
| 87 // |
| 65 | 88 |
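To make the card marking scheme described above concrete, here is a minimal, self-contained C++ sketch of the store-side write barrier: it computes which 256-byte region a written slot falls into and sets the corresponding dirty bit. The constants mirror the ones declared later in this header (8K pages, kRegionSizeLog2 == 8, 32 marks per page); PageModel, RecordWrite and RegionNumber are illustrative stand-ins, not the real V8 entry points.

```cpp
#include <cstdint>
#include <cstdio>

constexpr uintptr_t kPageSize = 8 * 1024;          // 1 << kPageSizeBits
constexpr uintptr_t kPageAlignmentMask = kPageSize - 1;
constexpr int kRegionSizeLog2 = 8;                 // 256-byte regions
constexpr uintptr_t kRegionSize = uintptr_t{1} << kRegionSizeLog2;

struct PageModel {
  uint32_t dirty_regions = 0;  // one bit per region, 8K / 256 = 32 bits
};

// Region index of an address within its page: page offset / kRegionSize.
// Masking with the page alignment also makes addresses on larger-than-8K
// pages wrap around, which is why several regions of a large object page
// can share a single dirty mark.
int RegionNumber(uintptr_t addr) {
  return static_cast<int>((addr & kPageAlignmentMask) >> kRegionSizeLog2);
}

// Write barrier: after a pointer to a new-space object is stored into the
// slot, mark the slot's region dirty so the next scavenge revisits it.
void RecordWrite(PageModel* page, uintptr_t slot) {
  page->dirty_regions |= uint32_t{1} << RegionNumber(slot);
}

int main() {
  PageModel page;
  uintptr_t page_start = 0x40000;             // 8K-aligned, for illustration
  RecordWrite(&page, page_start + 0x123);     // offset 0x123 -> region 1
  printf("dirty marks: %08x\n", page.dirty_regions);  // prints 00000002
}
```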
| 66 // Some assertion macros used in the debugging mode. | 89 // Some assertion macros used in the debugging mode. |
| 67 | 90 |
| 68 #define ASSERT_PAGE_ALIGNED(address) \ | 91 #define ASSERT_PAGE_ALIGNED(address) \ |
| 69 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) | 92 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) |
| 70 | 93 |
| 71 #define ASSERT_OBJECT_ALIGNED(address) \ | 94 #define ASSERT_OBJECT_ALIGNED(address) \ |
| 72 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) | 95 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) |
| 73 | 96 |
| 74 #define ASSERT_MAP_ALIGNED(address) \ | 97 #define ASSERT_MAP_ALIGNED(address) \ |
| 75 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0) | 98 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0) |
| 76 | 99 |
| 77 #define ASSERT_OBJECT_SIZE(size) \ | 100 #define ASSERT_OBJECT_SIZE(size) \ |
| 78 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize)) | 101 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize)) |
| 79 | 102 |
| 80 #define ASSERT_PAGE_OFFSET(offset) \ | 103 #define ASSERT_PAGE_OFFSET(offset) \ |
| 81 ASSERT((Page::kObjectStartOffset <= offset) \ | 104 ASSERT((Page::kObjectStartOffset <= offset) \ |
| 82 && (offset <= Page::kPageSize)) | 105 && (offset <= Page::kPageSize)) |
| 83 | 106 |
| 84 #define ASSERT_MAP_PAGE_INDEX(index) \ | 107 #define ASSERT_MAP_PAGE_INDEX(index) \ |
| 85 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) | 108 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) |
| 86 | 109 |
| 87 | 110 |
| 88 class PagedSpace; | 111 class PagedSpace; |
| 89 class MemoryAllocator; | 112 class MemoryAllocator; |
| 90 class AllocationInfo; | 113 class AllocationInfo; |
| 91 | 114 |
| 92 // ----------------------------------------------------------------------------- | 115 // ----------------------------------------------------------------------------- |
| 93 // A page normally has 8K bytes. Large object pages may be larger. A page | 116 // A page normally has 8K bytes. Large object pages may be larger. A page |
| 94 // address is always aligned to the 8K page size. A page is divided into | 117 // address is always aligned to the 8K page size. |
| 95 // three areas: the first two words are used for bookkeeping, the next 248 | |
| 96 // bytes are used as remembered set, and the rest of the page is the object | |
| 97 // area. | |
| 98 // | 118 // |
| 99 // Pointers are aligned to the pointer size (4), only 1 bit is needed | 119 // Each page starts with a header of size Page::kPageHeaderSize, which |
| 100 // for a pointer in the remembered set. Given an address, its remembered set | 120 // contains bookkeeping data. |
| 101 // bit position (offset from the start of the page) is calculated by dividing | |
| 102 // its page offset by 32. Therefore, the object area in a page starts at the | |
| 103 // 256th byte (8K/32). Bytes 0 to 255 do not need the remembered set, so that | |
| 104 // the first two words (64 bits) in a page can be used for other purposes. | |
| 105 // | |
| 106 // On the 64-bit platform, we add an offset to the start of the remembered set, | |
| 107 // and pointers are aligned to 8-byte pointer size. This means that we need | |
| 108 // only 128 bytes for the RSet, and only get two bytes free in the RSet's RSet. | |
| 109 // For this reason we add an offset to get room for the Page data at the start. | |
| 110 // | 121 // |
| 111 // The mark-compact collector transforms a map pointer into a page index and a | 122 // The mark-compact collector transforms a map pointer into a page index and a |
| 112 // page offset. The excact encoding is described in the comments for | 123 // page offset. The exact encoding is described in the comments for |
| 113 // class MapWord in objects.h. | 124 // class MapWord in objects.h. |
| 114 // | 125 // |
| 115 // The only way to get a page pointer is by calling factory methods: | 126 // The only way to get a page pointer is by calling factory methods: |
| 116 // Page* p = Page::FromAddress(addr); or | 127 // Page* p = Page::FromAddress(addr); or |
| 117 // Page* p = Page::FromAllocationTop(top); | 128 // Page* p = Page::FromAllocationTop(top); |
| 118 class Page { | 129 class Page { |
| 119 public: | 130 public: |
| 120 // Returns the page containing a given address. The address ranges | 131 // Returns the page containing a given address. The address ranges |
| 121 // from [page_addr .. page_addr + kPageSize[ | 132 // from [page_addr .. page_addr + kPageSize[ |
| 122 // | 133 // |
| (...skipping 20 matching lines...) |
| 143 | 154 |
| 144 // Checks whether this is a valid page address. | 155 // Checks whether this is a valid page address. |
| 145 bool is_valid() { return address() != NULL; } | 156 bool is_valid() { return address() != NULL; } |
| 146 | 157 |
| 147 // Returns the next page of this page. | 158 // Returns the next page of this page. |
| 148 inline Page* next_page(); | 159 inline Page* next_page(); |
| 149 | 160 |
| 150 // Return the end of allocation in this page. Undefined for unused pages. | 161 // Return the end of allocation in this page. Undefined for unused pages. |
| 151 inline Address AllocationTop(); | 162 inline Address AllocationTop(); |
| 152 | 163 |
| 164 // Return the allocation watermark for the page. |
| 165 // For old space pages it is guaranteed that the area under the watermark |
| 166 // does not contain any garbage pointers to new space. |
| 167 inline Address AllocationWatermark(); |
| 168 |
| 169 // Return the allocation watermark offset from the beginning of the page. |
| 170 inline uint32_t AllocationWatermarkOffset(); |
| 171 |
| 172 inline void SetAllocationWatermark(Address allocation_watermark); |
| 173 |
| 174 inline void SetCachedAllocationWatermark(Address allocation_watermark); |
| 175 inline Address CachedAllocationWatermark(); |
| 176 |
| 153 // Returns the start address of the object area in this page. | 177 // Returns the start address of the object area in this page. |
| 154 Address ObjectAreaStart() { return address() + kObjectStartOffset; } | 178 Address ObjectAreaStart() { return address() + kObjectStartOffset; } |
| 155 | 179 |
| 156 // Returns the end address (exclusive) of the object area in this page. | 180 // Returns the end address (exclusive) of the object area in this page. |
| 157 Address ObjectAreaEnd() { return address() + Page::kPageSize; } | 181 Address ObjectAreaEnd() { return address() + Page::kPageSize; } |
| 158 | 182 |
| 159 // Returns the start address of the remembered set area. | |
| 160 Address RSetStart() { return address() + kRSetStartOffset; } | |
| 161 | |
| 162 // Returns the end address of the remembered set area (exclusive). | |
| 163 Address RSetEnd() { return address() + kRSetEndOffset; } | |
| 164 | |
| 165 // Checks whether an address is page aligned. | 183 // Checks whether an address is page aligned. |
| 166 static bool IsAlignedToPageSize(Address a) { | 184 static bool IsAlignedToPageSize(Address a) { |
| 167 return 0 == (OffsetFrom(a) & kPageAlignmentMask); | 185 return 0 == (OffsetFrom(a) & kPageAlignmentMask); |
| 168 } | 186 } |
| 169 | 187 |
| 170 // True if this page was in use before current compaction started. | 188 // True if this page was in use before current compaction started. |
| 171 // Result is valid only for pages owned by paged spaces and | 189 // Result is valid only for pages owned by paged spaces and |
| 172 // only after PagedSpace::PrepareForMarkCompact was called. | 190 // only after PagedSpace::PrepareForMarkCompact was called. |
| 173 inline bool WasInUseBeforeMC(); | 191 inline bool WasInUseBeforeMC(); |
| 174 | 192 |
| (...skipping 11 matching lines...) |
| 186 return offset; | 204 return offset; |
| 187 } | 205 } |
| 188 | 206 |
| 189 // Returns the address for a given offset into this page. | 207 // Returns the address for a given offset into this page. |
| 190 Address OffsetToAddress(int offset) { | 208 Address OffsetToAddress(int offset) { |
| 191 ASSERT_PAGE_OFFSET(offset); | 209 ASSERT_PAGE_OFFSET(offset); |
| 192 return address() + offset; | 210 return address() + offset; |
| 193 } | 211 } |
| 194 | 212 |
| 195 // --------------------------------------------------------------------- | 213 // --------------------------------------------------------------------- |
| 196 // Remembered set support | 214 // Card marking support |
| 197 | 215 |
| 198 // Clears remembered set in this page. | 216 static const uint32_t kAllRegionsCleanMarks = 0x0; |
| 199 inline void ClearRSet(); | 217 static const uint32_t kAllRegionsDirtyMarks = 0xFFFFFFFF; |
| 200 | 218 |
| 201 // Return the address of the remembered set word corresponding to an | 219 inline uint32_t GetRegionMarks(); |
| 202 // object address/offset pair, and the bit encoded as a single-bit | 220 inline void SetRegionMarks(uint32_t dirty); |
| 203 // mask in the output parameter 'bitmask'. | |
| 204 INLINE(static Address ComputeRSetBitPosition(Address address, int offset, | |
| 205 uint32_t* bitmask)); | |
| 206 | 221 |
| 207 // Sets the corresponding remembered set bit for a given address. | 222 inline uint32_t GetRegionMaskForAddress(Address addr); |
| 208 INLINE(static void SetRSet(Address address, int offset)); | 223 inline int GetRegionNumberForAddress(Address addr); |
| 209 | 224 |
| 210 // Clears the corresponding remembered set bit for a given address. | 225 inline void MarkRegionDirty(Address addr); |
| 211 static inline void UnsetRSet(Address address, int offset); | 226 inline bool IsRegionDirty(Address addr); |
| 212 | 227 |
| 213 // Checks whether the remembered set bit for a given address is set. | 228 inline void ClearRegionMarks(Address start, |
| 214 static inline bool IsRSetSet(Address address, int offset); | 229 Address end, |
| 215 | 230 bool reaches_limit); |
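A rough standalone model of how these accessors can operate on the single dirty-marks word follows. The constants match this header, but the inclusive handling of boundary regions in ClearRegionMarks and the omission of the reaches_limit parameter are simplifying assumptions; the authoritative logic lives in the method bodies, not here.

```cpp
#include <cstdint>

constexpr uintptr_t kPageAlignmentMask = (uintptr_t{1} << 13) - 1;  // 8K page
constexpr int kRegionSizeLog2 = 8;                                  // 256 bytes

struct PageModel {
  uint32_t dirty_regions = 0;

  static int RegionNumber(uintptr_t addr) {
    return static_cast<int>((addr & kPageAlignmentMask) >> kRegionSizeLog2);
  }
  static uint32_t RegionMask(uintptr_t addr) {
    return uint32_t{1} << RegionNumber(addr);
  }

  void MarkRegionDirty(uintptr_t addr) { dirty_regions |= RegionMask(addr); }
  bool IsRegionDirty(uintptr_t addr) const {
    return (dirty_regions & RegionMask(addr)) != 0;
  }

  // Clear the marks of every region touched by [start, end]; both endpoints
  // are assumed to lie on this page, with start <= end.
  void ClearRegionMarks(uintptr_t start, uintptr_t end) {
    uint32_t first = RegionMask(start);
    uint32_t last = RegionMask(end);
    uint32_t span = (last | (last - 1)) & ~(first - 1);  // bits first..last
    dirty_regions &= ~span;
  }
};
```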
| 216 #ifdef DEBUG | |
| 217 // Use a state to mark whether remembered set space can be used for other | |
| 218 // purposes. | |
| 219 enum RSetState { IN_USE, NOT_IN_USE }; | |
| 220 static bool is_rset_in_use() { return rset_state_ == IN_USE; } | |
| 221 static void set_rset_state(RSetState state) { rset_state_ = state; } | |
| 222 #endif | |
| 223 | 231 |
| 224 // Page size in bytes. This must be a multiple of the OS page size. | 232 // Page size in bytes. This must be a multiple of the OS page size. |
| 225 static const int kPageSize = 1 << kPageSizeBits; | 233 static const int kPageSize = 1 << kPageSizeBits; |
| 226 | 234 |
| 227 // Page size mask. | 235 // Page size mask. |
| 228 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1; | 236 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1; |
| 229 | 237 |
| 230 // The offset of the remembered set in a page, in addition to the empty bytes | 238 static const int kPageHeaderSize = kPointerSize + kPointerSize + kIntSize + |
| 231 // formed as the remembered bits of the remembered set itself. | 239 kIntSize + kPointerSize; |
| 232 #ifdef V8_TARGET_ARCH_X64 | |
| 233 static const int kRSetOffset = 4 * kPointerSize; // Room for four pointers. | |
| 234 #else | |
| 235 static const int kRSetOffset = 0; | |
| 236 #endif | |
| 237 // The end offset of the remembered set in a page | |
| 238 // (heaps are aligned to pointer size). | |
| 239 static const int kRSetEndOffset = kRSetOffset + kPageSize / kBitsPerPointer; | |
| 240 | 240 |
| 241 // The start offset of the object area in a page. | 241 // The start offset of the object area in a page. |
| 242 // This needs to be at least (bits per uint32_t) * kBitsPerPointer, | 242 static const int kObjectStartOffset = MAP_POINTER_ALIGN(kPageHeaderSize); |
| 243 // to align start of rset to a uint32_t address. | |
| 244 static const int kObjectStartOffset = 256; | |
| 245 | |
| 246 // The start offset of the used part of the remembered set in a page. | |
| 247 static const int kRSetStartOffset = kRSetOffset + | |
| 248 kObjectStartOffset / kBitsPerPointer; | |
| 249 | 243 |
| 250 // Object area size in bytes. | 244 // Object area size in bytes. |
| 251 static const int kObjectAreaSize = kPageSize - kObjectStartOffset; | 245 static const int kObjectAreaSize = kPageSize - kObjectStartOffset; |
| 252 | 246 |
| 253 // Maximum object size that fits in a page. | 247 // Maximum object size that fits in a page. |
| 254 static const int kMaxHeapObjectSize = kObjectAreaSize; | 248 static const int kMaxHeapObjectSize = kObjectAreaSize; |
| 255 | 249 |
| 250 static const int kDirtyFlagOffset = 2 * kPointerSize; |
| 251 static const int kRegionSizeLog2 = 8; |
| 252 static const int kRegionSize = 1 << kRegionSizeLog2; |
| 253 static const intptr_t kRegionAlignmentMask = (kRegionSize - 1); |
| 254 |
| 255 STATIC_CHECK(kRegionSize == kPageSize / kBitsPerInt); |
| 256 |
| 256 enum PageFlag { | 257 enum PageFlag { |
| 257 IS_NORMAL_PAGE = 1 << 0, | 258 IS_NORMAL_PAGE = 1 << 0, |
| 258 WAS_IN_USE_BEFORE_MC = 1 << 1 | 259 WAS_IN_USE_BEFORE_MC = 1 << 1, |
| 260 |
| 261 // The page allocation watermark was bumped by preallocation during scavenge. |
| 262 // The correct watermark can be retrieved by CachedAllocationWatermark(). |
| 263 WATERMARK_INVALIDATED = 1 << 2 |
| 259 }; | 264 }; |
| 260 | 265 |
| 266 // To avoid an additional WATERMARK_INVALIDATED flag clearing pass during |
| 267 // scavenge we just invalidate the watermark on each old space page after |
| 268 // processing it. Then we flip the meaning of the WATERMARK_INVALIDATED flag |
| 269 // at the beginning of the next scavenge, so that each page becomes marked |
| 270 // as having a valid watermark. |
| 271 // |
| 272 // The following invariant must hold for pages in old pointer and map spaces: |
| 273 // If a page is in use then it is marked as having an invalid watermark at |
| 274 // the beginning and at the end of any GC. |
| 275 // |
| 276 // This invariant guarantees that after flipping the flag meaning at the |
| 277 // beginning of a scavenge, all pages in use will be marked as having a |
| 278 // valid watermark. |
| 279 static inline void FlipMeaningOfInvalidatedWatermarkFlag(); |
| 280 |
| 281 // Returns true if the page allocation watermark was not altered during |
| 282 // scavenge. |
| 283 inline bool IsWatermarkValid(); |
| 284 |
| 285 inline void InvalidateWatermark(bool value); |
| 286 |
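The flag-flipping trick from the comment above can be sketched as follows: watermark_invalidated_mark_ records which value of the WATERMARK_INVALIDATED bit currently means "invalidated", so flipping that meaning revalidates every page at once. This is a standalone model with illustrative names (PageModel, a free function in place of the static members), not the actual implementation.

```cpp
#include <cassert>
#include <cstdint>

constexpr intptr_t WATERMARK_INVALIDATED = 1 << 2;

// The bit pattern that currently means "watermark is invalid".
intptr_t watermark_invalidated_mark = WATERMARK_INVALIDATED;

struct PageModel {
  intptr_t flags = 0;

  // The watermark is valid while the stored bit differs from the current
  // "invalidated" meaning.
  bool IsWatermarkValid() const {
    return (flags & WATERMARK_INVALIDATED) != watermark_invalidated_mark;
  }

  void InvalidateWatermark(bool value) {
    intptr_t invalid_bits =
        value ? watermark_invalidated_mark
              : watermark_invalidated_mark ^ WATERMARK_INVALIDATED;
    flags = (flags & ~WATERMARK_INVALIDATED) | invalid_bits;
  }
};

// Called once at the start of a scavenge: every page invalidated during the
// previous scavenge instantly reads as valid again, with no clearing pass.
void FlipMeaningOfInvalidatedWatermarkFlag() {
  watermark_invalidated_mark ^= WATERMARK_INVALIDATED;
}

int main() {
  PageModel p;
  p.InvalidateWatermark(true);
  assert(!p.IsWatermarkValid());
  FlipMeaningOfInvalidatedWatermarkFlag();   // next scavenge begins
  assert(p.IsWatermarkValid());              // page is valid again for free
}
```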
| 261 inline bool GetPageFlag(PageFlag flag); | 287 inline bool GetPageFlag(PageFlag flag); |
| 262 inline void SetPageFlag(PageFlag flag, bool value); | 288 inline void SetPageFlag(PageFlag flag, bool value); |
| 289 inline void ClearPageFlags(); |
| 290 |
| 291 static const int kAllocationWatermarkOffsetShift = 3; |
| 292 static const int kAllocationWatermarkOffsetBits = kPageSizeBits + 1; |
| 293 static const uint32_t kAllocationWatermarkOffsetMask = |
| 294 ((1 << kAllocationWatermarkOffsetBits) - 1) << |
| 295 kAllocationWatermarkOffsetShift; |
| 296 |
| 297 static const uint32_t kFlagsMask = |
| 298 ((1 << kAllocationWatermarkOffsetShift) - 1); |
| 299 |
| 300 STATIC_CHECK(kBitsPerInt - kAllocationWatermarkOffsetShift >= |
| 301 kAllocationWatermarkOffsetBits); |
| 302 |
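A small model of the packing these constants imply: the low three bits of flags_ hold the page flags, and the next kPageSizeBits + 1 bits hold the allocation watermark offset (the extra bit lets the offset equal kPageSize itself). The accessor names below are hypothetical.

```cpp
#include <cassert>
#include <cstdint>

constexpr int kPageSizeBits = 13;                       // 8K pages
constexpr int kAllocationWatermarkOffsetShift = 3;
constexpr int kAllocationWatermarkOffsetBits = kPageSizeBits + 1;
constexpr uint32_t kAllocationWatermarkOffsetMask =
    ((uint32_t{1} << kAllocationWatermarkOffsetBits) - 1)
    << kAllocationWatermarkOffsetShift;
constexpr uint32_t kFlagsMask =
    (uint32_t{1} << kAllocationWatermarkOffsetShift) - 1;

// Replace the watermark offset field, preserving the low flag bits.
uint32_t WithWatermarkOffset(uint32_t flags, uint32_t offset) {
  return (flags & ~kAllocationWatermarkOffsetMask) |
         (offset << kAllocationWatermarkOffsetShift);
}

uint32_t WatermarkOffset(uint32_t flags) {
  return (flags & kAllocationWatermarkOffsetMask) >>
         kAllocationWatermarkOffsetShift;
}

int main() {
  uint32_t flags = 0x5;                        // some page flags set
  flags = WithWatermarkOffset(flags, 8192);    // offset may equal kPageSize
  assert(WatermarkOffset(flags) == 8192);
  assert((flags & kFlagsMask) == 0x5);         // flag bits untouched
}
```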
| 303 // This field contains the meaning of the WATERMARK_INVALIDATED flag. |
| 304 // Instead of clearing this flag from all pages we just flip |
| 305 // its meaning at the beginning of a scavenge. |
| 306 static intptr_t watermark_invalidated_mark_; |
| 263 | 307 |
| 264 //--------------------------------------------------------------------------- | 308 //--------------------------------------------------------------------------- |
| 265 // Page header description. | 309 // Page header description. |
| 266 // | 310 // |
| 267 // If a page is not in the large object space, the first word, | 311 // If a page is not in the large object space, the first word, |
| 268 // opaque_header, encodes the next page address (aligned to kPageSize 8K) | 312 // opaque_header, encodes the next page address (aligned to kPageSize 8K) |
| 269 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use | 313 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use |
| 270 // opaque_header. The value range of the opaque_header is [0..kPageSize[, | 314 // opaque_header. The value range of the opaque_header is [0..kPageSize[, |
| 271 // or [next_page_start, next_page_end[. It cannot point to a valid address | 315 // or [next_page_start, next_page_end[. It cannot point to a valid address |
| 272 // in the current page. If a page is in the large object space, the first | 316 // in the current page. If a page is in the large object space, the first |
| 273 // word *may* (if the page start and large object chunk start are the | 317 // word *may* (if the page start and large object chunk start are the |
| 274 // same) contain the address of the next large object chunk. | 318 // same) contain the address of the next large object chunk. |
| 275 intptr_t opaque_header; | 319 intptr_t opaque_header; |
| 276 | 320 |
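Since opaque_header can never point into its own page, masking with the page alignment is enough to split it into its two parts. A sketch under that reading (the real encoding and decoding are MemoryAllocator's business; these helper names are hypothetical):

```cpp
#include <cstdint>

constexpr uintptr_t kPageAlignmentMask = (uintptr_t{1} << 13) - 1;  // 8K

// High bits: the 8K-aligned address of the next page.
uintptr_t NextPageAddress(intptr_t opaque_header) {
  return static_cast<uintptr_t>(opaque_header) & ~kPageAlignmentMask;
}

// Low bits: the chunk number in [0..kPageSize).
int ChunkNumber(intptr_t opaque_header) {
  return static_cast<int>(static_cast<uintptr_t>(opaque_header) &
                          kPageAlignmentMask);
}
```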
| 277 // If the page is not in the large object space, the low-order bit of the | 321 // If the page is not in the large object space, the low-order bit of the |
| 278 // second word is set. If the page is in the large object space, the | 322 // second word is set. If the page is in the large object space, the |
| 279 // second word *may* (if the page start and large object chunk start are | 323 // second word *may* (if the page start and large object chunk start are |
| 280 // the same) contain the large object chunk size. In either case, the | 324 // the same) contain the large object chunk size. In either case, the |
| 281 // low-order bit for large object pages will be cleared. | 325 // low-order bit for large object pages will be cleared. |
| 282 // For normal pages this word is used to store various page flags. | 326 // For normal pages this word is used to store page flags and |
| 283 int flags; | 327 // offset of allocation top. |
| 328 intptr_t flags_; |
| 284 | 329 |
| 285 // The following fields may overlap with remembered set, they can only | 330 // This field contains dirty marks for regions covering the page. Only dirty |
| 286 // be used in the mark-compact collector when remembered set is not | 331 // regions might contain intergenerational references. |
| 287 // used. | 332 // Only 32 dirty marks are supported, so for large object pages several |
| 333 // regions might be mapped to a single dirty mark. |
| 334 uint32_t dirty_regions_; |
| 288 | 335 |
| 289 // The index of the page in its owner space. | 336 // The index of the page in its owner space. |
| 290 int mc_page_index; | 337 int mc_page_index; |
| 291 | 338 |
| 292 // The allocation pointer after relocating objects to this page. | 339 // During mark-compact collections this field contains the forwarding address |
| 293 Address mc_relocation_top; | 340 // of the first live object in this page. |
| 294 | 341 // During scavenges this field is used to store the allocation watermark |
| 295 // The forwarding address of the first live object in this page. | 342 // if it is altered during the scavenge. |
| 296 Address mc_first_forwarded; | 343 Address mc_first_forwarded; |
| 297 | |
| 298 #ifdef DEBUG | |
| 299 private: | |
| 300 static RSetState rset_state_; // state of the remembered set | |
| 301 #endif | |
| 302 }; | 344 }; |
| 303 | 345 |
| 304 | 346 |
| 305 // ---------------------------------------------------------------------------- | 347 // ---------------------------------------------------------------------------- |
| 306 // Space is the abstract superclass for all allocation spaces. | 348 // Space is the abstract superclass for all allocation spaces. |
| 307 class Space : public Malloced { | 349 class Space : public Malloced { |
| 308 public: | 350 public: |
| 309 Space(AllocationSpace id, Executability executable) | 351 Space(AllocationSpace id, Executability executable) |
| 310 : id_(id), executable_(executable) {} | 352 : id_(id), executable_(executable) {} |
| 311 | 353 |
| (...skipping 602 matching lines...) |
| 914 | 956 |
| 915 // Given an address occupied by a live object, return that object if it is | 957 // Given an address occupied by a live object, return that object if it is |
| 916 // in this space, or Failure::Exception() if it is not. The implementation | 958 // in this space, or Failure::Exception() if it is not. The implementation |
| 917 // iterates over objects in the page containing the address; the cost is | 959 // iterates over objects in the page containing the address; the cost is |
| 918 // linear in the number of objects in the page. It may be slow. | 960 // linear in the number of objects in the page. It may be slow. |
| 919 Object* FindObject(Address addr); | 961 Object* FindObject(Address addr); |
| 920 | 962 |
| 921 // Checks whether page is currently in use by this space. | 963 // Checks whether page is currently in use by this space. |
| 922 bool IsUsed(Page* page); | 964 bool IsUsed(Page* page); |
| 923 | 965 |
| 924 // Clears remembered sets of pages in this space. | 966 void MarkAllPagesClean(); |
| 925 void ClearRSet(); | |
| 926 | 967 |
| 927 // Prepares for a mark-compact GC. | 968 // Prepares for a mark-compact GC. |
| 928 virtual void PrepareForMarkCompact(bool will_compact); | 969 virtual void PrepareForMarkCompact(bool will_compact); |
| 929 | 970 |
| 930 // The top of allocation in a page in this space. Undefined if page is unused. | 971 // The top of allocation in a page in this space. Undefined if page is unused. |
| 931 Address PageAllocationTop(Page* page) { | 972 Address PageAllocationTop(Page* page) { |
| 932 return page == TopPageOf(allocation_info_) ? top() | 973 return page == TopPageOf(allocation_info_) ? top() |
| 933 : PageAllocationLimit(page); | 974 : PageAllocationLimit(page); |
| 934 } | 975 } |
| 935 | 976 |
| 936 // The limit of allocation for a page in this space. | 977 // The limit of allocation for a page in this space. |
| 937 virtual Address PageAllocationLimit(Page* page) = 0; | 978 virtual Address PageAllocationLimit(Page* page) = 0; |
| 938 | 979 |
| 980 void FlushTopPageWatermark() { |
| 981 AllocationTopPage()->SetCachedAllocationWatermark(top()); |
| 982 AllocationTopPage()->InvalidateWatermark(true); |
| 983 } |
| 984 |
| 939 // Current capacity without growing (Size() + Available() + Waste()). | 985 // Current capacity without growing (Size() + Available() + Waste()). |
| 940 int Capacity() { return accounting_stats_.Capacity(); } | 986 int Capacity() { return accounting_stats_.Capacity(); } |
| 941 | 987 |
| 942 // Total amount of memory committed for this space. For paged | 988 // Total amount of memory committed for this space. For paged |
| 943 // spaces this equals the capacity. | 989 // spaces this equals the capacity. |
| 944 int CommittedMemory() { return Capacity(); } | 990 int CommittedMemory() { return Capacity(); } |
| 945 | 991 |
| 946 // Available bytes without growing. | 992 // Available bytes without growing. |
| 947 int Available() { return accounting_stats_.Available(); } | 993 int Available() { return accounting_stats_.Available(); } |
| 948 | 994 |
| (...skipping 34 matching lines...) |
| 983 } | 1029 } |
| 984 | 1030 |
| 985 // --------------------------------------------------------------------------- | 1031 // --------------------------------------------------------------------------- |
| 986 // Mark-compact collection support functions | 1032 // Mark-compact collection support functions |
| 987 | 1033 |
| 988 // Set the relocation point to the beginning of the space. | 1034 // Set the relocation point to the beginning of the space. |
| 989 void MCResetRelocationInfo(); | 1035 void MCResetRelocationInfo(); |
| 990 | 1036 |
| 991 // Writes relocation info to the top page. | 1037 // Writes relocation info to the top page. |
| 992 void MCWriteRelocationInfoToPage() { | 1038 void MCWriteRelocationInfoToPage() { |
| 993 TopPageOf(mc_forwarding_info_)->mc_relocation_top = mc_forwarding_info_.top; | 1039 TopPageOf(mc_forwarding_info_)-> |
| 1040 SetAllocationWatermark(mc_forwarding_info_.top); |
| 994 } | 1041 } |
| 995 | 1042 |
| 996 // Computes the offset of a given address in this space to the beginning | 1043 // Computes the offset of a given address in this space to the beginning |
| 997 // of the space. | 1044 // of the space. |
| 998 int MCSpaceOffsetForAddress(Address addr); | 1045 int MCSpaceOffsetForAddress(Address addr); |
| 999 | 1046 |
| 1000 // Updates the allocation pointer to the relocation top after a mark-compact | 1047 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1001 // collection. | 1048 // collection. |
| 1002 virtual void MCCommitRelocationInfo() = 0; | 1049 virtual void MCCommitRelocationInfo() = 0; |
| 1003 | 1050 |
| (...skipping 97 matching lines...) |
| 1101 | 1148 |
| 1102 // Slow path of AllocateRaw. This function is space-dependent. | 1149 // Slow path of AllocateRaw. This function is space-dependent. |
| 1103 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0; | 1150 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0; |
| 1104 | 1151 |
| 1105 // Slow path of MCAllocateRaw. | 1152 // Slow path of MCAllocateRaw. |
| 1106 HeapObject* SlowMCAllocateRaw(int size_in_bytes); | 1153 HeapObject* SlowMCAllocateRaw(int size_in_bytes); |
| 1107 | 1154 |
| 1108 #ifdef DEBUG | 1155 #ifdef DEBUG |
| 1109 // Returns the number of total pages in this space. | 1156 // Returns the number of total pages in this space. |
| 1110 int CountTotalPages(); | 1157 int CountTotalPages(); |
| 1111 | |
| 1112 void DoPrintRSet(const char* space_name); | |
| 1113 #endif | 1158 #endif |
| 1114 private: | 1159 private: |
| 1115 | 1160 |
| 1116 // Returns a pointer to the page of the relocation pointer. | 1161 // Returns a pointer to the page of the relocation pointer. |
| 1117 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); } | 1162 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); } |
| 1118 | 1163 |
| 1119 friend class PageIterator; | 1164 friend class PageIterator; |
| 1120 }; | 1165 }; |
| 1121 | 1166 |
| 1122 | 1167 |
| (...skipping 632 matching lines...) |
| 1755 | 1800 |
| 1756 // Updates the allocation pointer to the relocation top after a mark-compact | 1801 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1757 // collection. | 1802 // collection. |
| 1758 virtual void MCCommitRelocationInfo(); | 1803 virtual void MCCommitRelocationInfo(); |
| 1759 | 1804 |
| 1760 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); | 1805 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); |
| 1761 | 1806 |
| 1762 #ifdef DEBUG | 1807 #ifdef DEBUG |
| 1763 // Reports statistics for the space | 1808 // Reports statistics for the space |
| 1764 void ReportStatistics(); | 1809 void ReportStatistics(); |
| 1765 // Dump the remembered sets in the space to stdout. | |
| 1766 void PrintRSet(); | |
| 1767 #endif | 1810 #endif |
| 1768 | 1811 |
| 1769 protected: | 1812 protected: |
| 1770 // Virtual function in the superclass. Slow path of AllocateRaw. | 1813 // Virtual function in the superclass. Slow path of AllocateRaw. |
| 1771 HeapObject* SlowAllocateRaw(int size_in_bytes); | 1814 HeapObject* SlowAllocateRaw(int size_in_bytes); |
| 1772 | 1815 |
| 1773 // Virtual function in the superclass. Allocate linearly at the start of | 1816 // Virtual function in the superclass. Allocate linearly at the start of |
| 1774 // the page after current_page (there is assumed to be one). | 1817 // the page after current_page (there is assumed to be one). |
| 1775 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); | 1818 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); |
| 1776 | 1819 |
| (...skipping 44 matching lines...) |
| 1821 | 1864 |
| 1822 // Updates the allocation pointer to the relocation top after a mark-compact | 1865 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1823 // collection. | 1866 // collection. |
| 1824 virtual void MCCommitRelocationInfo(); | 1867 virtual void MCCommitRelocationInfo(); |
| 1825 | 1868 |
| 1826 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); | 1869 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); |
| 1827 | 1870 |
| 1828 #ifdef DEBUG | 1871 #ifdef DEBUG |
| 1829 // Reports statistic info of the space | 1872 // Reports statistic info of the space |
| 1830 void ReportStatistics(); | 1873 void ReportStatistics(); |
| 1831 | |
| 1832 // Dump the remembered sets in the space to stdout. | |
| 1833 void PrintRSet(); | |
| 1834 #endif | 1874 #endif |
| 1835 | 1875 |
| 1836 protected: | 1876 protected: |
| 1837 // Virtual function in the superclass. Slow path of AllocateRaw. | 1877 // Virtual function in the superclass. Slow path of AllocateRaw. |
| 1838 HeapObject* SlowAllocateRaw(int size_in_bytes); | 1878 HeapObject* SlowAllocateRaw(int size_in_bytes); |
| 1839 | 1879 |
| 1840 // Virtual function in the superclass. Allocate linearly at the start of | 1880 // Virtual function in the superclass. Allocate linearly at the start of |
| 1841 // the page after current_page (there is assumed to be one). | 1881 // the page after current_page (there is assumed to be one). |
| 1842 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); | 1882 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); |
| 1843 | 1883 |
| (...skipping 48 matching lines...) |
| 1892 return !MapPointersEncodable() && live_maps <= CompactionThreshold(); | 1932 return !MapPointersEncodable() && live_maps <= CompactionThreshold(); |
| 1893 } | 1933 } |
| 1894 | 1934 |
| 1895 Address TopAfterCompaction(int live_maps) { | 1935 Address TopAfterCompaction(int live_maps) { |
| 1896 ASSERT(NeedsCompaction(live_maps)); | 1936 ASSERT(NeedsCompaction(live_maps)); |
| 1897 | 1937 |
| 1898 int pages_left = live_maps / kMapsPerPage; | 1938 int pages_left = live_maps / kMapsPerPage; |
| 1899 PageIterator it(this, PageIterator::ALL_PAGES); | 1939 PageIterator it(this, PageIterator::ALL_PAGES); |
| 1900 while (pages_left-- > 0) { | 1940 while (pages_left-- > 0) { |
| 1901 ASSERT(it.has_next()); | 1941 ASSERT(it.has_next()); |
| 1902 it.next()->ClearRSet(); | 1942 it.next()->SetRegionMarks(Page::kAllRegionsCleanMarks); |
| 1903 } | 1943 } |
| 1904 ASSERT(it.has_next()); | 1944 ASSERT(it.has_next()); |
| 1905 Page* top_page = it.next(); | 1945 Page* top_page = it.next(); |
| 1906 top_page->ClearRSet(); | 1946 top_page->SetRegionMarks(Page::kAllRegionsCleanMarks); |
| 1907 ASSERT(top_page->is_valid()); | 1947 ASSERT(top_page->is_valid()); |
| 1908 | 1948 |
| 1909 int offset = live_maps % kMapsPerPage * Map::kSize; | 1949 int offset = live_maps % kMapsPerPage * Map::kSize; |
| 1910 Address top = top_page->ObjectAreaStart() + offset; | 1950 Address top = top_page->ObjectAreaStart() + offset; |
| 1911 ASSERT(top < top_page->ObjectAreaEnd()); | 1951 ASSERT(top < top_page->ObjectAreaEnd()); |
| 1912 ASSERT(Contains(top)); | 1952 ASSERT(Contains(top)); |
| 1913 | 1953 |
| 1914 return top; | 1954 return top; |
| 1915 } | 1955 } |
| 1916 | 1956 |
| (...skipping 70 matching lines...) |
| 1987 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset). | 2027 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset). |
| 1988 // A large object always starts at Page::kObjectStartOffset to a page. | 2028 // A large object always starts at Page::kObjectStartOffset to a page. |
| 1989 // Large objects do not move during garbage collections. | 2029 // Large objects do not move during garbage collections. |
| 1990 | 2030 |
| 1991 // A LargeObjectChunk holds exactly one large object page with exactly one | 2031 // A LargeObjectChunk holds exactly one large object page with exactly one |
| 1992 // large object. | 2032 // large object. |
| 1993 class LargeObjectChunk { | 2033 class LargeObjectChunk { |
| 1994 public: | 2034 public: |
| 1995 // Allocates a new LargeObjectChunk that contains a large object page | 2035 // Allocates a new LargeObjectChunk that contains a large object page |
| 1996 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large | 2036 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large |
| 1997 // object and possibly extra remembered set words) bytes after the object | 2037 // object) bytes after the object area start of that page. |
| 1998 // area start of that page. The allocated chunk size is set in the output | 2038 // The allocated chunk size is set in the output parameter chunk_size. |
| 1999 // parameter chunk_size. | |
| 2000 static LargeObjectChunk* New(int size_in_bytes, | 2039 static LargeObjectChunk* New(int size_in_bytes, |
| 2001 size_t* chunk_size, | 2040 size_t* chunk_size, |
| 2002 Executability executable); | 2041 Executability executable); |
| 2003 | 2042 |
| 2004 // Interpret a raw address as a large object chunk. | 2043 // Interpret a raw address as a large object chunk. |
| 2005 static LargeObjectChunk* FromAddress(Address address) { | 2044 static LargeObjectChunk* FromAddress(Address address) { |
| 2006 return reinterpret_cast<LargeObjectChunk*>(address); | 2045 return reinterpret_cast<LargeObjectChunk*>(address); |
| 2007 } | 2046 } |
| 2008 | 2047 |
| 2009 // Returns the address of this chunk. | 2048 // Returns the address of this chunk. |
| 2010 Address address() { return reinterpret_cast<Address>(this); } | 2049 Address address() { return reinterpret_cast<Address>(this); } |
| 2011 | 2050 |
| 2012 // Accessors for the fields of the chunk. | 2051 // Accessors for the fields of the chunk. |
| 2013 LargeObjectChunk* next() { return next_; } | 2052 LargeObjectChunk* next() { return next_; } |
| 2014 void set_next(LargeObjectChunk* chunk) { next_ = chunk; } | 2053 void set_next(LargeObjectChunk* chunk) { next_ = chunk; } |
| 2015 | 2054 |
| 2016 size_t size() { return size_; } | 2055 size_t size() { return size_; } |
| 2017 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; } | 2056 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; } |
| 2018 | 2057 |
| 2019 // Returns the object in this chunk. | 2058 // Returns the object in this chunk. |
| 2020 inline HeapObject* GetObject(); | 2059 inline HeapObject* GetObject(); |
| 2021 | 2060 |
| 2022 // Given a requested size (including any extra remembered set words), | 2061 // Given a requested size, returns the physical size of a chunk to be |
| 2023 // returns the physical size of a chunk to be allocated. | 2062 // allocated. |
| 2024 static int ChunkSizeFor(int size_in_bytes); | 2063 static int ChunkSizeFor(int size_in_bytes); |
| 2025 | 2064 |
| 2026 // Given a chunk size, returns the object size it can accommodate (not | 2065 // Given a chunk size, returns the object size it can accommodate. Used by |
| 2027 // including any extra remembered set words). Used by | 2066 // LargeObjectSpace::Available. |
| 2028 // LargeObjectSpace::Available. Note that this can overestimate the size | |
| 2029 // of object that will fit in a chunk---if the object requires extra | |
| 2030 // remembered set words (eg, for large fixed arrays), the actual object | |
| 2031 // size for the chunk will be smaller than reported by this function. | |
| 2032 static int ObjectSizeFor(int chunk_size) { | 2067 static int ObjectSizeFor(int chunk_size) { |
| 2033 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0; | 2068 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0; |
| 2034 return chunk_size - Page::kPageSize - Page::kObjectStartOffset; | 2069 return chunk_size - Page::kPageSize - Page::kObjectStartOffset; |
| 2035 } | 2070 } |
| 2036 | 2071 |
| 2037 private: | 2072 private: |
| 2038 // A pointer to the next large object chunk in the space or NULL. | 2073 // A pointer to the next large object chunk in the space or NULL. |
| 2039 LargeObjectChunk* next_; | 2074 LargeObjectChunk* next_; |
| 2040 | 2075 |
| 2041 // The size of this chunk. | 2076 // The size of this chunk. |
| (...skipping 15 matching lines...) |
| 2057 // Releases internal resources, frees objects in this space. | 2092 // Releases internal resources, frees objects in this space. |
| 2058 void TearDown(); | 2093 void TearDown(); |
| 2059 | 2094 |
| 2060 // Allocates a (non-FixedArray, non-Code) large object. | 2095 // Allocates a (non-FixedArray, non-Code) large object. |
| 2061 Object* AllocateRaw(int size_in_bytes); | 2096 Object* AllocateRaw(int size_in_bytes); |
| 2062 // Allocates a large Code object. | 2097 // Allocates a large Code object. |
| 2063 Object* AllocateRawCode(int size_in_bytes); | 2098 Object* AllocateRawCode(int size_in_bytes); |
| 2064 // Allocates a large FixedArray. | 2099 // Allocates a large FixedArray. |
| 2065 Object* AllocateRawFixedArray(int size_in_bytes); | 2100 Object* AllocateRawFixedArray(int size_in_bytes); |
| 2066 | 2101 |
| 2067 // Available bytes for objects in this space, not including any extra | 2102 // Available bytes for objects in this space. |
| 2068 // remembered set words. | |
| 2069 int Available() { | 2103 int Available() { |
| 2070 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available()); | 2104 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available()); |
| 2071 } | 2105 } |
| 2072 | 2106 |
| 2073 virtual int Size() { | 2107 virtual int Size() { |
| 2074 return size_; | 2108 return size_; |
| 2075 } | 2109 } |
| 2076 | 2110 |
| 2077 int PageCount() { | 2111 int PageCount() { |
| 2078 return page_count_; | 2112 return page_count_; |
| 2079 } | 2113 } |
| 2080 | 2114 |
| 2081 // Finds an object for a given address, returns Failure::Exception() | 2115 // Finds an object for a given address, returns Failure::Exception() |
| 2082 // if it is not found. The function iterates through all objects in this | 2116 // if it is not found. The function iterates through all objects in this |
| 2083 // space, may be slow. | 2117 // space, may be slow. |
| 2084 Object* FindObject(Address a); | 2118 Object* FindObject(Address a); |
| 2085 | 2119 |
| 2086 // Clears remembered sets. | 2120 // Iterates objects covered by dirty regions. |
| 2087 void ClearRSet(); | 2121 void IterateDirtyRegions(ObjectSlotCallback func); |
| 2088 | |
| 2089 // Iterates objects whose remembered set bits are set. | |
| 2090 void IterateRSet(ObjectSlotCallback func); | |
| 2091 | 2122 |
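For the consuming side, a rough sketch of what iterating dirty regions amounts to: visit only regions whose dirty bit is set, and hand each word that satisfies the new-space predicate to the callback. This leans on the layout guarantee quoted at the top of this file (no garbage pointers in scanned areas). InNewSpace is stubbed out and all names are illustrative; the real code additionally clips the scan to the object area and the allocation watermark.

```cpp
#include <cstdint>

using ObjectSlotCallback = void (*)(uintptr_t* slot);

constexpr int kRegionSizeLog2 = 8;
constexpr uintptr_t kRegionSize = uintptr_t{1} << kRegionSizeLog2;

// Stand-in for Heap::InNewSpace(); the real predicate tests whether the word
// falls in the new-space address range.
bool InNewSpace(uintptr_t word) {
  (void)word;
  return false;
}

void IterateDirtyRegionsOfPage(uintptr_t page_start, uint32_t dirty_marks,
                               ObjectSlotCallback callback) {
  for (int region = 0; region < 32; region++) {
    if ((dirty_marks & (uint32_t{1} << region)) == 0) continue;  // clean
    uintptr_t* slot =
        reinterpret_cast<uintptr_t*>(page_start + region * kRegionSize);
    uintptr_t* end = slot + kRegionSize / sizeof(uintptr_t);
    for (; slot < end; slot++) {
      if (InNewSpace(*slot)) callback(slot);  // intergenerational pointer
    }
  }
}
```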
| 2092 // Frees unmarked objects. | 2123 // Frees unmarked objects. |
| 2093 void FreeUnmarkedObjects(); | 2124 void FreeUnmarkedObjects(); |
| 2094 | 2125 |
| 2095 // Checks whether a heap object is in this space; O(1). | 2126 // Checks whether a heap object is in this space; O(1). |
| 2096 bool Contains(HeapObject* obj); | 2127 bool Contains(HeapObject* obj); |
| 2097 | 2128 |
| 2098 // Checks whether the space is empty. | 2129 // Checks whether the space is empty. |
| 2099 bool IsEmpty() { return first_chunk_ == NULL; } | 2130 bool IsEmpty() { return first_chunk_ == NULL; } |
| 2100 | 2131 |
| 2101 // See the comments for ReserveSpace in the Space class. This has to be | 2132 // See the comments for ReserveSpace in the Space class. This has to be |
| 2102 // called after ReserveSpace has been called on the paged spaces, since they | 2133 // called after ReserveSpace has been called on the paged spaces, since they |
| 2103 // may use some memory, leaving less for large objects. | 2134 // may use some memory, leaving less for large objects. |
| 2104 virtual bool ReserveSpace(int bytes); | 2135 virtual bool ReserveSpace(int bytes); |
| 2105 | 2136 |
| 2106 #ifdef ENABLE_HEAP_PROTECTION | 2137 #ifdef ENABLE_HEAP_PROTECTION |
| 2107 // Protect/unprotect the space by marking it read-only/writable. | 2138 // Protect/unprotect the space by marking it read-only/writable. |
| 2108 void Protect(); | 2139 void Protect(); |
| 2109 void Unprotect(); | 2140 void Unprotect(); |
| 2110 #endif | 2141 #endif |
| 2111 | 2142 |
| 2112 #ifdef DEBUG | 2143 #ifdef DEBUG |
| 2113 virtual void Verify(); | 2144 virtual void Verify(); |
| 2114 virtual void Print(); | 2145 virtual void Print(); |
| 2115 void ReportStatistics(); | 2146 void ReportStatistics(); |
| 2116 void CollectCodeStatistics(); | 2147 void CollectCodeStatistics(); |
| 2117 // Dump the remembered sets in the space to stdout. | |
| 2118 void PrintRSet(); | |
| 2119 #endif | 2148 #endif |
| 2120 // Checks whether an address is in the object area in this space. It | 2149 // Checks whether an address is in the object area in this space. It |
| 2121 // iterates all objects in the space. May be slow. | 2150 // iterates all objects in the space. May be slow. |
| 2122 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); } | 2151 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); } |
| 2123 | 2152 |
| 2124 private: | 2153 private: |
| 2125 // The head of the linked list of large object chunks. | 2154 // The head of the linked list of large object chunks. |
| 2126 LargeObjectChunk* first_chunk_; | 2155 LargeObjectChunk* first_chunk_; |
| 2127 int size_; // allocated bytes | 2156 int size_; // allocated bytes |
| 2128 int page_count_; // number of chunks | 2157 int page_count_; // number of chunks |
| 2129 | 2158 |
| 2130 | 2159 |
| 2131 // Shared implementation of AllocateRaw, AllocateRawCode and | 2160 // Shared implementation of AllocateRaw, AllocateRawCode and |
| 2132 // AllocateRawFixedArray. | 2161 // AllocateRawFixedArray. |
| 2133 Object* AllocateRawInternal(int requested_size, | 2162 Object* AllocateRawInternal(int requested_size, |
| 2134 int object_size, | 2163 int object_size, |
| 2135 Executability executable); | 2164 Executability executable); |
| 2136 | 2165 |
| 2137 // Returns the number of extra bytes (rounded up to the nearest full word) | |
| 2138 // required for extra_object_bytes of extra pointers (in bytes). | |
| 2139 static inline int ExtraRSetBytesFor(int extra_object_bytes); | |
| 2140 | |
| 2141 friend class LargeObjectIterator; | 2166 friend class LargeObjectIterator; |
| 2142 | 2167 |
| 2143 public: | 2168 public: |
| 2144 TRACK_MEMORY("LargeObjectSpace") | 2169 TRACK_MEMORY("LargeObjectSpace") |
| 2145 }; | 2170 }; |
| 2146 | 2171 |
| 2147 | 2172 |
| 2148 class LargeObjectIterator: public ObjectIterator { | 2173 class LargeObjectIterator: public ObjectIterator { |
| 2149 public: | 2174 public: |
| 2150 explicit LargeObjectIterator(LargeObjectSpace* space); | 2175 explicit LargeObjectIterator(LargeObjectSpace* space); |
| 2151 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); | 2176 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); |
| 2152 | 2177 |
| 2153 HeapObject* next(); | 2178 HeapObject* next(); |
| 2154 | 2179 |
| 2155 // implementation of ObjectIterator. | 2180 // implementation of ObjectIterator. |
| 2156 virtual HeapObject* next_object() { return next(); } | 2181 virtual HeapObject* next_object() { return next(); } |
| 2157 | 2182 |
| 2158 private: | 2183 private: |
| 2159 LargeObjectChunk* current_; | 2184 LargeObjectChunk* current_; |
| 2160 HeapObjectCallback size_func_; | 2185 HeapObjectCallback size_func_; |
| 2161 }; | 2186 }; |
| 2162 | 2187 |
| 2163 | 2188 |
| 2164 } } // namespace v8::internal | 2189 } } // namespace v8::internal |
| 2165 | 2190 |
| 2166 #endif // V8_SPACES_H_ | 2191 #endif // V8_SPACES_H_ |