| OLD | NEW |
| 1 // Copyright 2006-2008 the V8 project authors. All rights reserved. | 1 // Copyright 2006-2008 the V8 project authors. All rights reserved. |
| 2 // Redistribution and use in source and binary forms, with or without | 2 // Redistribution and use in source and binary forms, with or without |
| 3 // modification, are permitted provided that the following conditions are | 3 // modification, are permitted provided that the following conditions are |
| 4 // met: | 4 // met: |
| 5 // | 5 // |
| 6 // * Redistributions of source code must retain the above copyright | 6 // * Redistributions of source code must retain the above copyright |
| 7 // notice, this list of conditions and the following disclaimer. | 7 // notice, this list of conditions and the following disclaimer. |
| 8 // * Redistributions in binary form must reproduce the above | 8 // * Redistributions in binary form must reproduce the above |
| 9 // copyright notice, this list of conditions and the following | 9 // copyright notice, this list of conditions and the following |
| 10 // disclaimer in the documentation and/or other materials provided | 10 // disclaimer in the documentation and/or other materials provided |
| (...skipping 27 matching lines...) |
| 38 // Heap structures: | 38 // Heap structures: |
| 39 // | 39 // |
| 40 // A JS heap consists of a young generation, an old generation, and a large | 40 // A JS heap consists of a young generation, an old generation, and a large |
| 41 // object space. The young generation is divided into two semispaces. A | 41 // object space. The young generation is divided into two semispaces. A |
| 42 // scavenger implements Cheney's copying algorithm. The old generation is | 42 // scavenger implements Cheney's copying algorithm. The old generation is |
| 43 // separated into a map space and an old object space. The map space contains | 43 // separated into a map space and an old object space. The map space contains |
| 44 // all (and only) map objects, the rest of old objects go into the old space. | 44 // all (and only) map objects, the rest of old objects go into the old space. |
| 45 // The old generation is collected by a mark-sweep-compact collector. | 45 // The old generation is collected by a mark-sweep-compact collector. |
| 46 // | 46 // |
| 47 // The semispaces of the young generation are contiguous. The old and map | 47 // The semispaces of the young generation are contiguous. The old and map |
| 48 // spaces consists of a list of pages. A page has a page header, a remembered | 48 // spaces consist of a list of pages. A page has a page header and an object |
| 49 // set area, and an object area. A page size is deliberately chosen as 8K | 49 // area. The page size is deliberately chosen to be 8K bytes. |
| 50 // bytes. The first word of a page is an opaque page header that has the | 50 // The first word of a page is an opaque page header that has the |
| 51 // address of the next page and its ownership information. The second word may | 51 // address of the next page and its ownership information. The second word may |
| 52 // have the allocation top address of this page. The next 248 bytes are | 52 // have the allocation top address of this page. Heap objects are aligned to |
| 53 // remembered sets. Heap objects are aligned to the pointer size (4 bytes). A | 53 // the pointer size. |
| 54 // remembered set bit corresponds to a pointer in the object area. | |
| 55 // | 54 // |
| 56 // There is a separate large object space for objects larger than | 55 // There is a separate large object space for objects larger than |
| 57 // Page::kMaxHeapObjectSize, so that they do not have to move during | 56 // Page::kMaxHeapObjectSize, so that they do not have to move during |
| 58 // collection. The large object space is paged and uses the same remembered | 57 // collection. The large object space is paged. Pages in large object space |
| 59 // set implementation. Pages in large object space may be larger than 8K. | 58 // may be larger than 8K. |
| 60 // | 59 // |
| 61 // NOTE: The mark-compact collector rebuilds the remembered set after a | 60 // A card marking write barrier is used to keep track of intergenerational |
| 62 // collection. It reuses first a few words of the remembered set for | 61 // references. Old space pages are divided into regions of Page::kRegionSize |
| 63 // bookkeeping relocation information. | 62 // bytes. Each region has a corresponding dirty bit in the page header, which |
| 64 | 63 // is set if the region might contain pointers to new space. For details of |
| 64 // the dirty bit encoding see comments in the Page::GetRegionNumberForAddress() |
| 65 // method body. |
| 66 // |
| 67 // During scavenges and mark-sweep collections we iterate intergenerational |
| 68 // pointers without decoding heap object maps, so if a page belongs to old |
| 69 // pointer space or large object space it is essential to guarantee that |
| 70 // the page does not contain any garbage pointers to new space: every |
| 71 // pointer-aligned word which satisfies the Heap::InNewSpace() predicate |
| 72 // must be a pointer to a live heap object in new space. Thus objects in old |
| 73 // pointer and large object spaces should have a special layout (e.g. no bare |
| 74 // integer fields). This requirement does not apply to the map space, which |
| 75 // is iterated in a special fashion. However, we still require pointer fields |
| 76 // of dead maps to be cleared. |
| 77 // |
| 78 // To enable lazy cleaning of old space pages we use the notion of an |
| 79 // allocation watermark. Every pointer below the watermark is considered to |
| 80 // be well formed. The page allocation watermark is not necessarily equal to |
| 81 // the page allocation top, but all live objects on a page must reside below |
| 82 // the allocation watermark. During a scavenge the watermark might be bumped |
| 83 // and invalid pointers might appear below it. To avoid following them we |
| 84 // store a valid watermark in a special field in the page header and set the |
| 85 // page's WATERMARK_INVALIDATED flag. For details see comments in the |
| 86 // Page::SetAllocationWatermark() method body. |
| 87 // |
| 65 | 88 |
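The region arithmetic implied by the new comment can be sketched directly from
the constants introduced further down in this header (kPageSizeBits,
kRegionSizeLog2). A minimal, self-contained sketch; the free-standing helper
names are illustrative, not V8's own:

    #include <stdint.h>

    static const int kPageSizeBits = 13;                     // 8K pages
    static const uintptr_t kPageAlignmentMask = (1u << kPageSizeBits) - 1;
    static const int kRegionSizeLog2 = 8;                    // 256-byte regions

    // Region index of an address within its page: 0..31 for an 8K page,
    // so a single uint32_t of dirty marks covers the whole page.
    static int RegionNumberForAddress(uintptr_t addr) {
      return static_cast<int>((addr & kPageAlignmentMask) >> kRegionSizeLog2);
    }

    // Single-bit mask selecting that region's dirty bit.
    static uint32_t RegionMaskForAddress(uintptr_t addr) {
      return 1u << RegionNumberForAddress(addr);
    }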
| 66 // Some assertion macros used in the debugging mode. | 89 // Some assertion macros used in the debugging mode. |
| 67 | 90 |
| 68 #define ASSERT_PAGE_ALIGNED(address) \ | 91 #define ASSERT_PAGE_ALIGNED(address) \ |
| 69 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) | 92 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) |
| 70 | 93 |
| 71 #define ASSERT_OBJECT_ALIGNED(address) \ | 94 #define ASSERT_OBJECT_ALIGNED(address) \ |
| 72 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) | 95 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) |
| 73 | 96 |
| 74 #define ASSERT_MAP_ALIGNED(address) \ | 97 #define ASSERT_MAP_ALIGNED(address) \ |
| 75 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0) | 98 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0) |
| 76 | 99 |
| 77 #define ASSERT_OBJECT_SIZE(size) \ | 100 #define ASSERT_OBJECT_SIZE(size) \ |
| 78 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize)) | 101 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize)) |
| 79 | 102 |
| 80 #define ASSERT_PAGE_OFFSET(offset) \ | 103 #define ASSERT_PAGE_OFFSET(offset) \ |
| 81 ASSERT((Page::kObjectStartOffset <= offset) \ | 104 ASSERT((Page::kObjectStartOffset <= offset) \ |
| 82 && (offset <= Page::kPageSize)) | 105 && (offset <= Page::kPageSize)) |
| 83 | 106 |
| 84 #define ASSERT_MAP_PAGE_INDEX(index) \ | 107 #define ASSERT_MAP_PAGE_INDEX(index) \ |
| 85 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) | 108 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) |
| 86 | 109 |
| 87 | 110 |
| 88 class PagedSpace; | 111 class PagedSpace; |
| 89 class MemoryAllocator; | 112 class MemoryAllocator; |
| 90 class AllocationInfo; | 113 class AllocationInfo; |
| 91 | 114 |
| 92 // ----------------------------------------------------------------------------- | 115 // ----------------------------------------------------------------------------- |
| 93 // A page normally has 8K bytes. Large object pages may be larger. A page | 116 // A page normally has 8K bytes. Large object pages may be larger. A page |
| 94 // address is always aligned to the 8K page size. A page is divided into | 117 // address is always aligned to the 8K page size. |
| 95 // three areas: the first two words are used for bookkeeping, the next 248 | |
| 96 // bytes are used as remembered set, and the rest of the page is the object | |
| 97 // area. | |
| 98 // | 118 // |
| 99 // Pointers are aligned to the pointer size (4), only 1 bit is needed | 119 // Each page starts with a header of size Page::kPageHeaderSize which |
| 100 // for a pointer in the remembered set. Given an address, its remembered set | 120 // contains bookkeeping data. |
| 101 // bit position (offset from the start of the page) is calculated by dividing | |
| 102 // its page offset by 32. Therefore, the object area in a page starts at the | |
| 103 // 256th byte (8K/32). Bytes 0 to 255 do not need the remembered set, so that | |
| 104 // the first two words (64 bits) in a page can be used for other purposes. | |
| 105 // | |
| 106 // On the 64-bit platform, we add an offset to the start of the remembered set, | |
| 107 // and pointers are aligned to 8-byte pointer size. This means that we need | |
| 108 // only 128 bytes for the RSet, and only get two bytes free in the RSet's RSet. | |
| 109 // For this reason we add an offset to get room for the Page data at the start. | |
| 110 // | 121 // |
| 111 // The mark-compact collector transforms a map pointer into a page index and a | 122 // The mark-compact collector transforms a map pointer into a page index and a |
| 112 // page offset. The excact encoding is described in the comments for | 123 // page offset. The exact encoding is described in the comments for |
| 113 // class MapWord in objects.h. | 124 // class MapWord in objects.h. |
| 114 // | 125 // |
| 115 // The only way to get a page pointer is by calling factory methods: | 126 // The only way to get a page pointer is by calling factory methods: |
| 116 // Page* p = Page::FromAddress(addr); or | 127 // Page* p = Page::FromAddress(addr); or |
| 117 // Page* p = Page::FromAllocationTop(top); | 128 // Page* p = Page::FromAllocationTop(top); |
| 118 class Page { | 129 class Page { |
| 119 public: | 130 public: |
| 120 // Returns the page containing a given address. The address ranges | 131 // Returns the page containing a given address. The address ranges |
| 121 // from [page_addr .. page_addr + kPageSize[ | 132 // from [page_addr .. page_addr + kPageSize[ |
| 122 // | 133 // |
| (...skipping 20 matching lines...) |
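Since pages are aligned to the 8K page size, the factory method can recover the
page from any interior address by masking. A sketch under that assumption,
treating Address as a raw integer and reusing kPageAlignmentMask for brevity:

    // The Page object lives at the page start, so clearing the low
    // kPageSizeBits bits of any interior address yields its page.
    static Page* FromAddress(uintptr_t a) {
      return reinterpret_cast<Page*>(a & ~kPageAlignmentMask);
    }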
| 143 | 154 |
| 144 // Checks whether this is a valid page address. | 155 // Checks whether this is a valid page address. |
| 145 bool is_valid() { return address() != NULL; } | 156 bool is_valid() { return address() != NULL; } |
| 146 | 157 |
| 147 // Returns the next page of this page. | 158 // Returns the next page of this page. |
| 148 inline Page* next_page(); | 159 inline Page* next_page(); |
| 149 | 160 |
| 150 // Return the end of allocation in this page. Undefined for unused pages. | 161 // Return the end of allocation in this page. Undefined for unused pages. |
| 151 inline Address AllocationTop(); | 162 inline Address AllocationTop(); |
| 152 | 163 |
| 164 // Return the allocation watermark for the page. |
| 165 // For old space pages it is guaranteed that the area under the watermark |
| 166 // does not contain any garbage pointers to new space. |
| 167 inline Address AllocationWatermark(); |
| 168 |
| 169 // Return the allocation watermark offset from the beginning of the page. |
| 170 inline uint32_t AllocationWatermarkOffset(); |
| 171 |
| 172 inline void SetAllocationWatermark(Address allocation_watermark); |
| 173 |
| 174 inline void SetCachedAllocationWatermark(Address allocation_watermark); |
| 175 inline Address CachedAllocationWatermark(); |
| 176 |
| 153 // Returns the start address of the object area in this page. | 177 // Returns the start address of the object area in this page. |
| 154 Address ObjectAreaStart() { return address() + kObjectStartOffset; } | 178 Address ObjectAreaStart() { return address() + kObjectStartOffset; } |
| 155 | 179 |
| 156 // Returns the end address (exclusive) of the object area in this page. | 180 // Returns the end address (exclusive) of the object area in this page. |
| 157 Address ObjectAreaEnd() { return address() + Page::kPageSize; } | 181 Address ObjectAreaEnd() { return address() + Page::kPageSize; } |
| 158 | 182 |
| 159 // Returns the start address of the remembered set area. | |
| 160 Address RSetStart() { return address() + kRSetStartOffset; } | |
| 161 | |
| 162 // Returns the end address of the remembered set area (exclusive). | |
| 163 Address RSetEnd() { return address() + kRSetEndOffset; } | |
| 164 | |
| 165 // Checks whether an address is page aligned. | 183 // Checks whether an address is page aligned. |
| 166 static bool IsAlignedToPageSize(Address a) { | 184 static bool IsAlignedToPageSize(Address a) { |
| 167 return 0 == (OffsetFrom(a) & kPageAlignmentMask); | 185 return 0 == (OffsetFrom(a) & kPageAlignmentMask); |
| 168 } | 186 } |
| 169 | 187 |
| 170 // True if this page was in use before current compaction started. | 188 // True if this page was in use before current compaction started. |
| 171 // Result is valid only for pages owned by paged spaces and | 189 // Result is valid only for pages owned by paged spaces and |
| 172 // only after PagedSpace::PrepareForMarkCompact was called. | 190 // only after PagedSpace::PrepareForMarkCompact was called. |
| 173 inline bool WasInUseBeforeMC(); | 191 inline bool WasInUseBeforeMC(); |
| 174 | 192 |
| (...skipping 11 matching lines...) |
| 186 return offset; | 204 return offset; |
| 187 } | 205 } |
| 188 | 206 |
| 189 // Returns the address for a given offset in this page. | 207 // Returns the address for a given offset in this page. |
| 190 Address OffsetToAddress(int offset) { | 208 Address OffsetToAddress(int offset) { |
| 191 ASSERT_PAGE_OFFSET(offset); | 209 ASSERT_PAGE_OFFSET(offset); |
| 192 return address() + offset; | 210 return address() + offset; |
| 193 } | 211 } |
| 194 | 212 |
| 195 // --------------------------------------------------------------------- | 213 // --------------------------------------------------------------------- |
| 196 // Remembered set support | 214 // Card marking support |
| 197 | 215 |
| 198 // Clears remembered set in this page. | 216 static const uint32_t kAllRegionsCleanMarks = 0x0; |
| 199 inline void ClearRSet(); | |
| 200 | 217 |
| 201 // Return the address of the remembered set word corresponding to an | 218 inline uint32_t GetRegionMarks(); |
| 202 // object address/offset pair, and the bit encoded as a single-bit | 219 inline void SetRegionMarks(uint32_t dirty); |
| 203 // mask in the output parameter 'bitmask'. | |
| 204 INLINE(static Address ComputeRSetBitPosition(Address address, int offset, | |
| 205 uint32_t* bitmask)); | |
| 206 | 220 |
| 207 // Sets the corresponding remembered set bit for a given address. | 221 inline uint32_t GetRegionMaskForAddress(Address addr); |
| 208 INLINE(static void SetRSet(Address address, int offset)); | 222 inline int GetRegionNumberForAddress(Address addr); |
| 209 | 223 |
| 210 // Clears the corresponding remembered set bit for a given address. | 224 inline void MarkRegionDirty(Address addr); |
| 211 static inline void UnsetRSet(Address address, int offset); | 225 inline bool IsRegionDirty(Address addr); |
| 212 | 226 |
| 213 // Checks whether the remembered set bit for a given address is set. | 227 inline void ClearRegionMarks(Address start, |
| 214 static inline bool IsRSetSet(Address address, int offset); | 228 Address end, |
| 215 | 229 bool reaches_limit); |
| 216 #ifdef DEBUG | |
| 217 // Use a state to mark whether remembered set space can be used for other | |
| 218 // purposes. | |
| 219 enum RSetState { IN_USE, NOT_IN_USE }; | |
| 220 static bool is_rset_in_use() { return rset_state_ == IN_USE; } | |
| 221 static void set_rset_state(RSetState state) { rset_state_ = state; } | |
| 222 #endif | |
| 223 | 230 |
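Taken together, these methods implement the card marking scheme from the file
comment. A hypothetical write-barrier slow path built on them (RecordWrite is
not declared in this header; MarkRegionDirty and Page::FromAddress are):

    // After storing a pointer into *slot, mark the 256-byte region holding
    // slot as dirty so the next scavenge rescans it for new-space pointers.
    void RecordWrite(Address slot) {
      Page::FromAddress(slot)->MarkRegionDirty(slot);
    }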
| 224 // Page size in bytes. This must be a multiple of the OS page size. | 231 // Page size in bytes. This must be a multiple of the OS page size. |
| 225 static const int kPageSize = 1 << kPageSizeBits; | 232 static const int kPageSize = 1 << kPageSizeBits; |
| 226 | 233 |
| 227 // Page size mask. | 234 // Page size mask. |
| 228 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1; | 235 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1; |
| 229 | 236 |
| 230 // The offset of the remembered set in a page, in addition to the empty bytes | 237 static const int kPageHeaderSize = kPointerSize + kPointerSize + kIntSize + |
| 231 // formed as the remembered bits of the remembered set itself. | 238 kIntSize + kPointerSize; |
| 232 #ifdef V8_TARGET_ARCH_X64 | |
| 233 static const int kRSetOffset = 4 * kPointerSize; // Room for four pointers. | |
| 234 #else | |
| 235 static const int kRSetOffset = 0; | |
| 236 #endif | |
| 237 // The end offset of the remembered set in a page | |
| 238 // (heaps are aligned to pointer size). | |
| 239 static const int kRSetEndOffset = kRSetOffset + kPageSize / kBitsPerPointer; | |
| 240 | 239 |
| 241 // The start offset of the object area in a page. | 240 // The start offset of the object area in a page. |
| 242 // This needs to be at least (bits per uint32_t) * kBitsPerPointer, | 241 static const int kObjectStartOffset = MAP_POINTER_ALIGN(kPageHeaderSize); |
| 243 // to align start of rset to a uint32_t address. | |
| 244 static const int kObjectStartOffset = 256; | |
| 245 | |
| 246 // The start offset of the used part of the remembered set in a page. | |
| 247 static const int kRSetStartOffset = kRSetOffset + | |
| 248 kObjectStartOffset / kBitsPerPointer; | |
| 249 | 242 |
| 250 // Object area size in bytes. | 243 // Object area size in bytes. |
| 251 static const int kObjectAreaSize = kPageSize - kObjectStartOffset; | 244 static const int kObjectAreaSize = kPageSize - kObjectStartOffset; |
| 252 | 245 |
| 253 // Maximum object size that fits in a page. | 246 // Maximum object size that fits in a page. |
| 254 static const int kMaxHeapObjectSize = kObjectAreaSize; | 247 static const int kMaxHeapObjectSize = kObjectAreaSize; |
| 255 | 248 |
| 249 static const int kDirtyFlagOffset = 2 * kPointerSize; |
| 250 static const int kRegionSizeLog2 = 8; |
| 251 static const int kRegionSize = 1 << kRegionSizeLog2; |
| 252 static const intptr_t kRegionAlignmentMask = (kRegionSize - 1); |
| 253 |
| 254 STATIC_CHECK(kRegionSize == kPageSize / kBitsPerInt); |
| 255 |
| 256 enum PageFlag { | 256 enum PageFlag { |
| 257 IS_NORMAL_PAGE = 1 << 0, | 257 IS_NORMAL_PAGE = 1 << 0, |
| 258 WAS_IN_USE_BEFORE_MC = 1 << 1 | 258 WAS_IN_USE_BEFORE_MC = 1 << 1, |
| 259 |
| 260 // The page allocation watermark was bumped by preallocation during scavenge. |
| 261 // The correct watermark can be retrieved via CachedAllocationWatermark(). |
| 262 WATERMARK_INVALIDATED = 1 << 2 |
| 259 }; | 263 }; |
| 260 | 264 |
| 265 // To avoid an additional WATERMARK_INVALIDATED flag clearing pass during |
| 266 // scavenge we just invalidate the watermark on each old space page after |
| 267 // processing it. Then, at the beginning of the next scavenge, we flip the |
| 268 // meaning of the WATERMARK_INVALIDATED flag so that each page is again |
| 269 // marked as having a valid watermark. |
| 270 // |
| 271 // The following invariant must hold for pages in old pointer and map spaces: |
| 272 // If a page is in use then it is marked as having an invalid watermark at |
| 273 // the beginning and at the end of any GC. |
| 274 // |
| 275 // This invariant guarantees that after flipping flag meaning at the |
| 276 // beginning of scavenge all pages in use will be marked as having valid |
| 277 // watermark. |
| 278 static inline void FlipMeaningOfInvalidatedWatermarkFlag(); |
| 279 |
| 280 // Returns true if the page allocation watermark was not altered during |
| 281 // scavenge. |
| 282 inline bool IsWatermarkValid(); |
| 283 |
| 284 inline void InvalidateWatermark(bool value); |
| 285 |
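The comments pin the flag-flipping scheme down well enough to sketch it; the
real definitions live in the inline header, so treat these bodies as a reading
aid rather than the shipped code:

    // watermark_invalidated_mark_ toggles between 0 and WATERMARK_INVALIDATED.
    // A page's watermark counts as invalidated iff its flag bit equals the
    // current mark, so flipping the mark revalidates every page at once.
    void Page::FlipMeaningOfInvalidatedWatermarkFlag() {
      watermark_invalidated_mark_ ^= WATERMARK_INVALIDATED;
    }

    bool Page::IsWatermarkValid() {
      return (flags_ & WATERMARK_INVALIDATED) != watermark_invalidated_mark_;
    }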
| 261 inline bool GetPageFlag(PageFlag flag); | 286 inline bool GetPageFlag(PageFlag flag); |
| 262 inline void SetPageFlag(PageFlag flag, bool value); | 287 inline void SetPageFlag(PageFlag flag, bool value); |
| 288 inline void ClearPageFlags(); |
| 289 |
| 290 static const int kAllocationWatermarkOffsetShift = 3; |
| 291 static const int kAllocationWatermarkOffsetBits = kPageSizeBits + 1; |
| 292 static const uint32_t kAllocationWatermarkOffsetMask = |
| 293 ((1 << kAllocationWatermarkOffsetBits) - 1) << |
| 294 kAllocationWatermarkOffsetShift; |
| 295 |
| 296 static const uint32_t kFlagsMask = |
| 297 ((1 << kAllocationWatermarkOffsetShift) - 1); |
| 298 |
| 299 STATIC_CHECK(kBitsPerInt - kAllocationWatermarkOffsetShift >= |
| 300 kAllocationWatermarkOffsetBits); |
| 301 |
| 302 // This field contains the meaning of the WATERMARK_INVALIDATED flag. |
| 303 // Instead of clearing this flag from all pages we just flip |
| 304 // its meaning at the beginning of a scavenge. |
| 305 static intptr_t watermark_invalidated_mark_; |
| 263 | 306 |
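Given these constants, the watermark offset and the page flags share flags_
roughly as follows. A simplified sketch of the accessors declared earlier (the
real SetAllocationWatermark must also interact with the WATERMARK_INVALIDATED
machinery above):

    // The low kAllocationWatermarkOffsetShift bits of flags_ hold page flags;
    // the next kAllocationWatermarkOffsetBits bits hold the watermark offset.
    void Page::SetAllocationWatermark(Address allocation_watermark) {
      uint32_t offset = static_cast<uint32_t>(allocation_watermark - address());
      flags_ = (flags_ & kFlagsMask) |
               ((offset << kAllocationWatermarkOffsetShift) &
                kAllocationWatermarkOffsetMask);
    }

    uint32_t Page::AllocationWatermarkOffset() {
      return static_cast<uint32_t>(flags_ & kAllocationWatermarkOffsetMask) >>
             kAllocationWatermarkOffsetShift;
    }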
| 264 //--------------------------------------------------------------------------- | 307 //--------------------------------------------------------------------------- |
| 265 // Page header description. | 308 // Page header description. |
| 266 // | 309 // |
| 267 // If a page is not in the large object space, the first word, | 310 // If a page is not in the large object space, the first word, |
| 268 // opaque_header, encodes the next page address (aligned to kPageSize 8K) | 311 // opaque_header, encodes the next page address (aligned to kPageSize 8K) |
| 269 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use | 312 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use |
| 270 // opaque_header. The value range of the opaque_header is [0..kPageSize[, | 313 // opaque_header. The value range of the opaque_header is [0..kPageSize[, |
| 271 // or [next_page_start, next_page_end[. It cannot point to a valid address | 314 // or [next_page_start, next_page_end[. It cannot point to a valid address |
| 272 // in the current page. If a page is in the large object space, the first | 315 // in the current page. If a page is in the large object space, the first |
| 273 // word *may* (if the page start and large object chunk start are the | 316 // word *may* (if the page start and large object chunk start are the |
| 274 // same) contain the address of the next large object chunk. | 317 // same) contain the address of the next large object chunk. |
| 275 intptr_t opaque_header; | 318 intptr_t opaque_header; |
| 276 | 319 |
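A sketch of the opaque_header encoding just described; the helper is
hypothetical, since only MemoryAllocator builds and reads this value:

    // The next page's address is 8K-aligned, so its low kPageSizeBits bits
    // are free to carry the chunk id, which is always smaller than kPageSize.
    static intptr_t MakeOpaqueHeader(uintptr_t next_page_address, int chunk_id) {
      return static_cast<intptr_t>(next_page_address |
                                   (chunk_id & kPageAlignmentMask));
    }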
| 277 // If the page is not in the large object space, the low-order bit of the | 320 // If the page is not in the large object space, the low-order bit of the |
| 278 // second word is set. If the page is in the large object space, the | 321 // second word is set. If the page is in the large object space, the |
| 279 // second word *may* (if the page start and large object chunk start are | 322 // second word *may* (if the page start and large object chunk start are |
| 280 // the same) contain the large object chunk size. In either case, the | 323 // the same) contain the large object chunk size. In either case, the |
| 281 // low-order bit for large object pages will be cleared. | 324 // low-order bit for large object pages will be cleared. |
| 282 // For normal pages this word is used to store various page flags. | 325 // For normal pages this word is used to store page flags and |
| 283 int flags; | 326 // the offset of the allocation top. |
| 327 intptr_t flags_; |
| 284 | 328 |
| 285 // The following fields may overlap with remembered set, they can only | 329 // This field contains dirty marks for regions covering the page. Only dirty |
| 286 // be used in the mark-compact collector when remembered set is not | 330 // regions might contain intergenerational references. |
| 287 // used. | 331 // Only 32 dirty marks are supported, so for large object pages several |
| 332 // regions might be mapped to a single dirty mark. |
| 333 uint32_t dirty_regions_; |
| 288 | 334 |
| 289 // The index of the page in its owner space. | 335 // The index of the page in its owner space. |
| 290 int mc_page_index; | 336 int mc_page_index; |
| 291 | 337 |
| 292 // The allocation pointer after relocating objects to this page. | 338 // During mark-compact collections this field contains the forwarding address |
| 293 Address mc_relocation_top; | 339 // of the first live object in this page. |
| 294 | 340 // During scavenge collections this field is used to store the allocation |
| 295 // The forwarding address of the first live object in this page. | 341 // watermark if it is altered during the scavenge. |
| 296 Address mc_first_forwarded; | 342 Address mc_first_forwarded; |
| 297 | |
| 298 #ifdef DEBUG | |
| 299 private: | |
| 300 static RSetState rset_state_; // state of the remembered set | |
| 301 #endif | |
| 302 }; | 343 }; |
| 303 | 344 |
| 304 | 345 |
| 305 // ---------------------------------------------------------------------------- | 346 // ---------------------------------------------------------------------------- |
| 306 // Space is the abstract superclass for all allocation spaces. | 347 // Space is the abstract superclass for all allocation spaces. |
| 307 class Space : public Malloced { | 348 class Space : public Malloced { |
| 308 public: | 349 public: |
| 309 Space(AllocationSpace id, Executability executable) | 350 Space(AllocationSpace id, Executability executable) |
| 310 : id_(id), executable_(executable) {} | 351 : id_(id), executable_(executable) {} |
| 311 | 352 |
| (...skipping 602 matching lines...) |
| 914 | 955 |
| 915 // Given an address occupied by a live object, return that object if it is | 956 // Given an address occupied by a live object, return that object if it is |
| 916 // in this space, or Failure::Exception() if it is not. The implementation | 957 // in this space, or Failure::Exception() if it is not. The implementation |
| 917 // iterates over objects in the page containing the address; the cost is | 958 // iterates over objects in the page containing the address; the cost is |
| 918 // linear in the number of objects in the page. It may be slow. | 959 // linear in the number of objects in the page. It may be slow. |
| 919 Object* FindObject(Address addr); | 960 Object* FindObject(Address addr); |
| 920 | 961 |
| 921 // Checks whether page is currently in use by this space. | 962 // Checks whether page is currently in use by this space. |
| 922 bool IsUsed(Page* page); | 963 bool IsUsed(Page* page); |
| 923 | 964 |
| 924 // Clears remembered sets of pages in this space. | 965 void MarkAllPagesClean(); |
| 925 void ClearRSet(); | |
| 926 | 966 |
| 927 // Prepares for a mark-compact GC. | 967 // Prepares for a mark-compact GC. |
| 928 virtual void PrepareForMarkCompact(bool will_compact); | 968 virtual void PrepareForMarkCompact(bool will_compact); |
| 929 | 969 |
| 930 // The top of allocation in a page in this space. Undefined if page is unused. | 970 // The top of allocation in a page in this space. Undefined if page is unused. |
| 931 Address PageAllocationTop(Page* page) { | 971 Address PageAllocationTop(Page* page) { |
| 932 return page == TopPageOf(allocation_info_) ? top() | 972 return page == TopPageOf(allocation_info_) ? top() |
| 933 : PageAllocationLimit(page); | 973 : PageAllocationLimit(page); |
| 934 } | 974 } |
| 935 | 975 |
| 936 // The limit of allocation for a page in this space. | 976 // The limit of allocation for a page in this space. |
| 937 virtual Address PageAllocationLimit(Page* page) = 0; | 977 virtual Address PageAllocationLimit(Page* page) = 0; |
| 938 | 978 |
| 979 void FlushTopPageWatermark() { |
| 980 AllocationTopPage()->SetCachedAllocationWatermark(top()); |
| 981 AllocationTopPage()->InvalidateWatermark(true); |
| 982 } |
| 983 |
| 939 // Current capacity without growing (Size() + Available() + Waste()). | 984 // Current capacity without growing (Size() + Available() + Waste()). |
| 940 int Capacity() { return accounting_stats_.Capacity(); } | 985 int Capacity() { return accounting_stats_.Capacity(); } |
| 941 | 986 |
| 942 // Total amount of memory committed for this space. For paged | 987 // Total amount of memory committed for this space. For paged |
| 943 // spaces this equals the capacity. | 988 // spaces this equals the capacity. |
| 944 int CommittedMemory() { return Capacity(); } | 989 int CommittedMemory() { return Capacity(); } |
| 945 | 990 |
| 946 // Available bytes without growing. | 991 // Available bytes without growing. |
| 947 int Available() { return accounting_stats_.Available(); } | 992 int Available() { return accounting_stats_.Available(); } |
| 948 | 993 |
| (...skipping 34 matching lines...) |
| 983 } | 1028 } |
| 984 | 1029 |
| 985 // --------------------------------------------------------------------------- | 1030 // --------------------------------------------------------------------------- |
| 986 // Mark-compact collection support functions | 1031 // Mark-compact collection support functions |
| 987 | 1032 |
| 988 // Set the relocation point to the beginning of the space. | 1033 // Set the relocation point to the beginning of the space. |
| 989 void MCResetRelocationInfo(); | 1034 void MCResetRelocationInfo(); |
| 990 | 1035 |
| 991 // Writes relocation info to the top page. | 1036 // Writes relocation info to the top page. |
| 992 void MCWriteRelocationInfoToPage() { | 1037 void MCWriteRelocationInfoToPage() { |
| 993 TopPageOf(mc_forwarding_info_)->mc_relocation_top = mc_forwarding_info_.top; | 1038 TopPageOf(mc_forwarding_info_)-> |
| 1039 SetAllocationWatermark(mc_forwarding_info_.top); |
| 994 } | 1040 } |
| 995 | 1041 |
| 996 // Computes the offset of a given address in this space to the beginning | 1042 // Computes the offset of a given address in this space to the beginning |
| 997 // of the space. | 1043 // of the space. |
| 998 int MCSpaceOffsetForAddress(Address addr); | 1044 int MCSpaceOffsetForAddress(Address addr); |
| 999 | 1045 |
| 1000 // Updates the allocation pointer to the relocation top after a mark-compact | 1046 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1001 // collection. | 1047 // collection. |
| 1002 virtual void MCCommitRelocationInfo() = 0; | 1048 virtual void MCCommitRelocationInfo() = 0; |
| 1003 | 1049 |
| (...skipping 97 matching lines...) |
| 1101 | 1147 |
| 1102 // Slow path of AllocateRaw. This function is space-dependent. | 1148 // Slow path of AllocateRaw. This function is space-dependent. |
| 1103 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0; | 1149 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0; |
| 1104 | 1150 |
| 1105 // Slow path of MCAllocateRaw. | 1151 // Slow path of MCAllocateRaw. |
| 1106 HeapObject* SlowMCAllocateRaw(int size_in_bytes); | 1152 HeapObject* SlowMCAllocateRaw(int size_in_bytes); |
| 1107 | 1153 |
| 1108 #ifdef DEBUG | 1154 #ifdef DEBUG |
| 1109 // Returns the number of total pages in this space. | 1155 // Returns the number of total pages in this space. |
| 1110 int CountTotalPages(); | 1156 int CountTotalPages(); |
| 1111 | |
| 1112 void DoPrintRSet(const char* space_name); | |
| 1113 #endif | 1157 #endif |
| 1114 private: | 1158 private: |
| 1115 | 1159 |
| 1116 // Returns a pointer to the page of the relocation pointer. | 1160 // Returns a pointer to the page of the relocation pointer. |
| 1117 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); } | 1161 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); } |
| 1118 | 1162 |
| 1119 friend class PageIterator; | 1163 friend class PageIterator; |
| 1120 }; | 1164 }; |
| 1121 | 1165 |
| 1122 | 1166 |
| (...skipping 616 matching lines...) |
| 1739 // Give a block of memory to the space's free list. It might be added to | 1783 // Give a block of memory to the space's free list. It might be added to |
| 1740 // the free list or accounted as waste. | 1784 // the free list or accounted as waste. |
| 1742 // If add_to_freelist is false then only the accounting stats are updated | 1786 // If add_to_freelist is false then only the accounting stats are updated |
| 1743 // and no attempt to add the area to the free list is made. | 1787 // and no attempt to add the area to the free list is made. |
| 1743 void Free(Address start, int size_in_bytes, bool add_to_freelist) { | 1787 void Free(Address start, int size_in_bytes, bool add_to_freelist) { |
| 1744 accounting_stats_.DeallocateBytes(size_in_bytes); | 1788 accounting_stats_.DeallocateBytes(size_in_bytes); |
| 1745 | 1789 |
| 1746 if (add_to_freelist) { | 1790 if (add_to_freelist) { |
| 1747 int wasted_bytes = free_list_.Free(start, size_in_bytes); | 1791 int wasted_bytes = free_list_.Free(start, size_in_bytes); |
| 1748 accounting_stats_.WasteBytes(wasted_bytes); | 1792 accounting_stats_.WasteBytes(wasted_bytes); |
| 1793 } else { |
| 1794 #ifdef DEBUG |
| 1795 MemoryAllocator::ZapBlock(start, size_in_bytes); |
| 1796 #endif |
| 1749 } | 1797 } |
| 1750 } | 1798 } |
| 1751 | 1799 |
| 1752 // Prepare for full garbage collection. Resets the relocation pointer and | 1800 // Prepare for full garbage collection. Resets the relocation pointer and |
| 1753 // clears the free list. | 1801 // clears the free list. |
| 1754 virtual void PrepareForMarkCompact(bool will_compact); | 1802 virtual void PrepareForMarkCompact(bool will_compact); |
| 1755 | 1803 |
| 1756 // Updates the allocation pointer to the relocation top after a mark-compact | 1804 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1757 // collection. | 1805 // collection. |
| 1758 virtual void MCCommitRelocationInfo(); | 1806 virtual void MCCommitRelocationInfo(); |
| 1759 | 1807 |
| 1760 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); | 1808 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); |
| 1761 | 1809 |
| 1762 #ifdef DEBUG | 1810 #ifdef DEBUG |
| 1763 // Reports statistics for the space | 1811 // Reports statistics for the space |
| 1764 void ReportStatistics(); | 1812 void ReportStatistics(); |
| 1765 // Dump the remembered sets in the space to stdout. | |
| 1766 void PrintRSet(); | |
| 1767 #endif | 1813 #endif |
| 1768 | 1814 |
| 1769 protected: | 1815 protected: |
| 1770 // Virtual function in the superclass. Slow path of AllocateRaw. | 1816 // Virtual function in the superclass. Slow path of AllocateRaw. |
| 1771 HeapObject* SlowAllocateRaw(int size_in_bytes); | 1817 HeapObject* SlowAllocateRaw(int size_in_bytes); |
| 1772 | 1818 |
| 1773 // Virtual function in the superclass. Allocate linearly at the start of | 1819 // Virtual function in the superclass. Allocate linearly at the start of |
| 1774 // the page after current_page (there is assumed to be one). | 1820 // the page after current_page (there is assumed to be one). |
| 1775 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); | 1821 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); |
| 1776 | 1822 |
| (...skipping 28 matching lines...) |
| 1805 } | 1851 } |
| 1806 | 1852 |
| 1807 int object_size_in_bytes() { return object_size_in_bytes_; } | 1853 int object_size_in_bytes() { return object_size_in_bytes_; } |
| 1808 | 1854 |
| 1809 // Give a fixed sized block of memory to the space's free list. | 1855 // Give a fixed sized block of memory to the space's free list. |
| 1810 // If add_to_freelist is false then only the accounting stats are updated | 1856 // If add_to_freelist is false then only the accounting stats are updated |
| 1811 // and no attempt to add the area to the free list is made. | 1857 // and no attempt to add the area to the free list is made. |
| 1812 void Free(Address start, bool add_to_freelist) { | 1858 void Free(Address start, bool add_to_freelist) { |
| 1813 if (add_to_freelist) { | 1859 if (add_to_freelist) { |
| 1814 free_list_.Free(start); | 1860 free_list_.Free(start); |
| 1861 } else { |
| 1862 #ifdef DEBUG |
| 1863 MemoryAllocator::ZapBlock(start, object_size_in_bytes_); |
| 1864 #endif |
| 1815 } | 1865 } |
| 1816 accounting_stats_.DeallocateBytes(object_size_in_bytes_); | 1866 accounting_stats_.DeallocateBytes(object_size_in_bytes_); |
| 1817 } | 1867 } |
| 1818 | 1868 |
| 1819 // Prepares for a mark-compact GC. | 1869 // Prepares for a mark-compact GC. |
| 1820 virtual void PrepareForMarkCompact(bool will_compact); | 1870 virtual void PrepareForMarkCompact(bool will_compact); |
| 1821 | 1871 |
| 1822 // Updates the allocation pointer to the relocation top after a mark-compact | 1872 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1823 // collection. | 1873 // collection. |
| 1824 virtual void MCCommitRelocationInfo(); | 1874 virtual void MCCommitRelocationInfo(); |
| 1825 | 1875 |
| 1826 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); | 1876 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); |
| 1827 | 1877 |
| 1828 #ifdef DEBUG | 1878 #ifdef DEBUG |
| 1829 // Reports statistic info of the space | 1879 // Reports statistic info of the space |
| 1830 void ReportStatistics(); | 1880 void ReportStatistics(); |
| 1831 | |
| 1832 // Dump the remembered sets in the space to stdout. | |
| 1833 void PrintRSet(); | |
| 1834 #endif | 1881 #endif |
| 1835 | 1882 |
| 1836 protected: | 1883 protected: |
| 1837 // Virtual function in the superclass. Slow path of AllocateRaw. | 1884 // Virtual function in the superclass. Slow path of AllocateRaw. |
| 1838 HeapObject* SlowAllocateRaw(int size_in_bytes); | 1885 HeapObject* SlowAllocateRaw(int size_in_bytes); |
| 1839 | 1886 |
| 1840 // Virtual function in the superclass. Allocate linearly at the start of | 1887 // Virtual function in the superclass. Allocate linearly at the start of |
| 1841 // the page after current_page (there is assumed to be one). | 1888 // the page after current_page (there is assumed to be one). |
| 1842 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); | 1889 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); |
| 1843 | 1890 |
| (...skipping 48 matching lines...) |
| 1892 return !MapPointersEncodable() && live_maps <= CompactionThreshold(); | 1939 return !MapPointersEncodable() && live_maps <= CompactionThreshold(); |
| 1893 } | 1940 } |
| 1894 | 1941 |
| 1895 Address TopAfterCompaction(int live_maps) { | 1942 Address TopAfterCompaction(int live_maps) { |
| 1896 ASSERT(NeedsCompaction(live_maps)); | 1943 ASSERT(NeedsCompaction(live_maps)); |
| 1897 | 1944 |
| 1898 int pages_left = live_maps / kMapsPerPage; | 1945 int pages_left = live_maps / kMapsPerPage; |
| 1899 PageIterator it(this, PageIterator::ALL_PAGES); | 1946 PageIterator it(this, PageIterator::ALL_PAGES); |
| 1900 while (pages_left-- > 0) { | 1947 while (pages_left-- > 0) { |
| 1901 ASSERT(it.has_next()); | 1948 ASSERT(it.has_next()); |
| 1902 it.next()->ClearRSet(); | 1949 it.next()->SetRegionMarks(Page::kAllRegionsCleanMarks); |
| 1903 } | 1950 } |
| 1904 ASSERT(it.has_next()); | 1951 ASSERT(it.has_next()); |
| 1905 Page* top_page = it.next(); | 1952 Page* top_page = it.next(); |
| 1906 top_page->ClearRSet(); | 1953 top_page->SetRegionMarks(Page::kAllRegionsCleanMarks); |
| 1907 ASSERT(top_page->is_valid()); | 1954 ASSERT(top_page->is_valid()); |
| 1908 | 1955 |
| 1909 int offset = live_maps % kMapsPerPage * Map::kSize; | 1956 int offset = live_maps % kMapsPerPage * Map::kSize; |
| 1910 Address top = top_page->ObjectAreaStart() + offset; | 1957 Address top = top_page->ObjectAreaStart() + offset; |
| 1911 ASSERT(top < top_page->ObjectAreaEnd()); | 1958 ASSERT(top < top_page->ObjectAreaEnd()); |
| 1912 ASSERT(Contains(top)); | 1959 ASSERT(Contains(top)); |
| 1913 | 1960 |
| 1914 return top; | 1961 return top; |
| 1915 } | 1962 } |
| 1916 | 1963 |
| (...skipping 70 matching lines...) |
| 1987 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset). | 2034 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset). |
| 1988 // A large object always starts at Page::kObjectStartOffset within a page. | 2035 // A large object always starts at Page::kObjectStartOffset within a page. |
| 1989 // Large objects do not move during garbage collections. | 2036 // Large objects do not move during garbage collections. |
| 1990 | 2037 |
| 1991 // A LargeObjectChunk holds exactly one large object page with exactly one | 2038 // A LargeObjectChunk holds exactly one large object page with exactly one |
| 1992 // large object. | 2039 // large object. |
| 1993 class LargeObjectChunk { | 2040 class LargeObjectChunk { |
| 1994 public: | 2041 public: |
| 1995 // Allocates a new LargeObjectChunk that contains a large object page | 2042 // Allocates a new LargeObjectChunk that contains a large object page |
| 1996 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large | 2043 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large |
| 1997 // object and possibly extra remembered set words) bytes after the object | 2044 // object) bytes after the object area start of that page. |
| 1998 // area start of that page. The allocated chunk size is set in the output | 2045 // The allocated chunk size is set in the output parameter chunk_size. |
| 1999 // parameter chunk_size. | |
| 2000 static LargeObjectChunk* New(int size_in_bytes, | 2046 static LargeObjectChunk* New(int size_in_bytes, |
| 2001 size_t* chunk_size, | 2047 size_t* chunk_size, |
| 2002 Executability executable); | 2048 Executability executable); |
| 2003 | 2049 |
| 2004 // Interpret a raw address as a large object chunk. | 2050 // Interpret a raw address as a large object chunk. |
| 2005 static LargeObjectChunk* FromAddress(Address address) { | 2051 static LargeObjectChunk* FromAddress(Address address) { |
| 2006 return reinterpret_cast<LargeObjectChunk*>(address); | 2052 return reinterpret_cast<LargeObjectChunk*>(address); |
| 2007 } | 2053 } |
| 2008 | 2054 |
| 2009 // Returns the address of this chunk. | 2055 // Returns the address of this chunk. |
| 2010 Address address() { return reinterpret_cast<Address>(this); } | 2056 Address address() { return reinterpret_cast<Address>(this); } |
| 2011 | 2057 |
| 2012 // Accessors for the fields of the chunk. | 2058 // Accessors for the fields of the chunk. |
| 2013 LargeObjectChunk* next() { return next_; } | 2059 LargeObjectChunk* next() { return next_; } |
| 2014 void set_next(LargeObjectChunk* chunk) { next_ = chunk; } | 2060 void set_next(LargeObjectChunk* chunk) { next_ = chunk; } |
| 2015 | 2061 |
| 2016 size_t size() { return size_; } | 2062 size_t size() { return size_; } |
| 2017 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; } | 2063 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; } |
| 2018 | 2064 |
| 2019 // Returns the object in this chunk. | 2065 // Returns the object in this chunk. |
| 2020 inline HeapObject* GetObject(); | 2066 inline HeapObject* GetObject(); |
| 2021 | 2067 |
| 2022 // Given a requested size (including any extra remembered set words), | 2068 // Given a requested size, returns the physical size of a chunk to be |
| 2023 // returns the physical size of a chunk to be allocated. | 2069 // allocated. |
| 2024 static int ChunkSizeFor(int size_in_bytes); | 2070 static int ChunkSizeFor(int size_in_bytes); |
| 2025 | 2071 |
| 2026 // Given a chunk size, returns the object size it can accommodate (not | 2072 // Given a chunk size, returns the object size it can accommodate. Used by |
| 2027 // including any extra remembered set words). Used by | 2073 // LargeObjectSpace::Available. |
| 2028 // LargeObjectSpace::Available. Note that this can overestimate the size | |
| 2029 // of object that will fit in a chunk---if the object requires extra | |
| 2030 // remembered set words (eg, for large fixed arrays), the actual object | |
| 2031 // size for the chunk will be smaller than reported by this function. | |
| 2032 static int ObjectSizeFor(int chunk_size) { | 2074 static int ObjectSizeFor(int chunk_size) { |
| 2033 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0; | 2075 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0; |
| 2034 return chunk_size - Page::kPageSize - Page::kObjectStartOffset; | 2076 return chunk_size - Page::kPageSize - Page::kObjectStartOffset; |
| 2035 } | 2077 } |
| 2036 | 2078 |
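As a worked example of ObjectSizeFor: with 8K pages,
ObjectSizeFor(3 * Page::kPageSize) returns 24576 - 8192 - Page::kObjectStartOffset
bytes. The page of slack is what allows the object area inside an arbitrarily
aligned chunk to start on a page boundary, and kObjectStartOffset accounts for
the page header in front of the object.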
| 2037 private: | 2079 private: |
| 2038 // A pointer to the next large object chunk in the space or NULL. | 2080 // A pointer to the next large object chunk in the space or NULL. |
| 2039 LargeObjectChunk* next_; | 2081 LargeObjectChunk* next_; |
| 2040 | 2082 |
| 2041 // The size of this chunk. | 2083 // The size of this chunk. |
| (...skipping 15 matching lines...) |
| 2057 // Releases internal resources, frees objects in this space. | 2099 // Releases internal resources, frees objects in this space. |
| 2058 void TearDown(); | 2100 void TearDown(); |
| 2059 | 2101 |
| 2060 // Allocates a (non-FixedArray, non-Code) large object. | 2102 // Allocates a (non-FixedArray, non-Code) large object. |
| 2061 Object* AllocateRaw(int size_in_bytes); | 2103 Object* AllocateRaw(int size_in_bytes); |
| 2062 // Allocates a large Code object. | 2104 // Allocates a large Code object. |
| 2063 Object* AllocateRawCode(int size_in_bytes); | 2105 Object* AllocateRawCode(int size_in_bytes); |
| 2064 // Allocates a large FixedArray. | 2106 // Allocates a large FixedArray. |
| 2065 Object* AllocateRawFixedArray(int size_in_bytes); | 2107 Object* AllocateRawFixedArray(int size_in_bytes); |
| 2066 | 2108 |
| 2067 // Available bytes for objects in this space, not including any extra | 2109 // Available bytes for objects in this space. |
| 2068 // remembered set words. | |
| 2069 int Available() { | 2110 int Available() { |
| 2070 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available()); | 2111 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available()); |
| 2071 } | 2112 } |
| 2072 | 2113 |
| 2073 virtual int Size() { | 2114 virtual int Size() { |
| 2074 return size_; | 2115 return size_; |
| 2075 } | 2116 } |
| 2076 | 2117 |
| 2077 int PageCount() { | 2118 int PageCount() { |
| 2078 return page_count_; | 2119 return page_count_; |
| 2079 } | 2120 } |
| 2080 | 2121 |
| 2081 // Finds an object for a given address; returns Failure::Exception() | 2122 // Finds an object for a given address; returns Failure::Exception() |
| 2082 // if it is not found. The function iterates through all objects in this | 2123 // if it is not found. The function iterates through all objects in this |
| 2083 // space, so it may be slow. | 2124 // space, so it may be slow. |
| 2084 Object* FindObject(Address a); | 2125 Object* FindObject(Address a); |
| 2085 | 2126 |
| 2086 // Clears remembered sets. | 2127 // Iterates objects covered by dirty regions. |
| 2087 void ClearRSet(); | 2128 void IterateDirtyRegions(ObjectSlotCallback func); |
| 2088 | |
| 2089 // Iterates objects whose remembered set bits are set. | |
| 2090 void IterateRSet(ObjectSlotCallback func); | |
| 2091 | 2129 |
| 2092 // Frees unmarked objects. | 2130 // Frees unmarked objects. |
| 2093 void FreeUnmarkedObjects(); | 2131 void FreeUnmarkedObjects(); |
| 2094 | 2132 |
| 2095 // Checks whether a heap object is in this space; O(1). | 2133 // Checks whether a heap object is in this space; O(1). |
| 2096 bool Contains(HeapObject* obj); | 2134 bool Contains(HeapObject* obj); |
| 2097 | 2135 |
| 2098 // Checks whether the space is empty. | 2136 // Checks whether the space is empty. |
| 2099 bool IsEmpty() { return first_chunk_ == NULL; } | 2137 bool IsEmpty() { return first_chunk_ == NULL; } |
| 2100 | 2138 |
| 2101 // See the comments for ReserveSpace in the Space class. This has to be | 2139 // See the comments for ReserveSpace in the Space class. This has to be |
| 2102 // called after ReserveSpace has been called on the paged spaces, since they | 2140 // called after ReserveSpace has been called on the paged spaces, since they |
| 2103 // may use some memory, leaving less for large objects. | 2141 // may use some memory, leaving less for large objects. |
| 2104 virtual bool ReserveSpace(int bytes); | 2142 virtual bool ReserveSpace(int bytes); |
| 2105 | 2143 |
| 2106 #ifdef ENABLE_HEAP_PROTECTION | 2144 #ifdef ENABLE_HEAP_PROTECTION |
| 2107 // Protect/unprotect the space by marking it read-only/writable. | 2145 // Protect/unprotect the space by marking it read-only/writable. |
| 2108 void Protect(); | 2146 void Protect(); |
| 2109 void Unprotect(); | 2147 void Unprotect(); |
| 2110 #endif | 2148 #endif |
| 2111 | 2149 |
| 2112 #ifdef DEBUG | 2150 #ifdef DEBUG |
| 2113 virtual void Verify(); | 2151 virtual void Verify(); |
| 2114 virtual void Print(); | 2152 virtual void Print(); |
| 2115 void ReportStatistics(); | 2153 void ReportStatistics(); |
| 2116 void CollectCodeStatistics(); | 2154 void CollectCodeStatistics(); |
| 2117 // Dump the remembered sets in the space to stdout. | |
| 2118 void PrintRSet(); | |
| 2119 #endif | 2155 #endif |
| 2120 // Checks whether an address is in the object area in this space. It | 2156 // Checks whether an address is in the object area in this space. It |
| 2121 // iterates all objects in the space. May be slow. | 2157 // iterates all objects in the space. May be slow. |
| 2122 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); } | 2158 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); } |
| 2123 | 2159 |
| 2124 private: | 2160 private: |
| 2125 // The head of the linked list of large object chunks. | 2161 // The head of the linked list of large object chunks. |
| 2126 LargeObjectChunk* first_chunk_; | 2162 LargeObjectChunk* first_chunk_; |
| 2127 int size_; // allocated bytes | 2163 int size_; // allocated bytes |
| 2128 int page_count_; // number of chunks | 2164 int page_count_; // number of chunks |
| 2129 | 2165 |
| 2130 | 2166 |
| 2131 // Shared implementation of AllocateRaw, AllocateRawCode and | 2167 // Shared implementation of AllocateRaw, AllocateRawCode and |
| 2132 // AllocateRawFixedArray. | 2168 // AllocateRawFixedArray. |
| 2133 Object* AllocateRawInternal(int requested_size, | 2169 Object* AllocateRawInternal(int requested_size, |
| 2134 int object_size, | 2170 int object_size, |
| 2135 Executability executable); | 2171 Executability executable); |
| 2136 | 2172 |
| 2137 // Returns the number of extra bytes (rounded up to the nearest full word) | |
| 2138 // required for extra_object_bytes of extra pointers (in bytes). | |
| 2139 static inline int ExtraRSetBytesFor(int extra_object_bytes); | |
| 2140 | |
| 2141 friend class LargeObjectIterator; | 2173 friend class LargeObjectIterator; |
| 2142 | 2174 |
| 2143 public: | 2175 public: |
| 2144 TRACK_MEMORY("LargeObjectSpace") | 2176 TRACK_MEMORY("LargeObjectSpace") |
| 2145 }; | 2177 }; |
| 2146 | 2178 |
| 2147 | 2179 |
| 2148 class LargeObjectIterator: public ObjectIterator { | 2180 class LargeObjectIterator: public ObjectIterator { |
| 2149 public: | 2181 public: |
| 2150 explicit LargeObjectIterator(LargeObjectSpace* space); | 2182 explicit LargeObjectIterator(LargeObjectSpace* space); |
| 2151 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); | 2183 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); |
| 2152 | 2184 |
| 2153 HeapObject* next(); | 2185 HeapObject* next(); |
| 2154 | 2186 |
| 2155 // implementation of ObjectIterator. | 2187 // implementation of ObjectIterator. |
| 2156 virtual HeapObject* next_object() { return next(); } | 2188 virtual HeapObject* next_object() { return next(); } |
| 2157 | 2189 |
| 2158 private: | 2190 private: |
| 2159 LargeObjectChunk* current_; | 2191 LargeObjectChunk* current_; |
| 2160 HeapObjectCallback size_func_; | 2192 HeapObjectCallback size_func_; |
| 2161 }; | 2193 }; |
| 2162 | 2194 |
| 2163 | 2195 |
| 2164 } } // namespace v8::internal | 2196 } } // namespace v8::internal |
| 2165 | 2197 |
| 2166 #endif // V8_SPACES_H_ | 2198 #endif // V8_SPACES_H_ |