| OLD | NEW |
| 1 // Copyright 2006-2008 the V8 project authors. All rights reserved. | 1 // Copyright 2006-2008 the V8 project authors. All rights reserved. |
| 2 // Redistribution and use in source and binary forms, with or without | 2 // Redistribution and use in source and binary forms, with or without |
| 3 // modification, are permitted provided that the following conditions are | 3 // modification, are permitted provided that the following conditions are |
| 4 // met: | 4 // met: |
| 5 // | 5 // |
| 6 // * Redistributions of source code must retain the above copyright | 6 // * Redistributions of source code must retain the above copyright |
| 7 // notice, this list of conditions and the following disclaimer. | 7 // notice, this list of conditions and the following disclaimer. |
| 8 // * Redistributions in binary form must reproduce the above | 8 // * Redistributions in binary form must reproduce the above |
| 9 // copyright notice, this list of conditions and the following | 9 // copyright notice, this list of conditions and the following |
| 10 // disclaimer in the documentation and/or other materials provided | 10 // disclaimer in the documentation and/or other materials provided |
| (...skipping 27 matching lines...) |
| 38 // Heap structures: | 38 // Heap structures: |
| 39 // | 39 // |
| 40 // A JS heap consists of a young generation, an old generation, and a large | 40 // A JS heap consists of a young generation, an old generation, and a large |
| 41 // object space. The young generation is divided into two semispaces. A | 41 // object space. The young generation is divided into two semispaces. A |
| 42 // scavenger implements Cheney's copying algorithm. The old generation is | 42 // scavenger implements Cheney's copying algorithm. The old generation is |
| 43 // separated into a map space and an old object space. The map space contains | 43 // separated into a map space and an old object space. The map space contains |
| 44 // all (and only) map objects, the rest of old objects go into the old space. | 44 // all (and only) map objects, the rest of old objects go into the old space. |
| 45 // The old generation is collected by a mark-sweep-compact collector. | 45 // The old generation is collected by a mark-sweep-compact collector. |
| 46 // | 46 // |
| 47 // The semispaces of the young generation are contiguous. The old and map | 47 // The semispaces of the young generation are contiguous. The old and map |
| 48 // spaces consist of a list of pages. A page has a page header and an object | 48 // spaces consist of a list of pages. A page has a page header, a remembered |
| 49 // area. The page size is deliberately chosen to be 8K bytes. | 49 // set area, and an object area. The page size is deliberately chosen to be |
| 50 // The first word of a page is an opaque page header that has the | 50 // 8K bytes. The first word of a page is an opaque page header that has the |
| 51 // address of the next page and its ownership information. The second word may | 51 // address of the next page and its ownership information. The second word may |
| 52 // have the allocation top address of this page. Heap objects are aligned to the | 52 // have the allocation top address of this page. The next 248 bytes are |
| 53 // pointer size. | 53 // remembered sets. Heap objects are aligned to the pointer size (4 bytes). A |
| 54 // remembered set bit corresponds to a pointer in the object area. |
| 54 // | 55 // |
| 55 // There is a separate large object space for objects larger than | 56 // There is a separate large object space for objects larger than |
| 56 // Page::kMaxHeapObjectSize, so that they do not have to move during | 57 // Page::kMaxHeapObjectSize, so that they do not have to move during |
| 57 // collection. The large object space is paged. Pages in large object space | 58 // collection. The large object space is paged and uses the same remembered |
| 58 // may be larger than 8K. | 59 // set implementation. Pages in large object space may be larger than 8K. |
| 59 // | 60 // |
| 60 // A card marking write barrier is used to keep track of intergenerational | 61 // NOTE: The mark-compact collector rebuilds the remembered set after a |
| 61 // references. Old space pages are divided into regions of Page::kRegionSize | 62 // collection. It first reuses a few words of the remembered set for |
| 62 // size. Each region has a corresponding dirty bit in the page header which is | 63 // bookkeeping relocation information. |
| 63 // set if the region might contain pointers to new space. For details about | 64 |
| 64 // dirty bits encoding see comments in the Page::GetRegionNumberForAddress() | |
| 65 // method body. | |
| 66 // | |
| 67 // During scavenges and mark-sweep collections we iterate intergenerational | |
| 68 // pointers without decoding heap object maps, so if the page belongs to old | |
| 69 // pointer space or large object space it is essential to guarantee that | |
| 70 // the page does not contain any garbage pointers to new space: every pointer | |
| 71 // aligned word which satisfies the Heap::InNewSpace() predicate must be a | |
| 72 // pointer to a live heap object in new space. Thus objects in old pointer | |
| 73 // and large object spaces should have a special layout (e.g. no bare integer | |
| 74 // fields). This requirement does not apply to map space which is iterated in | |
| 75 // a special fashion. However, we still require pointer fields of dead maps | |
| 76 // to be cleaned. | |
| 77 // | |
| 78 // To enable lazy cleaning of old space pages we use the notion of an | |
| 79 // allocation watermark. Every pointer under the watermark is considered to | |
| 80 // be well formed. The page allocation watermark is not necessarily equal to | |
| 81 // the page allocation top, but all live objects on the page must reside | |
| 82 // below the allocation watermark. During scavenge the watermark might be | |
| 83 // bumped and invalid pointers might appear below it. To avoid following | |
| 84 // them we store a valid watermark into a special field in the page header | |
| 85 // and set the page's WATERMARK_INVALIDATED flag. For details see comments | |
| 86 // in the Page::SetAllocationWatermark() method body. | |
| 87 // | |
| 88 | 65 |
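
To make the remembered set scheme above concrete, here is a minimal sketch of the store barrier it implies. It is illustrative only: it assumes the Page::SetRSet helper declared further down in this file and the Heap::InNewSpace predicate from heap.h; the real V8 write barrier differs in detail.

    // Hedged sketch: record an intergenerational pointer store. Only a
    // new-space pointer written into an object outside new space needs a
    // remembered set bit; the scavenger later uses the bit to find the slot.
    static void RecordWriteSketch(Address object_start, int field_offset,
                                  Object* new_value) {
      if (Heap::InNewSpace(new_value) &&
          !Heap::InNewSpace(HeapObject::FromAddress(object_start))) {
        Page::SetRSet(object_start, field_offset);  // mark the written slot
      }
    }
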
| 89 // Some assertion macros used in the debugging mode. | 66 // Some assertion macros used in the debugging mode. |
| 90 | 67 |
| 91 #define ASSERT_PAGE_ALIGNED(address) \ | 68 #define ASSERT_PAGE_ALIGNED(address) \ |
| 92 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) | 69 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) |
| 93 | 70 |
| 94 #define ASSERT_OBJECT_ALIGNED(address) \ | 71 #define ASSERT_OBJECT_ALIGNED(address) \ |
| 95 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) | 72 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) |
| 96 | 73 |
| 97 #define ASSERT_MAP_ALIGNED(address) \ | 74 #define ASSERT_MAP_ALIGNED(address) \ |
| 98 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0) | 75 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0) |
| 99 | 76 |
| 100 #define ASSERT_OBJECT_SIZE(size) \ | 77 #define ASSERT_OBJECT_SIZE(size) \ |
| 101 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize)) | 78 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize)) |
| 102 | 79 |
| 103 #define ASSERT_PAGE_OFFSET(offset) \ | 80 #define ASSERT_PAGE_OFFSET(offset) \ |
| 104 ASSERT((Page::kObjectStartOffset <= offset) \ | 81 ASSERT((Page::kObjectStartOffset <= offset) \ |
| 105 && (offset <= Page::kPageSize)) | 82 && (offset <= Page::kPageSize)) |
| 106 | 83 |
| 107 #define ASSERT_MAP_PAGE_INDEX(index) \ | 84 #define ASSERT_MAP_PAGE_INDEX(index) \ |
| 108 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) | 85 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) |
| 109 | 86 |
| 110 | 87 |
| 111 class PagedSpace; | 88 class PagedSpace; |
| 112 class MemoryAllocator; | 89 class MemoryAllocator; |
| 113 class AllocationInfo; | 90 class AllocationInfo; |
| 114 | 91 |
| 115 // ----------------------------------------------------------------------------- | 92 // ----------------------------------------------------------------------------- |
| 116 // A page normally has 8K bytes. Large object pages may be larger. A page | 93 // A page normally has 8K bytes. Large object pages may be larger. A page |
| 117 // address is always aligned to the 8K page size. | 94 // address is always aligned to the 8K page size. A page is divided into |
| 95 // three areas: the first two words are used for bookkeeping, the next 248 |
| 96 // bytes are used as remembered set, and the rest of the page is the object |
| 97 // area. |
| 118 // | 98 // |
| 119 // Each page starts with a header of size Page::kPageHeaderSize which contains | 99 // Pointers are aligned to the pointer size (4 bytes), so only 1 bit is needed |
| 120 // bookkeeping data. | 100 // per pointer in the remembered set. Given an address, its remembered set |
| 101 // bit position (offset from the start of the page) is calculated by dividing |
| 102 // its page offset by 32. Therefore, the object area in a page starts at the |
| 103 // 256th byte (8K/32). Bytes 0 to 255 do not need the remembered set, so that |
| 104 // the first two words (64 bits) in a page can be used for other purposes. |
| 105 // |
| 106 // On the 64-bit platform, pointers are aligned to the 8-byte pointer size, |
| 107 // so we need only 128 bytes for the RSet. The RSet bits that cover the RSet |
| 108 // itself then free up only two bytes, which is not enough room for the page |
| 109 // header, so we add kRSetOffset to make room for the Page data at the start. |
| 121 // | 110 // |
| 122 // The mark-compact collector transforms a map pointer into a page index and a | 111 // The mark-compact collector transforms a map pointer into a page index and a |
| 123 // page offset. The exact encoding is described in the comments for | 112 // page offset. The exact encoding is described in the comments for |
| 124 // class MapWord in objects.h. | 113 // class MapWord in objects.h. |
| 125 // | 114 // |
| 126 // The only way to get a page pointer is by calling factory methods: | 115 // The only way to get a page pointer is by calling factory methods: |
| 127 // Page* p = Page::FromAddress(addr); or | 116 // Page* p = Page::FromAddress(addr); or |
| 128 // Page* p = Page::FromAllocationTop(top); | 117 // Page* p = Page::FromAllocationTop(top); |
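
As an illustration of the bit arithmetic described above, the remembered set byte and bit for a pointer slot can be computed as in the following hypothetical helper. The real computation is Page::ComputeRSetBitPosition, declared below; this sketch assumes the 32-bit layout (kRSetOffset == 0), the Page helpers declared in the class, and the kBitsPerByte constant from globals.h.

    // Hedged sketch: one bit per 4-byte slot, so a slot's bit lives
    // (page_offset / 32) bytes from the page start, at bit
    // (page_offset / 4) % 8 within that byte.
    static Address RSetBytePositionSketch(Address slot, uint32_t* bitmask) {
      Page* page = Page::FromAddress(slot);
      int offset = page->Offset(slot);              // offset within the page
      int bit_index = offset / kPointerSize;        // one bit per pointer slot
      *bitmask = 1u << (bit_index % kBitsPerByte);  // bit within the RSet byte
      return page->address() + bit_index / kBitsPerByte;  // == offset / 32
    }
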
| 129 class Page { | 118 class Page { |
| 130 public: | 119 public: |
| 131 // Returns the page containing a given address. The address ranges | 120 // Returns the page containing a given address. The address ranges |
| 132 // from [page_addr .. page_addr + kPageSize[ | 121 // from [page_addr .. page_addr + kPageSize[ |
| 133 // | 122 // |
| (...skipping 20 matching lines...) |
| 154 | 143 |
| 155 // Checks whether this is a valid page address. | 144 // Checks whether this is a valid page address. |
| 156 bool is_valid() { return address() != NULL; } | 145 bool is_valid() { return address() != NULL; } |
| 157 | 146 |
| 158 // Returns the next page of this page. | 147 // Returns the next page of this page. |
| 159 inline Page* next_page(); | 148 inline Page* next_page(); |
| 160 | 149 |
| 161 // Return the end of allocation in this page. Undefined for unused pages. | 150 // Return the end of allocation in this page. Undefined for unused pages. |
| 162 inline Address AllocationTop(); | 151 inline Address AllocationTop(); |
| 163 | 152 |
| 164 // Return the allocation watermark for the page. | |
| 165 // For old space pages it is guaranteed that the area under the watermark | |
| 166 // does not contain any garbage pointers to new space. | |
| 167 inline Address AllocationWatermark(); | |
| 168 | |
| 169 // Return the allocation watermark offset from the beginning of the page. | |
| 170 inline uint32_t AllocationWatermarkOffset(); | |
| 171 | |
| 172 inline void SetAllocationWatermark(Address allocation_watermark); | |
| 173 | |
| 174 inline void SetCachedAllocationWatermark(Address allocation_watermark); | |
| 175 inline Address CachedAllocationWatermark(); | |
| 176 | |
| 177 // Returns the start address of the object area in this page. | 153 // Returns the start address of the object area in this page. |
| 178 Address ObjectAreaStart() { return address() + kObjectStartOffset; } | 154 Address ObjectAreaStart() { return address() + kObjectStartOffset; } |
| 179 | 155 |
| 180 // Returns the end address (exclusive) of the object area in this page. | 156 // Returns the end address (exclusive) of the object area in this page. |
| 181 Address ObjectAreaEnd() { return address() + Page::kPageSize; } | 157 Address ObjectAreaEnd() { return address() + Page::kPageSize; } |
| 182 | 158 |
| 159 // Returns the start address of the remembered set area. |
| 160 Address RSetStart() { return address() + kRSetStartOffset; } |
| 161 |
| 162 // Returns the end address of the remembered set area (exclusive). |
| 163 Address RSetEnd() { return address() + kRSetEndOffset; } |
| 164 |
| 183 // Checks whether an address is page aligned. | 165 // Checks whether an address is page aligned. |
| 184 static bool IsAlignedToPageSize(Address a) { | 166 static bool IsAlignedToPageSize(Address a) { |
| 185 return 0 == (OffsetFrom(a) & kPageAlignmentMask); | 167 return 0 == (OffsetFrom(a) & kPageAlignmentMask); |
| 186 } | 168 } |
| 187 | 169 |
| 188 // True if this page was in use before current compaction started. | 170 // True if this page was in use before current compaction started. |
| 189 // Result is valid only for pages owned by paged spaces and | 171 // Result is valid only for pages owned by paged spaces and |
| 190 // only after PagedSpace::PrepareForMarkCompact was called. | 172 // only after PagedSpace::PrepareForMarkCompact was called. |
| 191 inline bool WasInUseBeforeMC(); | 173 inline bool WasInUseBeforeMC(); |
| 192 | 174 |
| (...skipping 11 matching lines...) |
| 204 return offset; | 186 return offset; |
| 205 } | 187 } |
| 206 | 188 |
| 207 // Returns the address for a given offset into this page. | 189 // Returns the address for a given offset into this page. |
| 208 Address OffsetToAddress(int offset) { | 190 Address OffsetToAddress(int offset) { |
| 209 ASSERT_PAGE_OFFSET(offset); | 191 ASSERT_PAGE_OFFSET(offset); |
| 210 return address() + offset; | 192 return address() + offset; |
| 211 } | 193 } |
| 212 | 194 |
| 213 // --------------------------------------------------------------------- | 195 // --------------------------------------------------------------------- |
| 214 // Card marking support | 196 // Remembered set support |
| 215 | 197 |
| 216 static const uint32_t kAllRegionsCleanMarks = 0x0; | 198 // Clears remembered set in this page. |
| 199 inline void ClearRSet(); |
| 217 | 200 |
| 218 inline uint32_t GetRegionMarks(); | 201 // Return the address of the remembered set word corresponding to an |
| 219 inline void SetRegionMarks(uint32_t dirty); | 202 // object address/offset pair, and the bit encoded as a single-bit |
| 203 // mask in the output parameter 'bitmask'. |
| 204 INLINE(static Address ComputeRSetBitPosition(Address address, int offset, |
| 205 uint32_t* bitmask)); |
| 220 | 206 |
| 221 inline uint32_t GetRegionMaskForAddress(Address addr); | 207 // Sets the corresponding remembered set bit for a given address. |
| 222 inline int GetRegionNumberForAddress(Address addr); | 208 INLINE(static void SetRSet(Address address, int offset)); |
| 223 | 209 |
| 224 inline void MarkRegionDirty(Address addr); | 210 // Clears the corresponding remembered set bit for a given address. |
| 225 inline bool IsRegionDirty(Address addr); | 211 static inline void UnsetRSet(Address address, int offset); |
| 226 | 212 |
| 227 inline void ClearRegionMarks(Address start, | 213 // Checks whether the remembered set bit for a given address is set. |
| 228 Address end, | 214 static inline bool IsRSetSet(Address address, int offset); |
| 229 bool reaches_limit); | 215 |
| 216 #ifdef DEBUG |
| 217 // Use a state to mark whether remembered set space can be used for other |
| 218 // purposes. |
| 219 enum RSetState { IN_USE, NOT_IN_USE }; |
| 220 static bool is_rset_in_use() { return rset_state_ == IN_USE; } |
| 221 static void set_rset_state(RSetState state) { rset_state_ = state; } |
| 222 #endif |
| 230 | 223 |
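
As a usage illustration, a scavenge could consume these bits for one page roughly as in the sketch below. It is simplified: the real iteration code lives in the heap implementation, scans whole remembered set words at a time, and also covers the large object space.

    // Hedged sketch: visit every pointer-aligned slot in the page's object
    // area whose remembered set bit is set, letting the callback update it.
    static void VisitRSetSlotsSketch(Page* page,
                                     void (*visit)(HeapObject** slot)) {
      Address end = page->AllocationTop();
      for (Address slot = page->ObjectAreaStart(); slot < end;
           slot += kPointerSize) {
        if (Page::IsRSetSet(slot, 0)) {
          Page::UnsetRSet(slot, 0);                     // clear the old bit
          visit(reinterpret_cast<HeapObject**>(slot));  // may re-record it
        }
      }
    }
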
| 231 // Page size in bytes. This must be a multiple of the OS page size. | 224 // Page size in bytes. This must be a multiple of the OS page size. |
| 232 static const int kPageSize = 1 << kPageSizeBits; | 225 static const int kPageSize = 1 << kPageSizeBits; |
| 233 | 226 |
| 234 // Page size mask. | 227 // Page size mask. |
| 235 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1; | 228 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1; |
| 236 | 229 |
| 237 static const int kPageHeaderSize = kPointerSize + kPointerSize + kIntSize + | 230 // The offset of the remembered set in a page, beyond the bytes left free by |
| 238 kIntSize + kPointerSize; | 231 // the remembered set bits that cover the remembered set itself. |
| 232 #ifdef V8_TARGET_ARCH_X64 |
| 233 static const int kRSetOffset = 4 * kPointerSize; // Room for four pointers. |
| 234 #else |
| 235 static const int kRSetOffset = 0; |
| 236 #endif |
| 237 // The end offset of the remembered set in a page |
| 238 // (heap objects are aligned to the pointer size). |
| 239 static const int kRSetEndOffset = kRSetOffset + kPageSize / kBitsPerPointer; |
| 239 | 240 |
| 240 // The start offset of the object area in a page. | 241 // The start offset of the object area in a page. |
| 241 static const int kObjectStartOffset = MAP_POINTER_ALIGN(kPageHeaderSize); | 242 // This needs to be at least (bits per uint32_t) * kBitsPerPointer, |
| 243 // to align start of rset to a uint32_t address. |
| 244 static const int kObjectStartOffset = 256; |
| 245 |
| 246 // The start offset of the used part of the remembered set in a page. |
| 247 static const int kRSetStartOffset = kRSetOffset + |
| 248 kObjectStartOffset / kBitsPerPointer; |
| 242 | 249 |
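
On 32-bit targets the constants above tie together exactly as the class comment describes. The following hypothetical checks (not part of the original header) spell out the arithmetic:

    // Hedged sketch: with kRSetOffset == 0, one bit per 4-byte pointer
    // covers the 8K page in 8192 / 32 == 256 bytes, and the object area
    // starts exactly where that coverage ends.
    #ifndef V8_TARGET_ARCH_X64
    STATIC_CHECK(kRSetEndOffset == kPageSize / kBitsPerPointer);
    STATIC_CHECK(kObjectStartOffset == kPageSize / kBitsPerPointer);
    STATIC_CHECK(kRSetStartOffset == kObjectStartOffset / kBitsPerPointer);
    #endif
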
| 243 // Object area size in bytes. | 250 // Object area size in bytes. |
| 244 static const int kObjectAreaSize = kPageSize - kObjectStartOffset; | 251 static const int kObjectAreaSize = kPageSize - kObjectStartOffset; |
| 245 | 252 |
| 246 // Maximum object size that fits in a page. | 253 // Maximum object size that fits in a page. |
| 247 static const int kMaxHeapObjectSize = kObjectAreaSize; | 254 static const int kMaxHeapObjectSize = kObjectAreaSize; |
| 248 | 255 |
| 249 static const int kDirtyFlagOffset = 2 * kPointerSize; | |
| 250 static const int kRegionSizeLog2 = 8; | |
| 251 static const int kRegionSize = 1 << kRegionSizeLog2; | |
| 252 static const intptr_t kRegionAlignmentMask = (kRegionSize - 1); | |
| 253 | |
| 254 STATIC_CHECK(kRegionSize == kPageSize / kBitsPerInt); | |
| 255 | |
| 256 enum PageFlag { | 256 enum PageFlag { |
| 257 IS_NORMAL_PAGE = 1 << 0, | 257 IS_NORMAL_PAGE = 1 << 0, |
| 258 WAS_IN_USE_BEFORE_MC = 1 << 1, | 258 WAS_IN_USE_BEFORE_MC = 1 << 1 |
| 259 | |
| 260 // The page allocation watermark was bumped by preallocation during scavenge. | |
| 261 // The correct watermark can be retrieved via CachedAllocationWatermark(). | |
| 262 WATERMARK_INVALIDATED = 1 << 2 | |
| 263 }; | 259 }; |
| 264 | 260 |
| 265 // To avoid an additional WATERMARK_INVALIDATED flag clearing pass during | |
| 266 // scavenge we just invalidate the watermark on each old space page after | |
| 267 // processing it. Then we flip the meaning of the WATERMARK_INVALIDATED flag | |
| 268 // at the beginning of the next scavenge, so that each page becomes marked as | |
| 269 // having a valid watermark. | |
| 270 // | |
| 271 // The following invariant must hold for pages in old pointer and map spaces: | |
| 272 // if a page is in use then it is marked as having an invalid watermark at | |
| 273 // the beginning and at the end of any GC. | |
| 274 // | |
| 275 // This invariant guarantees that after flipping the flag's meaning at the | |
| 276 // beginning of a scavenge, all pages in use will be marked as having a | |
| 277 // valid watermark. | |
| 278 static inline void FlipMeaningOfInvalidatedWatermarkFlag(); | |
| 279 | |
| 280 // Returns true if the page allocation watermark was not altered during | |
| 281 // scavenge. | |
| 282 inline bool IsWatermarkValid(); | |
| 283 | |
| 284 inline void InvalidateWatermark(bool value); | |
| 285 | |
| 286 inline bool GetPageFlag(PageFlag flag); | 261 inline bool GetPageFlag(PageFlag flag); |
| 287 inline void SetPageFlag(PageFlag flag, bool value); | 262 inline void SetPageFlag(PageFlag flag, bool value); |
| 288 inline void ClearPageFlags(); | |
| 289 | |
| 290 static const int kAllocationWatermarkOffsetShift = 3; | |
| 291 static const int kAllocationWatermarkOffsetBits = kPageSizeBits + 1; | |
| 292 static const uint32_t kAllocationWatermarkOffsetMask = | |
| 293 ((1 << kAllocationWatermarkOffsetBits) - 1) << | |
| 294 kAllocationWatermarkOffsetShift; | |
| 295 | |
| 296 static const uint32_t kFlagsMask = | |
| 297 ((1 << kAllocationWatermarkOffsetShift) - 1); | |
| 298 | |
| 299 STATIC_CHECK(kBitsPerInt - kAllocationWatermarkOffsetShift >= | |
| 300 kAllocationWatermarkOffsetBits); | |
| 301 | |
| 302 // This field contains the meaning of the WATERMARK_INVALIDATED flag. | |
| 303 // Instead of clearing this flag from all pages we just flip | |
| 304 // its meaning at the beginning of a scavenge. | |
| 305 static intptr_t watermark_invalidated_mark_; | |
| 306 | 263 |
| 307 //--------------------------------------------------------------------------- | 264 //--------------------------------------------------------------------------- |
| 308 // Page header description. | 265 // Page header description. |
| 309 // | 266 // |
| 310 // If a page is not in the large object space, the first word, | 267 // If a page is not in the large object space, the first word, |
| 311 // opaque_header, encodes the next page address (aligned to kPageSize 8K) | 268 // opaque_header, encodes the next page address (aligned to kPageSize 8K) |
| 312 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use | 269 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use |
| 313 // opaque_header. The value range of the opaque_header is [0..kPageSize[, | 270 // opaque_header. The value range of the opaque_header is [0..kPageSize[, |
| 314 // or [next_page_start, next_page_end[. It cannot point to a valid address | 271 // or [next_page_start, next_page_end[. It cannot point to a valid address |
| 315 // in the current page. If a page is in the large object space, the first | 272 // in the current page. If a page is in the large object space, the first |
| 316 // word *may* (if the page start and large object chunk start are the | 273 // word *may* (if the page start and large object chunk start are the |
| 317 // same) contain the address of the next large object chunk. | 274 // same) contain the address of the next large object chunk. |
| 318 intptr_t opaque_header; | 275 intptr_t opaque_header; |
| 319 | 276 |
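
Given the encoding just described, the two components of opaque_header can be unpacked by simple masking; a hypothetical illustration (MemoryAllocator owns the real accessors):

    // Hedged sketch: the kPageSize-aligned high bits give the next page's
    // address; the low bits (always < kPageSize) give the chunk number.
    static Address NextPageAddressSketch(intptr_t opaque_header) {
      return reinterpret_cast<Address>(opaque_header & ~kPageAlignmentMask);
    }
    static int ChunkNumberSketch(intptr_t opaque_header) {
      return static_cast<int>(opaque_header & kPageAlignmentMask);
    }
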
| 320 // If the page is not in the large object space, the low-order bit of the | 277 // If the page is not in the large object space, the low-order bit of the |
| 321 // second word is set. If the page is in the large object space, the | 278 // second word is set. If the page is in the large object space, the |
| 322 // second word *may* (if the page start and large object chunk start are | 279 // second word *may* (if the page start and large object chunk start are |
| 323 // the same) contain the large object chunk size. In either case, the | 280 // the same) contain the large object chunk size. In either case, the |
| 324 // low-order bit for large object pages will be cleared. | 281 // low-order bit for large object pages will be cleared. |
| 325 // For normal pages this word is used to store page flags and the | 282 // For normal pages this word is used to store various page flags. |
| 326 // offset of the allocation top. | 283 int flags; |
| 327 intptr_t flags_; | |
| 328 | 284 |
| 329 // This field contains dirty marks for regions covering the page. Only dirty | 285 // The following fields may overlap with the remembered set; they can only |
| 330 // regions might contain intergenerational references. | 286 // be used in the mark-compact collector, when the remembered set is not in |
| 331 // Only 32 dirty marks are supported, so for large object pages several | 287 // use. |
| 332 // regions might be mapped to a single dirty mark. |
| 333 uint32_t dirty_regions_; | |
| 334 | 288 |
| 335 // The index of the page in its owner space. | 289 // The index of the page in its owner space. |
| 336 int mc_page_index; | 290 int mc_page_index; |
| 337 | 291 |
| 338 // During mark-compact collections this field contains the forwarding address | 292 // The allocation pointer after relocating objects to this page. |
| 339 // of the first live object in this page. | 293 Address mc_relocation_top; |
| 340 // During scavenge collection this field is used to store the allocation | 294 |
| 341 // watermark if it is altered during scavenge. | 295 // The forwarding address of the first live object in this page. |
| 342 Address mc_first_forwarded; | 296 Address mc_first_forwarded; |
| 297 |
| 298 #ifdef DEBUG |
| 299 private: |
| 300 static RSetState rset_state_; // state of the remembered set |
| 301 #endif |
| 343 }; | 302 }; |
| 344 | 303 |
| 345 | 304 |
| 346 // ---------------------------------------------------------------------------- | 305 // ---------------------------------------------------------------------------- |
| 347 // Space is the abstract superclass for all allocation spaces. | 306 // Space is the abstract superclass for all allocation spaces. |
| 348 class Space : public Malloced { | 307 class Space : public Malloced { |
| 349 public: | 308 public: |
| 350 Space(AllocationSpace id, Executability executable) | 309 Space(AllocationSpace id, Executability executable) |
| 351 : id_(id), executable_(executable) {} | 310 : id_(id), executable_(executable) {} |
| 352 | 311 |
| (...skipping 602 matching lines...) |
| 955 | 914 |
| 956 // Given an address occupied by a live object, return that object if it is | 915 // Given an address occupied by a live object, return that object if it is |
| 957 // in this space, or Failure::Exception() if it is not. The implementation | 916 // in this space, or Failure::Exception() if it is not. The implementation |
| 958 // iterates over objects in the page containing the address, the cost is | 917 // iterates over objects in the page containing the address, the cost is |
| 959 // linear in the number of objects in the page. It may be slow. | 918 // linear in the number of objects in the page. It may be slow. |
| 960 Object* FindObject(Address addr); | 919 Object* FindObject(Address addr); |
| 961 | 920 |
| 962 // Checks whether page is currently in use by this space. | 921 // Checks whether page is currently in use by this space. |
| 963 bool IsUsed(Page* page); | 922 bool IsUsed(Page* page); |
| 964 | 923 |
| 965 void MarkAllPagesClean(); | 924 // Clears remembered sets of pages in this space. |
| 925 void ClearRSet(); |
| 966 | 926 |
| 967 // Prepares for a mark-compact GC. | 927 // Prepares for a mark-compact GC. |
| 968 virtual void PrepareForMarkCompact(bool will_compact); | 928 virtual void PrepareForMarkCompact(bool will_compact); |
| 969 | 929 |
| 970 // The top of allocation in a page in this space. Undefined if page is unused. | 930 // The top of allocation in a page in this space. Undefined if page is unused. |
| 971 Address PageAllocationTop(Page* page) { | 931 Address PageAllocationTop(Page* page) { |
| 972 return page == TopPageOf(allocation_info_) ? top() | 932 return page == TopPageOf(allocation_info_) ? top() |
| 973 : PageAllocationLimit(page); | 933 : PageAllocationLimit(page); |
| 974 } | 934 } |
| 975 | 935 |
| 976 // The limit of allocation for a page in this space. | 936 // The limit of allocation for a page in this space. |
| 977 virtual Address PageAllocationLimit(Page* page) = 0; | 937 virtual Address PageAllocationLimit(Page* page) = 0; |
| 978 | 938 |
| 979 void FlushTopPageWatermark() { | |
| 980 AllocationTopPage()->SetCachedAllocationWatermark(top()); | |
| 981 AllocationTopPage()->InvalidateWatermark(true); | |
| 982 } | |
| 983 | |
| 984 // Current capacity without growing (Size() + Available() + Waste()). | 939 // Current capacity without growing (Size() + Available() + Waste()). |
| 985 int Capacity() { return accounting_stats_.Capacity(); } | 940 int Capacity() { return accounting_stats_.Capacity(); } |
| 986 | 941 |
| 987 // Total amount of memory committed for this space. For paged | 942 // Total amount of memory committed for this space. For paged |
| 988 // spaces this equals the capacity. | 943 // spaces this equals the capacity. |
| 989 int CommittedMemory() { return Capacity(); } | 944 int CommittedMemory() { return Capacity(); } |
| 990 | 945 |
| 991 // Available bytes without growing. | 946 // Available bytes without growing. |
| 992 int Available() { return accounting_stats_.Available(); } | 947 int Available() { return accounting_stats_.Available(); } |
| 993 | 948 |
| (...skipping 34 matching lines...) |
| 1028 } | 983 } |
| 1029 | 984 |
| 1030 // --------------------------------------------------------------------------- | 985 // --------------------------------------------------------------------------- |
| 1031 // Mark-compact collection support functions | 986 // Mark-compact collection support functions |
| 1032 | 987 |
| 1033 // Set the relocation point to the beginning of the space. | 988 // Set the relocation point to the beginning of the space. |
| 1034 void MCResetRelocationInfo(); | 989 void MCResetRelocationInfo(); |
| 1035 | 990 |
| 1036 // Writes relocation info to the top page. | 991 // Writes relocation info to the top page. |
| 1037 void MCWriteRelocationInfoToPage() { | 992 void MCWriteRelocationInfoToPage() { |
| 1038 TopPageOf(mc_forwarding_info_)-> | 993 TopPageOf(mc_forwarding_info_)->mc_relocation_top = mc_forwarding_info_.top; |
| 1039 SetAllocationWatermark(mc_forwarding_info_.top); | |
| 1040 } | 994 } |
| 1041 | 995 |
| 1042 // Computes the offset of a given address in this space to the beginning | 996 // Computes the offset of a given address in this space to the beginning |
| 1043 // of the space. | 997 // of the space. |
| 1044 int MCSpaceOffsetForAddress(Address addr); | 998 int MCSpaceOffsetForAddress(Address addr); |
| 1045 | 999 |
| 1046 // Updates the allocation pointer to the relocation top after a mark-compact | 1000 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1047 // collection. | 1001 // collection. |
| 1048 virtual void MCCommitRelocationInfo() = 0; | 1002 virtual void MCCommitRelocationInfo() = 0; |
| 1049 | 1003 |
| (...skipping 97 matching lines...) |
| 1147 | 1101 |
| 1148 // Slow path of AllocateRaw. This function is space-dependent. | 1102 // Slow path of AllocateRaw. This function is space-dependent. |
| 1149 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0; | 1103 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0; |
| 1150 | 1104 |
| 1151 // Slow path of MCAllocateRaw. | 1105 // Slow path of MCAllocateRaw. |
| 1152 HeapObject* SlowMCAllocateRaw(int size_in_bytes); | 1106 HeapObject* SlowMCAllocateRaw(int size_in_bytes); |
| 1153 | 1107 |
| 1154 #ifdef DEBUG | 1108 #ifdef DEBUG |
| 1155 // Returns the number of total pages in this space. | 1109 // Returns the number of total pages in this space. |
| 1156 int CountTotalPages(); | 1110 int CountTotalPages(); |
| 1111 |
| 1112 void DoPrintRSet(const char* space_name); |
| 1157 #endif | 1113 #endif |
| 1158 private: | 1114 private: |
| 1159 | 1115 |
| 1160 // Returns a pointer to the page of the relocation pointer. | 1116 // Returns a pointer to the page of the relocation pointer. |
| 1161 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); } | 1117 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); } |
| 1162 | 1118 |
| 1163 friend class PageIterator; | 1119 friend class PageIterator; |
| 1164 }; | 1120 }; |
| 1165 | 1121 |
| 1166 | 1122 |
| (...skipping 616 matching lines...) |
| 1783 // Give a block of memory to the space's free list. It might be added to | 1739 // Give a block of memory to the space's free list. It might be added to |
| 1784 // the free list or accounted as waste. | 1740 // the free list or accounted as waste. |
| 1785 // If add_to_freelist is false then only the accounting stats are updated; | 1741 // If add_to_freelist is false then only the accounting stats are updated; |
| 1786 // no attempt is made to add the area to the free list. | 1742 // no attempt is made to add the area to the free list. |
| 1787 void Free(Address start, int size_in_bytes, bool add_to_freelist) { | 1743 void Free(Address start, int size_in_bytes, bool add_to_freelist) { |
| 1788 accounting_stats_.DeallocateBytes(size_in_bytes); | 1744 accounting_stats_.DeallocateBytes(size_in_bytes); |
| 1789 | 1745 |
| 1790 if (add_to_freelist) { | 1746 if (add_to_freelist) { |
| 1791 int wasted_bytes = free_list_.Free(start, size_in_bytes); | 1747 int wasted_bytes = free_list_.Free(start, size_in_bytes); |
| 1792 accounting_stats_.WasteBytes(wasted_bytes); | 1748 accounting_stats_.WasteBytes(wasted_bytes); |
| 1793 } else { | |
| 1794 #ifdef DEBUG | |
| 1795 MemoryAllocator::ZapBlock(start, size_in_bytes); | |
| 1796 #endif | |
| 1797 } | 1749 } |
| 1798 } | 1750 } |
| 1799 | 1751 |
| 1800 // Prepare for full garbage collection. Resets the relocation pointer and | 1752 // Prepare for full garbage collection. Resets the relocation pointer and |
| 1801 // clears the free list. | 1753 // clears the free list. |
| 1802 virtual void PrepareForMarkCompact(bool will_compact); | 1754 virtual void PrepareForMarkCompact(bool will_compact); |
| 1803 | 1755 |
| 1804 // Updates the allocation pointer to the relocation top after a mark-compact | 1756 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1805 // collection. | 1757 // collection. |
| 1806 virtual void MCCommitRelocationInfo(); | 1758 virtual void MCCommitRelocationInfo(); |
| 1807 | 1759 |
| 1808 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); | 1760 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); |
| 1809 | 1761 |
| 1810 #ifdef DEBUG | 1762 #ifdef DEBUG |
| 1811 // Reports statistics for the space | 1763 // Reports statistics for the space |
| 1812 void ReportStatistics(); | 1764 void ReportStatistics(); |
| 1765 // Dump the remembered sets in the space to stdout. |
| 1766 void PrintRSet(); |
| 1813 #endif | 1767 #endif |
| 1814 | 1768 |
| 1815 protected: | 1769 protected: |
| 1816 // Virtual function in the superclass. Slow path of AllocateRaw. | 1770 // Virtual function in the superclass. Slow path of AllocateRaw. |
| 1817 HeapObject* SlowAllocateRaw(int size_in_bytes); | 1771 HeapObject* SlowAllocateRaw(int size_in_bytes); |
| 1818 | 1772 |
| 1819 // Virtual function in the superclass. Allocate linearly at the start of | 1773 // Virtual function in the superclass. Allocate linearly at the start of |
| 1820 // the page after current_page (there is assumed to be one). | 1774 // the page after current_page (there is assumed to be one). |
| 1821 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); | 1775 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); |
| 1822 | 1776 |
| (...skipping 28 matching lines...) |
| 1851 } | 1805 } |
| 1852 | 1806 |
| 1853 int object_size_in_bytes() { return object_size_in_bytes_; } | 1807 int object_size_in_bytes() { return object_size_in_bytes_; } |
| 1854 | 1808 |
| 1855 // Give a fixed-sized block of memory to the space's free list. | 1809 // Give a fixed-sized block of memory to the space's free list. |
| 1856 // If add_to_freelist is false then only the accounting stats are updated; | 1810 // If add_to_freelist is false then only the accounting stats are updated; |
| 1857 // no attempt is made to add the area to the free list. | 1811 // no attempt is made to add the area to the free list. |
| 1858 void Free(Address start, bool add_to_freelist) { | 1812 void Free(Address start, bool add_to_freelist) { |
| 1859 if (add_to_freelist) { | 1813 if (add_to_freelist) { |
| 1860 free_list_.Free(start); | 1814 free_list_.Free(start); |
| 1861 } else { | |
| 1862 #ifdef DEBUG | |
| 1863 MemoryAllocator::ZapBlock(start, object_size_in_bytes_); | |
| 1864 #endif | |
| 1865 } | 1815 } |
| 1866 accounting_stats_.DeallocateBytes(object_size_in_bytes_); | 1816 accounting_stats_.DeallocateBytes(object_size_in_bytes_); |
| 1867 } | 1817 } |
| 1868 | 1818 |
| 1869 // Prepares for a mark-compact GC. | 1819 // Prepares for a mark-compact GC. |
| 1870 virtual void PrepareForMarkCompact(bool will_compact); | 1820 virtual void PrepareForMarkCompact(bool will_compact); |
| 1871 | 1821 |
| 1872 // Updates the allocation pointer to the relocation top after a mark-compact | 1822 // Updates the allocation pointer to the relocation top after a mark-compact |
| 1873 // collection. | 1823 // collection. |
| 1874 virtual void MCCommitRelocationInfo(); | 1824 virtual void MCCommitRelocationInfo(); |
| 1875 | 1825 |
| 1876 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); | 1826 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); |
| 1877 | 1827 |
| 1878 #ifdef DEBUG | 1828 #ifdef DEBUG |
| 1879 // Reports statistics for the space | 1829 // Reports statistics for the space |
| 1880 void ReportStatistics(); | 1830 void ReportStatistics(); |
| 1831 |
| 1832 // Dump the remembered sets in the space to stdout. |
| 1833 void PrintRSet(); |
| 1881 #endif | 1834 #endif |
| 1882 | 1835 |
| 1883 protected: | 1836 protected: |
| 1884 // Virtual function in the superclass. Slow path of AllocateRaw. | 1837 // Virtual function in the superclass. Slow path of AllocateRaw. |
| 1885 HeapObject* SlowAllocateRaw(int size_in_bytes); | 1838 HeapObject* SlowAllocateRaw(int size_in_bytes); |
| 1886 | 1839 |
| 1887 // Virtual function in the superclass. Allocate linearly at the start of | 1840 // Virtual function in the superclass. Allocate linearly at the start of |
| 1888 // the page after current_page (there is assumed to be one). | 1841 // the page after current_page (there is assumed to be one). |
| 1889 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); | 1842 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); |
| 1890 | 1843 |
| (...skipping 48 matching lines...) |
| 1939 return !MapPointersEncodable() && live_maps <= CompactionThreshold(); | 1892 return !MapPointersEncodable() && live_maps <= CompactionThreshold(); |
| 1940 } | 1893 } |
| 1941 | 1894 |
| 1942 Address TopAfterCompaction(int live_maps) { | 1895 Address TopAfterCompaction(int live_maps) { |
| 1943 ASSERT(NeedsCompaction(live_maps)); | 1896 ASSERT(NeedsCompaction(live_maps)); |
| 1944 | 1897 |
| 1945 int pages_left = live_maps / kMapsPerPage; | 1898 int pages_left = live_maps / kMapsPerPage; |
| 1946 PageIterator it(this, PageIterator::ALL_PAGES); | 1899 PageIterator it(this, PageIterator::ALL_PAGES); |
| 1947 while (pages_left-- > 0) { | 1900 while (pages_left-- > 0) { |
| 1948 ASSERT(it.has_next()); | 1901 ASSERT(it.has_next()); |
| 1949 it.next()->SetRegionMarks(Page::kAllRegionsCleanMarks); | 1902 it.next()->ClearRSet(); |
| 1950 } | 1903 } |
| 1951 ASSERT(it.has_next()); | 1904 ASSERT(it.has_next()); |
| 1952 Page* top_page = it.next(); | 1905 Page* top_page = it.next(); |
| 1953 top_page->SetRegionMarks(Page::kAllRegionsCleanMarks); | 1906 top_page->ClearRSet(); |
| 1954 ASSERT(top_page->is_valid()); | 1907 ASSERT(top_page->is_valid()); |
| 1955 | 1908 |
| 1956 int offset = live_maps % kMapsPerPage * Map::kSize; | 1909 int offset = live_maps % kMapsPerPage * Map::kSize; |
| 1957 Address top = top_page->ObjectAreaStart() + offset; | 1910 Address top = top_page->ObjectAreaStart() + offset; |
| 1958 ASSERT(top < top_page->ObjectAreaEnd()); | 1911 ASSERT(top < top_page->ObjectAreaEnd()); |
| 1959 ASSERT(Contains(top)); | 1912 ASSERT(Contains(top)); |
| 1960 | 1913 |
| 1961 return top; | 1914 return top; |
| 1962 } | 1915 } |
| 1963 | 1916 |
| (...skipping 70 matching lines...) |
| 2034 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset). | 1987 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset). |
| 2035 // A large object always starts at Page::kObjectStartOffset within a page. | 1988 // A large object always starts at Page::kObjectStartOffset within a page. |
| 2036 // Large objects do not move during garbage collections. | 1989 // Large objects do not move during garbage collections. |
| 2037 | 1990 |
| 2038 // A LargeObjectChunk holds exactly one large object page with exactly one | 1991 // A LargeObjectChunk holds exactly one large object page with exactly one |
| 2039 // large object. | 1992 // large object. |
| 2040 class LargeObjectChunk { | 1993 class LargeObjectChunk { |
| 2041 public: | 1994 public: |
| 2042 // Allocates a new LargeObjectChunk that contains a large object page | 1995 // Allocates a new LargeObjectChunk that contains a large object page |
| 2043 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large | 1996 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large |
| 2044 // object) bytes after the object area start of that page. | 1997 // object and possibly extra remembered set words) bytes after the object |
| 2045 // The allocated chunk size is set in the output parameter chunk_size. | 1998 // area start of that page. The allocated chunk size is set in the output |
| 1999 // parameter chunk_size. |
| 2046 static LargeObjectChunk* New(int size_in_bytes, | 2000 static LargeObjectChunk* New(int size_in_bytes, |
| 2047 size_t* chunk_size, | 2001 size_t* chunk_size, |
| 2048 Executability executable); | 2002 Executability executable); |
| 2049 | 2003 |
| 2050 // Interpret a raw address as a large object chunk. | 2004 // Interpret a raw address as a large object chunk. |
| 2051 static LargeObjectChunk* FromAddress(Address address) { | 2005 static LargeObjectChunk* FromAddress(Address address) { |
| 2052 return reinterpret_cast<LargeObjectChunk*>(address); | 2006 return reinterpret_cast<LargeObjectChunk*>(address); |
| 2053 } | 2007 } |
| 2054 | 2008 |
| 2055 // Returns the address of this chunk. | 2009 // Returns the address of this chunk. |
| 2056 Address address() { return reinterpret_cast<Address>(this); } | 2010 Address address() { return reinterpret_cast<Address>(this); } |
| 2057 | 2011 |
| 2058 // Accessors for the fields of the chunk. | 2012 // Accessors for the fields of the chunk. |
| 2059 LargeObjectChunk* next() { return next_; } | 2013 LargeObjectChunk* next() { return next_; } |
| 2060 void set_next(LargeObjectChunk* chunk) { next_ = chunk; } | 2014 void set_next(LargeObjectChunk* chunk) { next_ = chunk; } |
| 2061 | 2015 |
| 2062 size_t size() { return size_; } | 2016 size_t size() { return size_; } |
| 2063 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; } | 2017 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; } |
| 2064 | 2018 |
| 2065 // Returns the object in this chunk. | 2019 // Returns the object in this chunk. |
| 2066 inline HeapObject* GetObject(); | 2020 inline HeapObject* GetObject(); |
| 2067 | 2021 |
| 2068 // Given a requested size, returns the physical size of a chunk to be | 2022 // Given a requested size (including any extra remembered set words), |
| 2069 // allocated. | 2023 // returns the physical size of a chunk to be allocated. |
| 2070 static int ChunkSizeFor(int size_in_bytes); | 2024 static int ChunkSizeFor(int size_in_bytes); |
| 2071 | 2025 |
| 2072 // Given a chunk size, returns the object size it can accommodate. Used by | 2026 // Given a chunk size, returns the object size it can accommodate (not |
| 2073 // LargeObjectSpace::Available. | 2027 // including any extra remembered set words). Used by |
| 2028 // LargeObjectSpace::Available. Note that this can overestimate the size |
| 2029 // of the object that will fit in a chunk: if the object requires extra |
| 2030 // remembered set words (e.g., for large fixed arrays), the actual object |
| 2031 // size for the chunk will be smaller than reported by this function. |
| 2074 static int ObjectSizeFor(int chunk_size) { | 2032 static int ObjectSizeFor(int chunk_size) { |
| 2075 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0; | 2033 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0; |
| 2076 return chunk_size - Page::kPageSize - Page::kObjectStartOffset; | 2034 return chunk_size - Page::kPageSize - Page::kObjectStartOffset; |
| 2077 } | 2035 } |
| 2078 | 2036 |
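
As a worked example of ObjectSizeFor under the 8K page size and 256-byte object start offset described earlier: a 24K chunk accommodates at most 24576 - 8192 - 256 = 16128 bytes of object, and possibly slightly less once any extra remembered set words are subtracted.
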
| 2079 private: | 2037 private: |
| 2080 // A pointer to the next large object chunk in the space or NULL. | 2038 // A pointer to the next large object chunk in the space or NULL. |
| 2081 LargeObjectChunk* next_; | 2039 LargeObjectChunk* next_; |
| 2082 | 2040 |
| 2083 // The size of this chunk. | 2041 // The size of this chunk. |
| (...skipping 15 matching lines...) Expand all Loading... |
| 2099 // Releases internal resources, frees objects in this space. | 2057 // Releases internal resources, frees objects in this space. |
| 2100 void TearDown(); | 2058 void TearDown(); |
| 2101 | 2059 |
| 2102 // Allocates a (non-FixedArray, non-Code) large object. | 2060 // Allocates a (non-FixedArray, non-Code) large object. |
| 2103 Object* AllocateRaw(int size_in_bytes); | 2061 Object* AllocateRaw(int size_in_bytes); |
| 2104 // Allocates a large Code object. | 2062 // Allocates a large Code object. |
| 2105 Object* AllocateRawCode(int size_in_bytes); | 2063 Object* AllocateRawCode(int size_in_bytes); |
| 2106 // Allocates a large FixedArray. | 2064 // Allocates a large FixedArray. |
| 2107 Object* AllocateRawFixedArray(int size_in_bytes); | 2065 Object* AllocateRawFixedArray(int size_in_bytes); |
| 2108 | 2066 |
| 2109 // Available bytes for objects in this space. | 2067 // Available bytes for objects in this space, not including any extra |
| 2068 // remembered set words. |
| 2110 int Available() { | 2069 int Available() { |
| 2111 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available()); | 2070 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available()); |
| 2112 } | 2071 } |
| 2113 | 2072 |
| 2114 virtual int Size() { | 2073 virtual int Size() { |
| 2115 return size_; | 2074 return size_; |
| 2116 } | 2075 } |
| 2117 | 2076 |
| 2118 int PageCount() { | 2077 int PageCount() { |
| 2119 return page_count_; | 2078 return page_count_; |
| 2120 } | 2079 } |
| 2121 | 2080 |
| 2122 // Finds an object for a given address, returns Failure::Exception() | 2081 // Finds an object for a given address, returns Failure::Exception() |
| 2123 // if it is not found. The function iterates through all objects in this | 2082 // if it is not found. The function iterates through all objects in this |
| 2124 // space, may be slow. | 2083 // space, may be slow. |
| 2125 Object* FindObject(Address a); | 2084 Object* FindObject(Address a); |
| 2126 | 2085 |
| 2127 // Iterates objects covered by dirty regions. | 2086 // Clears remembered sets. |
| 2128 void IterateDirtyRegions(ObjectSlotCallback func); | 2087 void ClearRSet(); |
| 2088 |
| 2089 // Iterates objects whose remembered set bits are set. |
| 2090 void IterateRSet(ObjectSlotCallback func); |
| 2129 | 2091 |
| 2130 // Frees unmarked objects. | 2092 // Frees unmarked objects. |
| 2131 void FreeUnmarkedObjects(); | 2093 void FreeUnmarkedObjects(); |
| 2132 | 2094 |
| 2133 // Checks whether a heap object is in this space; O(1). | 2095 // Checks whether a heap object is in this space; O(1). |
| 2134 bool Contains(HeapObject* obj); | 2096 bool Contains(HeapObject* obj); |
| 2135 | 2097 |
| 2136 // Checks whether the space is empty. | 2098 // Checks whether the space is empty. |
| 2137 bool IsEmpty() { return first_chunk_ == NULL; } | 2099 bool IsEmpty() { return first_chunk_ == NULL; } |
| 2138 | 2100 |
| 2139 // See the comments for ReserveSpace in the Space class. This has to be | 2101 // See the comments for ReserveSpace in the Space class. This has to be |
| 2140 // called after ReserveSpace has been called on the paged spaces, since they | 2102 // called after ReserveSpace has been called on the paged spaces, since they |
| 2141 // may use some memory, leaving less for large objects. | 2103 // may use some memory, leaving less for large objects. |
| 2142 virtual bool ReserveSpace(int bytes); | 2104 virtual bool ReserveSpace(int bytes); |
| 2143 | 2105 |
| 2144 #ifdef ENABLE_HEAP_PROTECTION | 2106 #ifdef ENABLE_HEAP_PROTECTION |
| 2145 // Protect/unprotect the space by marking it read-only/writable. | 2107 // Protect/unprotect the space by marking it read-only/writable. |
| 2146 void Protect(); | 2108 void Protect(); |
| 2147 void Unprotect(); | 2109 void Unprotect(); |
| 2148 #endif | 2110 #endif |
| 2149 | 2111 |
| 2150 #ifdef DEBUG | 2112 #ifdef DEBUG |
| 2151 virtual void Verify(); | 2113 virtual void Verify(); |
| 2152 virtual void Print(); | 2114 virtual void Print(); |
| 2153 void ReportStatistics(); | 2115 void ReportStatistics(); |
| 2154 void CollectCodeStatistics(); | 2116 void CollectCodeStatistics(); |
| 2117 // Dump the remembered sets in the space to stdout. |
| 2118 void PrintRSet(); |
| 2155 #endif | 2119 #endif |
| 2156 // Checks whether an address is in the object area in this space. It | 2120 // Checks whether an address is in the object area in this space. It |
| 2157 // iterates all objects in the space. May be slow. | 2121 // iterates all objects in the space. May be slow. |
| 2158 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); } | 2122 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); } |
| 2159 | 2123 |
| 2160 private: | 2124 private: |
| 2161 // The head of the linked list of large object chunks. | 2125 // The head of the linked list of large object chunks. |
| 2162 LargeObjectChunk* first_chunk_; | 2126 LargeObjectChunk* first_chunk_; |
| 2163 int size_; // allocated bytes | 2127 int size_; // allocated bytes |
| 2164 int page_count_; // number of chunks | 2128 int page_count_; // number of chunks |
| 2165 | 2129 |
| 2166 | 2130 |
| 2167 // Shared implementation of AllocateRaw, AllocateRawCode and | 2131 // Shared implementation of AllocateRaw, AllocateRawCode and |
| 2168 // AllocateRawFixedArray. | 2132 // AllocateRawFixedArray. |
| 2169 Object* AllocateRawInternal(int requested_size, | 2133 Object* AllocateRawInternal(int requested_size, |
| 2170 int object_size, | 2134 int object_size, |
| 2171 Executability executable); | 2135 Executability executable); |
| 2172 | 2136 |
| 2137 // Returns the number of extra bytes (rounded up to the nearest full word) |
| 2138 // of remembered set required to cover extra_object_bytes of extra pointers. |
| 2139 static inline int ExtraRSetBytesFor(int extra_object_bytes); |
| 2140 |
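
The computation implied by the comment above, written out as a hypothetical sketch (not necessarily V8's exact code): one remembered set bit is needed per pointer-sized slot, and the bit count is rounded up to whole words.

    // Hedged sketch of the calculation:
    //   int extra_rset_bits = extra_object_bytes / kPointerSize;
    //   return ((extra_rset_bits + kBitsPerInt - 1) / kBitsPerInt) * kIntSize;
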
| 2173 friend class LargeObjectIterator; | 2141 friend class LargeObjectIterator; |
| 2174 | 2142 |
| 2175 public: | 2143 public: |
| 2176 TRACK_MEMORY("LargeObjectSpace") | 2144 TRACK_MEMORY("LargeObjectSpace") |
| 2177 }; | 2145 }; |
| 2178 | 2146 |
| 2179 | 2147 |
| 2180 class LargeObjectIterator: public ObjectIterator { | 2148 class LargeObjectIterator: public ObjectIterator { |
| 2181 public: | 2149 public: |
| 2182 explicit LargeObjectIterator(LargeObjectSpace* space); | 2150 explicit LargeObjectIterator(LargeObjectSpace* space); |
| 2183 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); | 2151 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); |
| 2184 | 2152 |
| 2185 HeapObject* next(); | 2153 HeapObject* next(); |
| 2186 | 2154 |
| 2187 // implementation of ObjectIterator. | 2155 // implementation of ObjectIterator. |
| 2188 virtual HeapObject* next_object() { return next(); } | 2156 virtual HeapObject* next_object() { return next(); } |
| 2189 | 2157 |
| 2190 private: | 2158 private: |
| 2191 LargeObjectChunk* current_; | 2159 LargeObjectChunk* current_; |
| 2192 HeapObjectCallback size_func_; | 2160 HeapObjectCallback size_func_; |
| 2193 }; | 2161 }; |
| 2194 | 2162 |
| 2195 | 2163 |
| 2196 } } // namespace v8::internal | 2164 } } // namespace v8::internal |
| 2197 | 2165 |
| 2198 #endif // V8_SPACES_H_ | 2166 #endif // V8_SPACES_H_ |