
Side by Side Diff: src/spaces.h

Issue 2073018: Reverting r4703. (Closed) Base URL: http://v8.googlecode.com/svn/branches/bleeding_edge/
Patch Set: Created 10 years, 7 months ago
1 // Copyright 2006-2008 the V8 project authors. All rights reserved. 1 // Copyright 2006-2008 the V8 project authors. All rights reserved.
2 // Redistribution and use in source and binary forms, with or without 2 // Redistribution and use in source and binary forms, with or without
3 // modification, are permitted provided that the following conditions are 3 // modification, are permitted provided that the following conditions are
4 // met: 4 // met:
5 // 5 //
6 // * Redistributions of source code must retain the above copyright 6 // * Redistributions of source code must retain the above copyright
7 // notice, this list of conditions and the following disclaimer. 7 // notice, this list of conditions and the following disclaimer.
8 // * Redistributions in binary form must reproduce the above 8 // * Redistributions in binary form must reproduce the above
9 // copyright notice, this list of conditions and the following 9 // copyright notice, this list of conditions and the following
10 // disclaimer in the documentation and/or other materials provided 10 // disclaimer in the documentation and/or other materials provided
(...skipping 27 matching lines...)
38 // Heap structures: 38 // Heap structures:
39 // 39 //
40 // A JS heap consists of a young generation, an old generation, and a large 40 // A JS heap consists of a young generation, an old generation, and a large
41 // object space. The young generation is divided into two semispaces. A 41 // object space. The young generation is divided into two semispaces. A
42 // scavenger implements Cheney's copying algorithm. The old generation is 42 // scavenger implements Cheney's copying algorithm. The old generation is
43 // separated into a map space and an old object space. The map space contains 43 // separated into a map space and an old object space. The map space contains
44 // all (and only) map objects, the rest of old objects go into the old space. 44 // all (and only) map objects, the rest of old objects go into the old space.
45 // The old generation is collected by a mark-sweep-compact collector. 45 // The old generation is collected by a mark-sweep-compact collector.
46 // 46 //
47 // The semispaces of the young generation are contiguous. The old and map 47 // The semispaces of the young generation are contiguous. The old and map
48 // spaces consist of a list of pages. A page has a page header and an object 48 // spaces consist of a list of pages. A page has a page header, a remembered
49 // area. The page size is deliberately chosen to be 8K bytes. 49 // set area, and an object area. The page size is deliberately chosen to be 8K
50 // The first word of a page is an opaque page header that has the 50 // bytes. The first word of a page is an opaque page header that has the
51 // address of the next page and its ownership information. The second word may 51 // address of the next page and its ownership information. The second word may
52 // have the allocation top address of this page. Heap objects are aligned to the 52 // have the allocation top address of this page. The next 248 bytes are
53 // pointer size. 53 // remembered sets. Heap objects are aligned to the pointer size (4 bytes). A
54 // remembered set bit corresponds to a pointer in the object area.
54 // 55 //
55 // There is a separate large object space for objects larger than 56 // There is a separate large object space for objects larger than
56 // Page::kMaxHeapObjectSize, so that they do not have to move during 57 // Page::kMaxHeapObjectSize, so that they do not have to move during
57 // collection. The large object space is paged. Pages in large object space 58 // collection. The large object space is paged and uses the same remembered
58 // may be larger than 8K. 59 // set implementation. Pages in large object space may be larger than 8K.
59 // 60 //
60 // A card marking write barrier is used to keep track of intergenerational 61 // NOTE: The mark-compact collector rebuilds the remembered set after a
61 // references. Old space pages are divided into regions of Page::kRegionSize 62 // collection. It reuses the first few words of the remembered set for
62 // size. Each region has a corresponding dirty bit in the page header which is 63 // bookkeeping relocation information.
63 // set if the region might contain pointers to new space. For details about 64
64 // dirty bits encoding see comments in the Page::GetRegionNumberForAddress()
65 // method body.
66 //
67 // During scavenges and mark-sweep collections we iterate intergenerational
68 // pointers without decoding heap object maps so if the page belongs to old
69 // pointer space or large object space it is essential to guarantee that
70 // the page does not contain any garbage pointers to new space: every pointer
71 // aligned word which satisfies the Heap::InNewSpace() predicate must be a
72 // pointer to a live heap object in new space. Thus objects in old pointer
73 // and large object spaces should have a special layout (e.g. no bare integer
74 // fields). This requirement does not apply to map space which is iterated in
75 // a special fashion. However we still require pointer fields of dead maps to
76 // be cleaned.
77 //
78 // To enable lazy cleaning of old space pages we use a notion of allocation
79 // watermark. Every pointer under watermark is considered to be well formed.
80 // Page allocation watermark is not necessarily equal to page allocation top but
81 // all alive objects on page should reside under allocation watermark.
82 // During scavenge allocation watermark might be bumped and invalid pointers
83 // might appear below it. To avoid following them we store a valid watermark
84 // into special field in the page header and set a page WATERMARK_INVALIDATED
85 // flag. For details see comments in the Page::SetAllocationWatermark() method
86 // body.
87 //
88 65
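For context on the scavenger mentioned in the comment above: Cheney's algorithm evacuates live objects from the from-space to the to-space, using the to-space itself as a breadth-first work queue. The following is a minimal, self-contained sketch under simplified assumptions (a toy object layout with an explicit slot count; none of these names come from V8):

  #include <cstddef>
  #include <cstdint>
  #include <cstring>
  #include <utility>

  struct Obj {
    Obj* forward;      // forwarding pointer once evacuated, else nullptr
    size_t size;       // total size in bytes, including this header
    size_t num_slots;  // number of pointer fields following the header
    Obj** slots() { return reinterpret_cast<Obj**>(this + 1); }
  };

  struct SemiSpaces {
    uint8_t* from;
    uint8_t* to;
    size_t alloc;      // bump pointer into 'to' during a scavenge
  };

  static Obj* Evacuate(SemiSpaces* s, Obj* o) {
    if (o == nullptr) return nullptr;
    if (o->forward != nullptr) return o->forward;   // already copied
    Obj* copy = reinterpret_cast<Obj*>(s->to + s->alloc);
    std::memcpy(copy, o, o->size);
    s->alloc += o->size;
    copy->forward = nullptr;
    o->forward = copy;                              // leave forwarding address
    return copy;
  }

  void Scavenge(SemiSpaces* s, Obj** roots, size_t num_roots) {
    s->alloc = 0;
    for (size_t i = 0; i < num_roots; i++) roots[i] = Evacuate(s, roots[i]);
    // Cheney scan: everything between 'scan' and 'alloc' is unprocessed.
    for (size_t scan = 0; scan < s->alloc;) {
      Obj* o = reinterpret_cast<Obj*>(s->to + scan);
      for (size_t j = 0; j < o->num_slots; j++)
        o->slots()[j] = Evacuate(s, o->slots()[j]);
      scan += o->size;
    }
    std::swap(s->from, s->to);                      // flip the semispaces
  }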
89 // Some assertion macros used in the debugging mode. 66 // Some assertion macros used in the debugging mode.
90 67
91 #define ASSERT_PAGE_ALIGNED(address) \ 68 #define ASSERT_PAGE_ALIGNED(address) \
92 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0) 69 ASSERT((OffsetFrom(address) & Page::kPageAlignmentMask) == 0)
93 70
94 #define ASSERT_OBJECT_ALIGNED(address) \ 71 #define ASSERT_OBJECT_ALIGNED(address) \
95 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0) 72 ASSERT((OffsetFrom(address) & kObjectAlignmentMask) == 0)
96 73
97 #define ASSERT_MAP_ALIGNED(address) \ 74 #define ASSERT_MAP_ALIGNED(address) \
98 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0) 75 ASSERT((OffsetFrom(address) & kMapAlignmentMask) == 0)
99 76
100 #define ASSERT_OBJECT_SIZE(size) \ 77 #define ASSERT_OBJECT_SIZE(size) \
101 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize)) 78 ASSERT((0 < size) && (size <= Page::kMaxHeapObjectSize))
102 79
103 #define ASSERT_PAGE_OFFSET(offset) \ 80 #define ASSERT_PAGE_OFFSET(offset) \
104 ASSERT((Page::kObjectStartOffset <= offset) \ 81 ASSERT((Page::kObjectStartOffset <= offset) \
105 && (offset <= Page::kPageSize)) 82 && (offset <= Page::kPageSize))
106 83
107 #define ASSERT_MAP_PAGE_INDEX(index) \ 84 #define ASSERT_MAP_PAGE_INDEX(index) \
108 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex)) 85 ASSERT((0 <= index) && (index <= MapSpace::kMaxMapPageIndex))
109 86
110 87
111 class PagedSpace; 88 class PagedSpace;
112 class MemoryAllocator; 89 class MemoryAllocator;
113 class AllocationInfo; 90 class AllocationInfo;
114 91
115 // ----------------------------------------------------------------------------- 92 // -----------------------------------------------------------------------------
116 // A page normally has 8K bytes. Large object pages may be larger. A page 93 // A page normally has 8K bytes. Large object pages may be larger. A page
117 // address is always aligned to the 8K page size. 94 // address is always aligned to the 8K page size. A page is divided into
95 // three areas: the first two words are used for bookkeeping, the next 248
96 // bytes are used as remembered set, and the rest of the page is the object
97 // area.
118 // 98 //
119 // Each page starts with a header of Page::kPageHeaderSize size which contains 99 // Pointers are aligned to the pointer size (4), so only 1 bit is needed
120 // bookkeeping data. 100 // for a pointer in the remembered set. Given an address, its remembered set
101 // bit position (offset from the start of the page) is calculated by dividing
102 // its page offset by 32. Therefore, the object area in a page starts at the
103 // 256th byte (8K/32). Bytes 0 to 255 do not need the remembered set, so that
104 // the first two words (64 bits) in a page can be used for other purposes.
105 //
106 // On the 64-bit platform, we add an offset to the start of the remembered set,
107 // and pointers are aligned to 8-byte pointer size. This means that we need
108 // only 128 bytes for the RSet, and only get two bytes free in the RSet's RSet.
109 // For this reason we add an offset to get room for the Page data at the start.
121 // 110 //
122 // The mark-compact collector transforms a map pointer into a page index and a 111 // The mark-compact collector transforms a map pointer into a page index and a
123 // page offset. The exact encoding is described in the comments for 112 // page offset. The exact encoding is described in the comments for
124 // class MapWord in objects.h. 113 // class MapWord in objects.h.
125 // 114 //
126 // The only way to get a page pointer is by calling factory methods: 115 // The only way to get a page pointer is by calling factory methods:
127 // Page* p = Page::FromAddress(addr); or 116 // Page* p = Page::FromAddress(addr); or
128 // Page* p = Page::FromAllocationTop(top); 117 // Page* p = Page::FromAllocationTop(top);
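Both factory methods rely on pages being 8K-aligned, so any interior address can be mapped back to its page by masking off the low 13 bits. A sketch of that arithmetic (constants restated from the comments above; the struct name is illustrative):

  #include <cstdint>

  const int kPageSizeBits = 13;                               // 8K = 1 << 13
  const uintptr_t kPageAlignmentMask = (1u << kPageSizeBits) - 1;

  struct PageSketch {
    static PageSketch* FromAddress(uintptr_t a) {
      // Clearing the low 13 bits of any address in [page, page + 8K) gives
      // the page start.
      return reinterpret_cast<PageSketch*>(a & ~kPageAlignmentMask);
    }
    static PageSketch* FromAllocationTop(uintptr_t top) {
      // An allocation top may point one past the end of a full page, so
      // step back one byte before masking.
      return FromAddress(top - 1);
    }
  };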
129 class Page { 118 class Page {
130 public: 119 public:
131 // Returns the page containing a given address. The address ranges 120 // Returns the page containing a given address. The address ranges
132 // from [page_addr .. page_addr + kPageSize[ 121 // from [page_addr .. page_addr + kPageSize[
133 // 122 //
(...skipping 20 matching lines...)
154 143
155 // Checks whether this is a valid page address. 144 // Checks whether this is a valid page address.
156 bool is_valid() { return address() != NULL; } 145 bool is_valid() { return address() != NULL; }
157 146
158 // Returns the next page of this page. 147 // Returns the next page of this page.
159 inline Page* next_page(); 148 inline Page* next_page();
160 149
161 // Return the end of allocation in this page. Undefined for unused pages. 150 // Return the end of allocation in this page. Undefined for unused pages.
162 inline Address AllocationTop(); 151 inline Address AllocationTop();
163 152
164 // Return the allocation watermark for the page.
165 // For old space pages it is guaranteed that the area under the watermark
166 // does not contain any garbage pointers to new space.
167 inline Address AllocationWatermark();
168
169 // Return the allocation watermark offset from the beginning of the page.
170 inline uint32_t AllocationWatermarkOffset();
171
172 inline void SetAllocationWatermark(Address allocation_watermark);
173
174 inline void SetCachedAllocationWatermark(Address allocation_watermark);
175 inline Address CachedAllocationWatermark();
176
177 // Returns the start address of the object area in this page. 153 // Returns the start address of the object area in this page.
178 Address ObjectAreaStart() { return address() + kObjectStartOffset; } 154 Address ObjectAreaStart() { return address() + kObjectStartOffset; }
179 155
180 // Returns the end address (exclusive) of the object area in this page. 156 // Returns the end address (exclusive) of the object area in this page.
181 Address ObjectAreaEnd() { return address() + Page::kPageSize; } 157 Address ObjectAreaEnd() { return address() + Page::kPageSize; }
182 158
159 // Returns the start address of the remembered set area.
160 Address RSetStart() { return address() + kRSetStartOffset; }
161
162 // Returns the end address of the remembered set area (exclusive).
163 Address RSetEnd() { return address() + kRSetEndOffset; }
164
183 // Checks whether an address is page aligned. 165 // Checks whether an address is page aligned.
184 static bool IsAlignedToPageSize(Address a) { 166 static bool IsAlignedToPageSize(Address a) {
185 return 0 == (OffsetFrom(a) & kPageAlignmentMask); 167 return 0 == (OffsetFrom(a) & kPageAlignmentMask);
186 } 168 }
187 169
188 // True if this page was in use before current compaction started. 170 // True if this page was in use before current compaction started.
189 // Result is valid only for pages owned by paged spaces and 171 // Result is valid only for pages owned by paged spaces and
190 // only after PagedSpace::PrepareForMarkCompact was called. 172 // only after PagedSpace::PrepareForMarkCompact was called.
191 inline bool WasInUseBeforeMC(); 173 inline bool WasInUseBeforeMC();
192 174
(...skipping 11 matching lines...)
204 return offset; 186 return offset;
205 } 187 }
206 188
207 // Returns the address for a given offset into this page. 189 // Returns the address for a given offset into this page.
208 Address OffsetToAddress(int offset) { 190 Address OffsetToAddress(int offset) {
209 ASSERT_PAGE_OFFSET(offset); 191 ASSERT_PAGE_OFFSET(offset);
210 return address() + offset; 192 return address() + offset;
211 } 193 }
212 194
213 // --------------------------------------------------------------------- 195 // ---------------------------------------------------------------------
214 // Card marking support 196 // Remembered set support
215 197
216 static const uint32_t kAllRegionsCleanMarks = 0x0; 198 // Clears remembered set in this page.
217 static const uint32_t kAllRegionsDirtyMarks = 0xFFFFFFFF; 199 inline void ClearRSet();
218 200
219 inline uint32_t GetRegionMarks(); 201 // Return the address of the remembered set word corresponding to an
220 inline void SetRegionMarks(uint32_t dirty); 202 // object address/offset pair, and the bit encoded as a single-bit
203 // mask in the output parameter 'bitmask'.
204 INLINE(static Address ComputeRSetBitPosition(Address address, int offset,
205 uint32_t* bitmask));
221 206
222 inline uint32_t GetRegionMaskForAddress(Address addr); 207 // Sets the corresponding remembered set bit for a given address.
223 inline int GetRegionNumberForAddress(Address addr); 208 INLINE(static void SetRSet(Address address, int offset));
224 209
225 inline void MarkRegionDirty(Address addr); 210 // Clears the corresponding remembered set bit for a given address.
226 inline bool IsRegionDirty(Address addr); 211 static inline void UnsetRSet(Address address, int offset);
227 212
228 inline void ClearRegionMarks(Address start, 213 // Checks whether the remembered set bit for a given address is set.
229 Address end, 214 static inline bool IsRSetSet(Address address, int offset);
230 bool reaches_limit); 215
216 #ifdef DEBUG
217 // Use a state to mark whether remembered set space can be used for other
218 // purposes.
219 enum RSetState { IN_USE, NOT_IN_USE };
220 static bool is_rset_in_use() { return rset_state_ == IN_USE; }
221 static void set_rset_state(RSetState state) { rset_state_ = state; }
222 #endif
231 223
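This section is the heart of the revert: the left column's card marking kept one dirty bit per 256-byte region (32 bits per 8K page), while the remembered set restored on the right keeps one bit per 4-byte pointer slot, which is why bytes 0 through 255 of a page can hold the bits for an object area starting at offset 256. A sketch of the bit mapping for the 32-bit layout (kRSetOffset == 0; the function name is hypothetical, the real helper being ComputeRSetBitPosition):

  #include <cstdint>

  inline void RSetBitFor(uintptr_t slot_addr, uint8_t** rset_byte,
                         uint8_t* bitmask) {
    const uintptr_t kPageAlignmentMask = (1u << 13) - 1;  // 8K pages
    uintptr_t page_start = slot_addr & ~kPageAlignmentMask;
    uintptr_t page_offset = slot_addr & kPageAlignmentMask;
    uintptr_t bit_index = page_offset / 4;     // one bit per pointer slot
    *rset_byte = reinterpret_cast<uint8_t*>(page_start + bit_index / 8);
    *bitmask = static_cast<uint8_t>(1u << (bit_index % 8));
  }

  // SetRSet / UnsetRSet / IsRSetSet then reduce to single bit operations:
  //   uint8_t* byte; uint8_t mask;
  //   RSetBitFor(addr, &byte, &mask);
  //   *byte |= mask;                    // SetRSet
  //   *byte &= ~mask;                   // UnsetRSet
  //   bool set = (*byte & mask) != 0;   // IsRSetSet

Note that bit_index / 8 equals page_offset / 32, matching the "divide the page offset by 32" rule in the class comment.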
232 // Page size in bytes. This must be a multiple of the OS page size. 224 // Page size in bytes. This must be a multiple of the OS page size.
233 static const int kPageSize = 1 << kPageSizeBits; 225 static const int kPageSize = 1 << kPageSizeBits;
234 226
235 // Page size mask. 227 // Page size mask.
236 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1; 228 static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1;
237 229
238 static const int kPageHeaderSize = kPointerSize + kPointerSize + kIntSize + 230 // The offset of the remembered set in a page, in addition to the empty bytes
239 kIntSize + kPointerSize; 231 // formed as the remembered bits of the remembered set itself.
232 #ifdef V8_TARGET_ARCH_X64
233 static const int kRSetOffset = 4 * kPointerSize; // Room for four pointers.
234 #else
235 static const int kRSetOffset = 0;
236 #endif
237 // The end offset of the remembered set in a page
238 // (heaps are aligned to pointer size).
239 static const int kRSetEndOffset = kRSetOffset + kPageSize / kBitsPerPointer;
240 240
241 // The start offset of the object area in a page. 241 // The start offset of the object area in a page.
242 static const int kObjectStartOffset = MAP_POINTER_ALIGN(kPageHeaderSize); 242 // This needs to be at least (bits per uint32_t) * kBitsPerPointer,
243 // to align start of rset to a uint32_t address.
244 static const int kObjectStartOffset = 256;
245
246 // The start offset of the used part of the remembered set in a page.
247 static const int kRSetStartOffset = kRSetOffset +
248 kObjectStartOffset / kBitsPerPointer;
243 249
244 // Object area size in bytes. 250 // Object area size in bytes.
245 static const int kObjectAreaSize = kPageSize - kObjectStartOffset; 251 static const int kObjectAreaSize = kPageSize - kObjectStartOffset;
246 252
247 // Maximum object size that fits in a page. 253 // Maximum object size that fits in a page.
248 static const int kMaxHeapObjectSize = kObjectAreaSize; 254 static const int kMaxHeapObjectSize = kObjectAreaSize;
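Plugging concrete numbers into the 32-bit case above (kRSetOffset == 0 and kBitsPerPointer == 32, so one byte of remembered set covers 32 bytes of page):

  // kRSetEndOffset     = 0 + 8192 / 32 = 256, exactly where the object area begins
  // kObjectStartOffset = 256
  // kRSetStartOffset   = 0 + 256 / 32  = 8, so the first 8 RSet bytes (which
  //                      would cover the header and the RSet itself) go unused
  // kObjectAreaSize    = 8192 - 256    = 7936 bytes of each page hold objects
  static_assert(8192 / 32 == 256, "RSet ends where the object area starts");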
249 255
250 static const int kDirtyFlagOffset = 2 * kPointerSize;
251 static const int kRegionSizeLog2 = 8;
252 static const int kRegionSize = 1 << kRegionSizeLog2;
253 static const intptr_t kRegionAlignmentMask = (kRegionSize - 1);
254
255 STATIC_CHECK(kRegionSize == kPageSize / kBitsPerInt);
256
257 enum PageFlag { 256 enum PageFlag {
258 IS_NORMAL_PAGE = 1 << 0, 257 IS_NORMAL_PAGE = 1 << 0,
259 WAS_IN_USE_BEFORE_MC = 1 << 1, 258 WAS_IN_USE_BEFORE_MC = 1 << 1
260
261 // Page allocation watermark was bumped by preallocation during scavenge.
262 // Correct watermark can be retrieved by CachedAllocationWatermark() method
263 WATERMARK_INVALIDATED = 1 << 2
264 }; 259 };
265 260
266 // To avoid an additional WATERMARK_INVALIDATED flag clearing pass during
267 // scavenge we just invalidate the watermark on each old space page after
268 // processing it. And then we flip the meaning of the WATERMARK_INVALIDATED
269 // flag at the beginning of the next scavenge and each page becomes marked as
270 // having a valid watermark.
271 //
272 // The following invariant must hold for pages in old pointer and map spaces:
273 // If page is in use then page is marked as having invalid watermark at
274 // the beginning and at the end of any GC.
275 //
276 // This invariant guarantees that after flipping flag meaning at the
277 // beginning of scavenge all pages in use will be marked as having valid
278 // watermark.
279 static inline void FlipMeaningOfInvalidatedWatermarkFlag();
280
281 // Returns true if the page allocation watermark was not altered during
282 // scavenge.
283 inline bool IsWatermarkValid();
284
285 inline void InvalidateWatermark(bool value);
286
287 inline bool GetPageFlag(PageFlag flag); 261 inline bool GetPageFlag(PageFlag flag);
288 inline void SetPageFlag(PageFlag flag, bool value); 262 inline void SetPageFlag(PageFlag flag, bool value);
289 inline void ClearPageFlags();
290
291 static const int kAllocationWatermarkOffsetShift = 3;
292 static const int kAllocationWatermarkOffsetBits = kPageSizeBits + 1;
293 static const uint32_t kAllocationWatermarkOffsetMask =
294 ((1 << kAllocationWatermarkOffsetBits) - 1) <<
295 kAllocationWatermarkOffsetShift;
296
297 static const uint32_t kFlagsMask =
298 ((1 << kAllocationWatermarkOffsetShift) - 1);
299
300 STATIC_CHECK(kBitsPerInt - kAllocationWatermarkOffsetShift >=
301 kAllocationWatermarkOffsetBits);
302
303 // This field contains the meaning of the WATERMARK_INVALIDATED flag.
304 // Instead of clearing this flag from all pages we just flip
305 // its meaning at the beginning of a scavenge.
306 static intptr_t watermark_invalidated_mark_;
307 263
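GetPageFlag and SetPageFlag are declared in both columns; their bodies are presumably plain bit tests on the flags word (a sketch under that assumption only, with names suffixed Sketch to avoid implying these are V8's definitions):

  inline bool GetPageFlagSketch(intptr_t flags, int flag) {
    return (flags & flag) != 0;
  }
  inline intptr_t SetPageFlagSketch(intptr_t flags, int flag, bool value) {
    return value ? (flags | flag) : (flags & ~static_cast<intptr_t>(flag));
  }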
308 //--------------------------------------------------------------------------- 264 //---------------------------------------------------------------------------
309 // Page header description. 265 // Page header description.
310 // 266 //
311 // If a page is not in the large object space, the first word, 267 // If a page is not in the large object space, the first word,
312 // opaque_header, encodes the next page address (aligned to kPageSize 8K) 268 // opaque_header, encodes the next page address (aligned to kPageSize 8K)
313 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use 269 // and the chunk number (0 ~ 8K-1). Only MemoryAllocator should use
314 // opaque_header. The value range of the opaque_header is [0..kPageSize[, 270 // opaque_header. The value range of the opaque_header is [0..kPageSize[,
315 // or [next_page_start, next_page_end[. It cannot point to a valid address 271 // or [next_page_start, next_page_end[. It cannot point to a valid address
316 // in the current page. If a page is in the large object space, the first 272 // in the current page. If a page is in the large object space, the first
317 // word *may* (if the page start and large object chunk start are the 273 // word *may* (if the page start and large object chunk start are the
318 // same) contain the address of the next large object chunk. 274 // same) contain the address of the next large object chunk.
319 intptr_t opaque_header; 275 intptr_t opaque_header;
320 276
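Because the next page address is 8K-aligned, its low 13 bits are zero and can carry the chunk number, which is how one word serves both purposes. An illustrative encoding (names hypothetical; only MemoryAllocator touches the real field):

  #include <cstdint>

  inline intptr_t PackOpaqueHeader(uintptr_t next_page_addr, int chunk_number) {
    // next_page_addr is 8K-aligned; chunk_number fits in the low 13 bits.
    return static_cast<intptr_t>(next_page_addr |
                                 static_cast<uintptr_t>(chunk_number));
  }
  inline uintptr_t NextPageAddress(intptr_t header) {
    return static_cast<uintptr_t>(header) & ~((uintptr_t(1) << 13) - 1);
  }
  inline int ChunkNumber(intptr_t header) {
    return static_cast<int>(header & ((intptr_t(1) << 13) - 1));
  }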
321 // If the page is not in the large object space, the low-order bit of the 277 // If the page is not in the large object space, the low-order bit of the
322 // second word is set. If the page is in the large object space, the 278 // second word is set. If the page is in the large object space, the
323 // second word *may* (if the page start and large object chunk start are 279 // second word *may* (if the page start and large object chunk start are
324 // the same) contain the large object chunk size. In either case, the 280 // the same) contain the large object chunk size. In either case, the
325 // low-order bit for large object pages will be cleared. 281 // low-order bit for large object pages will be cleared.
326 // For normal pages this word is used to store page flags and 282 // For normal pages this word is used to store various page flags.
327 // offset of allocation top. 283 int flags;
328 intptr_t flags_;
329 284
330 // This field contains dirty marks for regions covering the page. Only dirty 285 // The following fields may overlap with remembered set, they can only
331 // regions might contain intergenerational references. 286 // be used in the mark-compact collector when remembered set is not
332 // Only 32 dirty marks are supported so for large object pages several regions 287 // used.
333 // might be mapped to a single dirty mark.
334 uint32_t dirty_regions_;
335 288
336 // The index of the page in its owner space. 289 // The index of the page in its owner space.
337 int mc_page_index; 290 int mc_page_index;
338 291
339 // During mark-compact collections this field contains the forwarding address 292 // The allocation pointer after relocating objects to this page.
340 // of the first live object in this page. 293 Address mc_relocation_top;
341 // During scavenge collection this field is used to store allocation watermark 294
342 // if it is altered during scavenge. 295 // The forwarding address of the first live object in this page.
343 Address mc_first_forwarded; 296 Address mc_first_forwarded;
297
298 #ifdef DEBUG
299 private:
300 static RSetState rset_state_; // state of the remembered set
301 #endif
344 }; 302 };
345 303
346 304
347 // ---------------------------------------------------------------------------- 305 // ----------------------------------------------------------------------------
348 // Space is the abstract superclass for all allocation spaces. 306 // Space is the abstract superclass for all allocation spaces.
349 class Space : public Malloced { 307 class Space : public Malloced {
350 public: 308 public:
351 Space(AllocationSpace id, Executability executable) 309 Space(AllocationSpace id, Executability executable)
352 : id_(id), executable_(executable) {} 310 : id_(id), executable_(executable) {}
353 311
(...skipping 602 matching lines...)
956 914
957 // Given an address occupied by a live object, return that object if it is 915 // Given an address occupied by a live object, return that object if it is
958 // in this space, or Failure::Exception() if it is not. The implementation 916 // in this space, or Failure::Exception() if it is not. The implementation
959 // iterates over objects in the page containing the address, the cost is 917 // iterates over objects in the page containing the address, the cost is
960 // linear in the number of objects in the page. It may be slow. 918 // linear in the number of objects in the page. It may be slow.
961 Object* FindObject(Address addr); 919 Object* FindObject(Address addr);
962 920
963 // Checks whether page is currently in use by this space. 921 // Checks whether page is currently in use by this space.
964 bool IsUsed(Page* page); 922 bool IsUsed(Page* page);
965 923
966 void MarkAllPagesClean(); 924 // Clears remembered sets of pages in this space.
925 void ClearRSet();
967 926
968 // Prepares for a mark-compact GC. 927 // Prepares for a mark-compact GC.
969 virtual void PrepareForMarkCompact(bool will_compact); 928 virtual void PrepareForMarkCompact(bool will_compact);
970 929
971 // The top of allocation in a page in this space. Undefined if page is unused. 930 // The top of allocation in a page in this space. Undefined if page is unused.
972 Address PageAllocationTop(Page* page) { 931 Address PageAllocationTop(Page* page) {
973 return page == TopPageOf(allocation_info_) ? top() 932 return page == TopPageOf(allocation_info_) ? top()
974 : PageAllocationLimit(page); 933 : PageAllocationLimit(page);
975 } 934 }
976 935
977 // The limit of allocation for a page in this space. 936 // The limit of allocation for a page in this space.
978 virtual Address PageAllocationLimit(Page* page) = 0; 937 virtual Address PageAllocationLimit(Page* page) = 0;
979 938
980 void FlushTopPageWatermark() {
981 AllocationTopPage()->SetCachedAllocationWatermark(top());
982 AllocationTopPage()->InvalidateWatermark(true);
983 }
984
985 // Current capacity without growing (Size() + Available() + Waste()). 939 // Current capacity without growing (Size() + Available() + Waste()).
986 int Capacity() { return accounting_stats_.Capacity(); } 940 int Capacity() { return accounting_stats_.Capacity(); }
987 941
988 // Total amount of memory committed for this space. For paged 942 // Total amount of memory committed for this space. For paged
989 // spaces this equals the capacity. 943 // spaces this equals the capacity.
990 int CommittedMemory() { return Capacity(); } 944 int CommittedMemory() { return Capacity(); }
991 945
992 // Available bytes without growing. 946 // Available bytes without growing.
993 int Available() { return accounting_stats_.Available(); } 947 int Available() { return accounting_stats_.Available(); }
994 948
(...skipping 34 matching lines...)
1029 } 983 }
1030 984
1031 // --------------------------------------------------------------------------- 985 // ---------------------------------------------------------------------------
1032 // Mark-compact collection support functions 986 // Mark-compact collection support functions
1033 987
1034 // Set the relocation point to the beginning of the space. 988 // Set the relocation point to the beginning of the space.
1035 void MCResetRelocationInfo(); 989 void MCResetRelocationInfo();
1036 990
1037 // Writes relocation info to the top page. 991 // Writes relocation info to the top page.
1038 void MCWriteRelocationInfoToPage() { 992 void MCWriteRelocationInfoToPage() {
1039 TopPageOf(mc_forwarding_info_)-> 993 TopPageOf(mc_forwarding_info_)->mc_relocation_top = mc_forwarding_info_.top;
1040 SetAllocationWatermark(mc_forwarding_info_.top);
1041 } 994 }
1042 995
1043 // Computes the offset of a given address in this space to the beginning 996 // Computes the offset of a given address in this space to the beginning
1044 // of the space. 997 // of the space.
1045 int MCSpaceOffsetForAddress(Address addr); 998 int MCSpaceOffsetForAddress(Address addr);
1046 999
1047 // Updates the allocation pointer to the relocation top after a mark-compact 1000 // Updates the allocation pointer to the relocation top after a mark-compact
1048 // collection. 1001 // collection.
1049 virtual void MCCommitRelocationInfo() = 0; 1002 virtual void MCCommitRelocationInfo() = 0;
1050 1003
(...skipping 97 matching lines...)
1148 1101
1149 // Slow path of AllocateRaw. This function is space-dependent. 1102 // Slow path of AllocateRaw. This function is space-dependent.
1150 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0; 1103 virtual HeapObject* SlowAllocateRaw(int size_in_bytes) = 0;
1151 1104
1152 // Slow path of MCAllocateRaw. 1105 // Slow path of MCAllocateRaw.
1153 HeapObject* SlowMCAllocateRaw(int size_in_bytes); 1106 HeapObject* SlowMCAllocateRaw(int size_in_bytes);
1154 1107
1155 #ifdef DEBUG 1108 #ifdef DEBUG
1156 // Returns the number of total pages in this space. 1109 // Returns the number of total pages in this space.
1157 int CountTotalPages(); 1110 int CountTotalPages();
1111
1112 void DoPrintRSet(const char* space_name);
1158 #endif 1113 #endif
1159 private: 1114 private:
1160 1115
1161 // Returns a pointer to the page of the relocation pointer. 1116 // Returns a pointer to the page of the relocation pointer.
1162 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); } 1117 Page* MCRelocationTopPage() { return TopPageOf(mc_forwarding_info_); }
1163 1118
1164 friend class PageIterator; 1119 friend class PageIterator;
1165 }; 1120 };
1166 1121
1167 1122
(...skipping 632 matching lines...)
1800 1755
1801 // Updates the allocation pointer to the relocation top after a mark-compact 1756 // Updates the allocation pointer to the relocation top after a mark-compact
1802 // collection. 1757 // collection.
1803 virtual void MCCommitRelocationInfo(); 1758 virtual void MCCommitRelocationInfo();
1804 1759
1805 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); 1760 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page);
1806 1761
1807 #ifdef DEBUG 1762 #ifdef DEBUG
1808 // Reports statistics for the space 1763 // Reports statistics for the space
1809 void ReportStatistics(); 1764 void ReportStatistics();
1765 // Dump the remembered sets in the space to stdout.
1766 void PrintRSet();
1810 #endif 1767 #endif
1811 1768
1812 protected: 1769 protected:
1813 // Virtual function in the superclass. Slow path of AllocateRaw. 1770 // Virtual function in the superclass. Slow path of AllocateRaw.
1814 HeapObject* SlowAllocateRaw(int size_in_bytes); 1771 HeapObject* SlowAllocateRaw(int size_in_bytes);
1815 1772
1816 // Virtual function in the superclass. Allocate linearly at the start of 1773 // Virtual function in the superclass. Allocate linearly at the start of
1817 // the page after current_page (there is assumed to be one). 1774 // the page after current_page (there is assumed to be one).
1818 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); 1775 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes);
1819 1776
(...skipping 44 matching lines...)
1864 1821
1865 // Updates the allocation pointer to the relocation top after a mark-compact 1822 // Updates the allocation pointer to the relocation top after a mark-compact
1866 // collection. 1823 // collection.
1867 virtual void MCCommitRelocationInfo(); 1824 virtual void MCCommitRelocationInfo();
1868 1825
1869 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page); 1826 virtual void PutRestOfCurrentPageOnFreeList(Page* current_page);
1870 1827
1871 #ifdef DEBUG 1828 #ifdef DEBUG
1872 // Reports statistic info of the space 1829 // Reports statistic info of the space
1873 void ReportStatistics(); 1830 void ReportStatistics();
1831
1832 // Dump the remembered sets in the space to stdout.
1833 void PrintRSet();
1874 #endif 1834 #endif
1875 1835
1876 protected: 1836 protected:
1877 // Virtual function in the superclass. Slow path of AllocateRaw. 1837 // Virtual function in the superclass. Slow path of AllocateRaw.
1878 HeapObject* SlowAllocateRaw(int size_in_bytes); 1838 HeapObject* SlowAllocateRaw(int size_in_bytes);
1879 1839
1880 // Virtual function in the superclass. Allocate linearly at the start of 1840 // Virtual function in the superclass. Allocate linearly at the start of
1881 // the page after current_page (there is assumed to be one). 1841 // the page after current_page (there is assumed to be one).
1882 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes); 1842 HeapObject* AllocateInNextPage(Page* current_page, int size_in_bytes);
1883 1843
(...skipping 48 matching lines...)
1932 return !MapPointersEncodable() && live_maps <= CompactionThreshold(); 1892 return !MapPointersEncodable() && live_maps <= CompactionThreshold();
1933 } 1893 }
1934 1894
1935 Address TopAfterCompaction(int live_maps) { 1895 Address TopAfterCompaction(int live_maps) {
1936 ASSERT(NeedsCompaction(live_maps)); 1896 ASSERT(NeedsCompaction(live_maps));
1937 1897
1938 int pages_left = live_maps / kMapsPerPage; 1898 int pages_left = live_maps / kMapsPerPage;
1939 PageIterator it(this, PageIterator::ALL_PAGES); 1899 PageIterator it(this, PageIterator::ALL_PAGES);
1940 while (pages_left-- > 0) { 1900 while (pages_left-- > 0) {
1941 ASSERT(it.has_next()); 1901 ASSERT(it.has_next());
1942 it.next()->SetRegionMarks(Page::kAllRegionsCleanMarks); 1902 it.next()->ClearRSet();
1943 } 1903 }
1944 ASSERT(it.has_next()); 1904 ASSERT(it.has_next());
1945 Page* top_page = it.next(); 1905 Page* top_page = it.next();
1946 top_page->SetRegionMarks(Page::kAllRegionsCleanMarks); 1906 top_page->ClearRSet();
1947 ASSERT(top_page->is_valid()); 1907 ASSERT(top_page->is_valid());
1948 1908
1949 int offset = live_maps % kMapsPerPage * Map::kSize; 1909 int offset = live_maps % kMapsPerPage * Map::kSize;
1950 Address top = top_page->ObjectAreaStart() + offset; 1910 Address top = top_page->ObjectAreaStart() + offset;
1951 ASSERT(top < top_page->ObjectAreaEnd()); 1911 ASSERT(top < top_page->ObjectAreaEnd());
1952 ASSERT(Contains(top)); 1912 ASSERT(Contains(top));
1953 1913
1954 return top; 1914 return top;
1955 } 1915 }
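A worked instance of the arithmetic in TopAfterCompaction, with illustrative values (neither kMapsPerPage nor Map::kSize is defined in this excerpt):

  // Suppose kMapsPerPage == 89, Map::kSize == 88, live_maps == 200:
  //   pages_left = 200 / 89      = 2   -> two full pages of maps are kept
  //   offset     = 200 % 89 * 88 = 22 * 88 = 1936
  // so the returned top is 1936 bytes past ObjectAreaStart() of the third page.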
1956 1916
(...skipping 70 matching lines...)
2027 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset). 1987 // extra padding bytes (Page::kPageSize + Page::kObjectStartOffset).
2028 // A large object always starts at Page::kObjectStartOffset into a page. 1988 // A large object always starts at Page::kObjectStartOffset into a page.
2029 // Large objects do not move during garbage collections. 1989 // Large objects do not move during garbage collections.
2030 1990
2031 // A LargeObjectChunk holds exactly one large object page with exactly one 1991 // A LargeObjectChunk holds exactly one large object page with exactly one
2032 // large object. 1992 // large object.
2033 class LargeObjectChunk { 1993 class LargeObjectChunk {
2034 public: 1994 public:
2035 // Allocates a new LargeObjectChunk that contains a large object page 1995 // Allocates a new LargeObjectChunk that contains a large object page
2036 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large 1996 // (Page::kPageSize aligned) that has at least size_in_bytes (for a large
2037 // object) bytes after the object area start of that page. 1997 // object and possibly extra remembered set words) bytes after the object
2038 // The allocated chunk size is set in the output parameter chunk_size. 1998 // area start of that page. The allocated chunk size is set in the output
1999 // parameter chunk_size.
2039 static LargeObjectChunk* New(int size_in_bytes, 2000 static LargeObjectChunk* New(int size_in_bytes,
2040 size_t* chunk_size, 2001 size_t* chunk_size,
2041 Executability executable); 2002 Executability executable);
2042 2003
2043 // Interpret a raw address as a large object chunk. 2004 // Interpret a raw address as a large object chunk.
2044 static LargeObjectChunk* FromAddress(Address address) { 2005 static LargeObjectChunk* FromAddress(Address address) {
2045 return reinterpret_cast<LargeObjectChunk*>(address); 2006 return reinterpret_cast<LargeObjectChunk*>(address);
2046 } 2007 }
2047 2008
2048 // Returns the address of this chunk. 2009 // Returns the address of this chunk.
2049 Address address() { return reinterpret_cast<Address>(this); } 2010 Address address() { return reinterpret_cast<Address>(this); }
2050 2011
2051 // Accessors for the fields of the chunk. 2012 // Accessors for the fields of the chunk.
2052 LargeObjectChunk* next() { return next_; } 2013 LargeObjectChunk* next() { return next_; }
2053 void set_next(LargeObjectChunk* chunk) { next_ = chunk; } 2014 void set_next(LargeObjectChunk* chunk) { next_ = chunk; }
2054 2015
2055 size_t size() { return size_; } 2016 size_t size() { return size_; }
2056 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; } 2017 void set_size(size_t size_in_bytes) { size_ = size_in_bytes; }
2057 2018
2058 // Returns the object in this chunk. 2019 // Returns the object in this chunk.
2059 inline HeapObject* GetObject(); 2020 inline HeapObject* GetObject();
2060 2021
2061 // Given a requested size returns the physical size of a chunk to be 2022 // Given a requested size (including any extra remembered set words),
2062 // allocated. 2023 // returns the physical size of a chunk to be allocated.
2063 static int ChunkSizeFor(int size_in_bytes); 2024 static int ChunkSizeFor(int size_in_bytes);
2064 2025
2065 // Given a chunk size, returns the object size it can accommodate. Used by 2026 // Given a chunk size, returns the object size it can accommodate (not
2066 // LargeObjectSpace::Available. 2027 // including any extra remembered set words). Used by
2028 // LargeObjectSpace::Available. Note that this can overestimate the size
2029 // of an object that will fit in a chunk---if the object requires extra
2030 // remembered set words (e.g., for large fixed arrays), the actual object
2031 // size for the chunk will be smaller than reported by this function.
2067 static int ObjectSizeFor(int chunk_size) { 2032 static int ObjectSizeFor(int chunk_size) {
2068 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0; 2033 if (chunk_size <= (Page::kPageSize + Page::kObjectStartOffset)) return 0;
2069 return chunk_size - Page::kPageSize - Page::kObjectStartOffset; 2034 return chunk_size - Page::kPageSize - Page::kObjectStartOffset;
2070 } 2035 }
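For example, on a 32-bit build where Page::kPageSize == 8192 and Page::kObjectStartOffset == 256:

  // ObjectSizeFor(65536)     == 65536 - 8192 - 256 == 57088
  // ObjectSizeFor(8192 + 256) == 0   (a minimal chunk leaves no object room)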
2071 2036
2072 private: 2037 private:
2073 // A pointer to the next large object chunk in the space or NULL. 2038 // A pointer to the next large object chunk in the space or NULL.
2074 LargeObjectChunk* next_; 2039 LargeObjectChunk* next_;
2075 2040
2076 // The size of this chunk. 2041 // The size of this chunk.
(...skipping 15 matching lines...)
2092 // Releases internal resources, frees objects in this space. 2057 // Releases internal resources, frees objects in this space.
2093 void TearDown(); 2058 void TearDown();
2094 2059
2095 // Allocates a (non-FixedArray, non-Code) large object. 2060 // Allocates a (non-FixedArray, non-Code) large object.
2096 Object* AllocateRaw(int size_in_bytes); 2061 Object* AllocateRaw(int size_in_bytes);
2097 // Allocates a large Code object. 2062 // Allocates a large Code object.
2098 Object* AllocateRawCode(int size_in_bytes); 2063 Object* AllocateRawCode(int size_in_bytes);
2099 // Allocates a large FixedArray. 2064 // Allocates a large FixedArray.
2100 Object* AllocateRawFixedArray(int size_in_bytes); 2065 Object* AllocateRawFixedArray(int size_in_bytes);
2101 2066
2102 // Available bytes for objects in this space. 2067 // Available bytes for objects in this space, not including any extra
2068 // remembered set words.
2103 int Available() { 2069 int Available() {
2104 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available()); 2070 return LargeObjectChunk::ObjectSizeFor(MemoryAllocator::Available());
2105 } 2071 }
2106 2072
2107 virtual int Size() { 2073 virtual int Size() {
2108 return size_; 2074 return size_;
2109 } 2075 }
2110 2076
2111 int PageCount() { 2077 int PageCount() {
2112 return page_count_; 2078 return page_count_;
2113 } 2079 }
2114 2080
2115 // Finds an object for a given address, returns Failure::Exception() 2081 // Finds an object for a given address, returns Failure::Exception()
2116 // if it is not found. The function iterates through all objects in this 2082 // if it is not found. The function iterates through all objects in this
2117 // space, may be slow. 2083 // space, may be slow.
2118 Object* FindObject(Address a); 2084 Object* FindObject(Address a);
2119 2085
2120 // Iterates objects covered by dirty regions. 2086 // Clears remembered sets.
2121 void IterateDirtyRegions(ObjectSlotCallback func); 2087 void ClearRSet();
2088
2089 // Iterates objects whose remembered set bits are set.
2090 void IterateRSet(ObjectSlotCallback func);
2122 2091
2123 // Frees unmarked objects. 2092 // Frees unmarked objects.
2124 void FreeUnmarkedObjects(); 2093 void FreeUnmarkedObjects();
2125 2094
2126 // Checks whether a heap object is in this space; O(1). 2095 // Checks whether a heap object is in this space; O(1).
2127 bool Contains(HeapObject* obj); 2096 bool Contains(HeapObject* obj);
2128 2097
2129 // Checks whether the space is empty. 2098 // Checks whether the space is empty.
2130 bool IsEmpty() { return first_chunk_ == NULL; } 2099 bool IsEmpty() { return first_chunk_ == NULL; }
2131 2100
2132 // See the comments for ReserveSpace in the Space class. This has to be 2101 // See the comments for ReserveSpace in the Space class. This has to be
2133 // called after ReserveSpace has been called on the paged spaces, since they 2102 // called after ReserveSpace has been called on the paged spaces, since they
2134 // may use some memory, leaving less for large objects. 2103 // may use some memory, leaving less for large objects.
2135 virtual bool ReserveSpace(int bytes); 2104 virtual bool ReserveSpace(int bytes);
2136 2105
2137 #ifdef ENABLE_HEAP_PROTECTION 2106 #ifdef ENABLE_HEAP_PROTECTION
2138 // Protect/unprotect the space by marking it read-only/writable. 2107 // Protect/unprotect the space by marking it read-only/writable.
2139 void Protect(); 2108 void Protect();
2140 void Unprotect(); 2109 void Unprotect();
2141 #endif 2110 #endif
2142 2111
2143 #ifdef DEBUG 2112 #ifdef DEBUG
2144 virtual void Verify(); 2113 virtual void Verify();
2145 virtual void Print(); 2114 virtual void Print();
2146 void ReportStatistics(); 2115 void ReportStatistics();
2147 void CollectCodeStatistics(); 2116 void CollectCodeStatistics();
2117 // Dump the remembered sets in the space to stdout.
2118 void PrintRSet();
2148 #endif 2119 #endif
2149 // Checks whether an address is in the object area in this space. It 2120 // Checks whether an address is in the object area in this space. It
2150 // iterates all objects in the space. May be slow. 2121 // iterates all objects in the space. May be slow.
2151 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); } 2122 bool SlowContains(Address addr) { return !FindObject(addr)->IsFailure(); }
2152 2123
2153 private: 2124 private:
2154 // The head of the linked list of large object chunks. 2125 // The head of the linked list of large object chunks.
2155 LargeObjectChunk* first_chunk_; 2126 LargeObjectChunk* first_chunk_;
2156 int size_; // allocated bytes 2127 int size_; // allocated bytes
2157 int page_count_; // number of chunks 2128 int page_count_; // number of chunks
2158 2129
2159 2130
2160 // Shared implementation of AllocateRaw, AllocateRawCode and 2131 // Shared implementation of AllocateRaw, AllocateRawCode and
2161 // AllocateRawFixedArray. 2132 // AllocateRawFixedArray.
2162 Object* AllocateRawInternal(int requested_size, 2133 Object* AllocateRawInternal(int requested_size,
2163 int object_size, 2134 int object_size,
2164 Executability executable); 2135 Executability executable);
2165 2136
2137 // Returns the number of extra bytes (rounded up to the nearest full word)
2138 // required for extra_object_bytes of extra pointers (in bytes).
2139 static inline int ExtraRSetBytesFor(int extra_object_bytes);
2140
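The excerpt only declares ExtraRSetBytesFor; a plausible shape, stated as an assumption rather than as the actual implementation, is one remembered-set bit per pointer-sized slot, rounded up to a whole word:

  // Assumption: one bit per 4-byte slot, rounded up to a full 32-bit word.
  inline int ExtraRSetBytesForSketch(int extra_object_bytes) {
    int bits = extra_object_bytes / 4;   // pointer slots needing a bit
    int words = (bits + 31) / 32;        // round up to whole 32-bit words
    return words * 4;                    // back to bytes
  }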
2166 friend class LargeObjectIterator; 2141 friend class LargeObjectIterator;
2167 2142
2168 public: 2143 public:
2169 TRACK_MEMORY("LargeObjectSpace") 2144 TRACK_MEMORY("LargeObjectSpace")
2170 }; 2145 };
2171 2146
2172 2147
2173 class LargeObjectIterator: public ObjectIterator { 2148 class LargeObjectIterator: public ObjectIterator {
2174 public: 2149 public:
2175 explicit LargeObjectIterator(LargeObjectSpace* space); 2150 explicit LargeObjectIterator(LargeObjectSpace* space);
2176 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func); 2151 LargeObjectIterator(LargeObjectSpace* space, HeapObjectCallback size_func);
2177 2152
2178 HeapObject* next(); 2153 HeapObject* next();
2179 2154
2180 // implementation of ObjectIterator. 2155 // implementation of ObjectIterator.
2181 virtual HeapObject* next_object() { return next(); } 2156 virtual HeapObject* next_object() { return next(); }
2182 2157
2183 private: 2158 private:
2184 LargeObjectChunk* current_; 2159 LargeObjectChunk* current_;
2185 HeapObjectCallback size_func_; 2160 HeapObjectCallback size_func_;
2186 }; 2161 };
2187 2162
2188 2163
2189 } } // namespace v8::internal 2164 } } // namespace v8::internal
2190 2165
2191 #endif // V8_SPACES_H_ 2166 #endif // V8_SPACES_H_