Chromium Code Reviews

Unified Diff: src/spaces.h

Issue 6250076: Start using store buffers. Handle store buffer overflow situation.... (Closed) Base URL: http://v8.googlecode.com/svn/branches/experimental/gc/
Patch Set: Created 9 years, 10 months ago
 // Copyright 2006-2010 the V8 project authors. All rights reserved.
 // Redistribution and use in source and binary forms, with or without
 // modification, are permitted provided that the following conditions are
 // met:
 //
 //     * Redistributions of source code must retain the above copyright
 //       notice, this list of conditions and the following disclaimer.
 //     * Redistributions in binary form must reproduce the above
 //       copyright notice, this list of conditions and the following
 //       disclaimer in the documentation and/or other materials provided
(...skipping 28 matching lines...)
 //
 // A JS heap consists of a young generation, an old generation, and a large
 // object space. The young generation is divided into two semispaces. A
 // scavenger implements Cheney's copying algorithm. The old generation is
 // separated into a map space and an old object space. The map space contains
 // all (and only) map objects, the rest of old objects go into the old space.
 // The old generation is collected by a mark-sweep-compact collector.
 //
 // The semispaces of the young generation are contiguous. The old and map
 // spaces consists of a list of pages. A page has a page header and an object
-// area. A page size is deliberately chosen as 8K bytes.
-// The first word of a page is an opaque page header that has the
-// address of the next page and its ownership information. The second word may
-// have the allocation top address of this page. Heap objects are aligned to the
-// pointer size.
+// area.
 //
 // There is a separate large object space for objects larger than
 // Page::kMaxHeapObjectSize, so that they do not have to move during
 // collection. The large object space is paged. Pages in large object space
-// may be larger than 8K.
+// may be larger than the page size.
 //
-// A card marking write barrier is used to keep track of intergenerational
-// references. Old space pages are divided into regions of Page::kRegionSize
-// size. Each region has a corresponding dirty bit in the page header which is
-// set if the region might contain pointers to new space. For details about
-// dirty bits encoding see comments in the Page::GetRegionNumberForAddress()
-// method body.
+// A store-buffer based write barrier is used to keep track of intergenerational
+// references. See store-buffer.h.
 //
-// During scavenges and mark-sweep collections we iterate intergenerational
-// pointers without decoding heap object maps so if the page belongs to old
-// pointer space or large object space it is essential to guarantee that
-// the page does not contain any garbage pointers to new space: every pointer
-// aligned word which satisfies the Heap::InNewSpace() predicate must be a
-// pointer to a live heap object in new space. Thus objects in old pointer
-// and large object spaces should have a special layout (e.g. no bare integer
-// fields). This requirement does not apply to map space which is iterated in
-// a special fashion. However we still require pointer fields of dead maps to
-// be cleaned.
+// During scavenges and mark-sweep collections we sometimes (after a store
+// buffer overflow) iterate intergenerational pointers without decoding heap
+// object maps so if the page belongs to old pointer space or large object
+// space it is essential to guarantee that the page does not contain any
+// garbage pointers to new space: every pointer aligned word which satisfies
+// the Heap::InNewSpace() predicate must be a pointer to a live heap object in
+// new space. Thus objects in old pointer and large object spaces should have a
+// special layout (e.g. no bare integer fields). This requirement does not
+// apply to map space which is iterated in a special fashion. However we still
+// require pointer fields of dead maps to be cleaned.
 //
 // To enable lazy cleaning of old space pages we use a notion of allocation
 // watermark. Every pointer under watermark is considered to be well formed.
 // Page allocation watermark is not necessarily equal to page allocation top but
 // all alive objects on page should reside under allocation watermark.
 // During scavenge allocation watermark might be bumped and invalid pointers
 // might appear below it. To avoid following them we store a valid watermark
 // into special field in the page header and set a page WATERMARK_INVALIDATED
 // flag. For details see comments in the Page::SetAllocationWatermark() method
 // body.
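
The comment hunk above is the core of this change: per-region dirty bits give way to a store buffer, a log of slot addresses appended to by the write barrier, and whole-page iteration survives only as the fallback after the buffer overflows. Below is a minimal sketch of such a barrier; the names StoreBuffer, Mark, kCapacity and the overflow handling are illustrative assumptions, not the actual declarations in store-buffer.h on this branch.

    #include <cstddef>
    #include <cstdint>

    typedef uintptr_t Address;  // Stand-in for v8::internal::Address.

    // Illustrative store buffer: the write barrier logs every slot that may
    // now hold a new-space pointer; the collector later drains the log
    // instead of scanning dirty regions.
    class StoreBuffer {
     public:
      StoreBuffer() : count_(0) {}

      // Called by the write barrier after a pointer store into old space.
      void Mark(Address slot) {
        slots_[count_++] = slot;
        if (count_ == kCapacity) HandleOverflow();
      }

     private:
      static const size_t kCapacity = 1024;  // Assumed size, for illustration.

      void HandleOverflow() {
        // After an overflow the log is no longer an exhaustive record, so the
        // collector must fall back to iterating old-space pages for
        // new-space pointers (the situation the header comment describes).
        count_ = 0;
      }

      Address slots_[kCapacity];
      size_t count_;
    };

Logging slots makes the common case of few old-to-new stores cheap; the price is the overflow path, which is exactly what this CL's title addresses.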
(...skipping 404 matching lines...)
     return offset;
   }

   // Returns the address for a given offset to the this page.
   Address OffsetToAddress(int offset) {
     ASSERT_PAGE_OFFSET(offset);
     return address() + offset;
   }

   // ---------------------------------------------------------------------
-  // Card marking support
-
-  static const uint32_t kAllRegionsCleanMarks = 0x0;
-  static const uint32_t kAllRegionsDirtyMarks = 0xFFFFFFFF;
-
-  inline uint32_t GetRegionMarks();
-  inline void SetRegionMarks(uint32_t dirty);
-
-  inline uint32_t GetRegionMaskForAddress(Address addr);
-  inline uint32_t GetRegionMaskForSpan(Address start, int length_in_bytes);
-  inline int GetRegionNumberForAddress(Address addr);
-
-  inline void MarkRegionDirty(Address addr);
-  inline bool IsRegionDirty(Address addr);
-
-  inline void ClearRegionMarks(Address start,
-                               Address end,
-                               bool reaches_limit);

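For readers comparing the two schemes, the region API deleted above boiled down to simple address arithmetic. The following is a hedged reconstruction from the constants this patch also deletes further down (kRegionSizeLog2 == 8 plus STATIC_CHECK(kRegionSize == kPageSize / kBitsPerInt), which together imply 8K pages with 32 regions per page); it is not the removed method bodies themselves.

    #include <cstdint>

    typedef uintptr_t Address;  // Stand-in for v8::internal::Address.

    const int kPageSizeBits = 13;  // 8K pages, implied by the STATIC_CHECK.
    const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1;
    const int kRegionSizeLog2 = 8;  // 256-byte regions, 32 per 8K page.

    // Region index of an address: take the offset within the page, then one
    // region per 2^kRegionSizeLog2 bytes.
    int GetRegionNumberForAddress(Address addr) {
      return static_cast<int>((addr & kPageAlignmentMask) >> kRegionSizeLog2);
    }

    // Single-bit mask for that region within the page's 32-bit dirty word.
    uint32_t GetRegionMaskForAddress(Address addr) {
      return 1u << GetRegionNumberForAddress(addr);
    }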
   // Page size in bytes. This must be a multiple of the OS page size.
   static const int kPageSize = 1 << kPageSizeBits;

   // Page size mask.
   static const intptr_t kPageAlignmentMask = (1 << kPageSizeBits) - 1;

   // The start offset of the object area in a page. Aligned to both maps and
   // code alignment to be suitable for both.
   static const int kObjectStartOffset = kBodyOffset;

   // Object area size in bytes.
   static const int kObjectAreaSize = kPageSize - kObjectStartOffset;

   // Maximum object size that fits in a page.
   static const int kMaxHeapObjectSize = kObjectAreaSize;

   static const int kFirstUsedCell =
       (kBodyOffset/kPointerSize) >> MarkbitsBitmap::kBitsPerCellLog2;

   static const int kLastUsedCell =
       ((kPageSize - kPointerSize)/kPointerSize) >>
       MarkbitsBitmap::kBitsPerCellLog2;


-#ifdef ENABLE_CARDMARKING_WRITE_BARRIER
-  static const int kDirtyFlagOffset = 2 * kPointerSize;
-  static const int kRegionSizeLog2 = 8;
-  static const int kRegionSize = 1 << kRegionSizeLog2;
-  static const intptr_t kRegionAlignmentMask = (kRegionSize - 1);
-
-  STATIC_CHECK(kRegionSize == kPageSize / kBitsPerInt);
-#endif
-
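The mark-bitmap cell constants a few lines up are pure arithmetic: one mark bit per pointer-sized word, with 2^kBitsPerCellLog2 bits per bitmap cell. A worked example under assumed values (a 32-bit build with kPointerSize == 4, 32-bit cells, and an illustrative kBodyOffset of 256; the real kBodyOffset follows from the page header layout):

    const int kPointerSize = 4;      // Assumed: 32-bit build.
    const int kBitsPerCellLog2 = 5;  // Assumed: 32 mark bits per bitmap cell.
    const int kPageSize = 1 << 13;   // 8K page, as in the old header comment.
    const int kBodyOffset = 256;     // Illustrative value only.

    // One mark bit per pointer-sized word; each 32-bit cell therefore covers
    // 32 * kPointerSize == 128 bytes of the page.
    const int kFirstUsedCell = (kBodyOffset / kPointerSize) >> kBitsPerCellLog2;
    // == (256 / 4) >> 5 == 64 >> 5 == 2
    const int kLastUsedCell =
        ((kPageSize - kPointerSize) / kPointerSize) >> kBitsPerCellLog2;
    // == (8188 / 4) >> 5 == 2047 >> 5 == 63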
   enum PageFlag {
     // Page allocation watermark was bumped by preallocation during scavenge.
     // Correct watermark can be retrieved by CachedAllocationWatermark() method
     WATERMARK_INVALIDATED = NUM_MEMORY_CHUNK_FLAGS,

     // We say that memory region [start_addr, end_addr[ is continuous if
     // and only if:
     // a) start_addr coincides with the start of a valid heap object
     // b) for any valid heap object o in this region address
     //    o->address() + o->Size() is either equal to end_addr or coincides
(...skipping 1590 matching lines...)
   static const int kMaxMapPageIndex = 1 << 16;

   // Are map pointers encodable into map word?
   bool MapPointersEncodable() {
     return false;
   }

   // Should be called after forced sweep to find out if map space needs
   // compaction.
   bool NeedsCompaction(int live_maps) {
-    return !MapPointersEncodable() && live_maps <= CompactionThreshold();
-  }
-
-  Address TopAfterCompaction(int live_maps) {
-    ASSERT(NeedsCompaction(live_maps));
-
-    int pages_left = live_maps / kMapsPerPage;
-    PageIterator it(this, PageIterator::ALL_PAGES);
-    while (pages_left-- > 0) {
-      ASSERT(it.has_next());
-      it.next()->SetRegionMarks(Page::kAllRegionsCleanMarks);
-    }
-    ASSERT(it.has_next());
-    Page* top_page = it.next();
-    top_page->SetRegionMarks(Page::kAllRegionsCleanMarks);
-    ASSERT(top_page->is_valid());
-
-    int offset = live_maps % kMapsPerPage * Map::kSize;
-    Address top = top_page->ObjectAreaStart() + offset;
-    ASSERT(top < top_page->ObjectAreaEnd());
-    ASSERT(Contains(top));
-
-    return top;
-  }
-
-  void FinishCompaction(Address new_top, int live_maps) {
-    Page* top_page = Page::FromAddress(new_top);
-    ASSERT(top_page->is_valid());
-
-    SetAllocationInfo(&allocation_info_, top_page);
-    allocation_info_.top = new_top;
-
-    int new_size = live_maps * Map::kSize;
-    accounting_stats_.DeallocateBytes(accounting_stats_.Size());
-    accounting_stats_.AllocateBytes(new_size);
-
-#ifdef DEBUG
-    if (FLAG_enable_slow_asserts) {
-      intptr_t actual_size = 0;
-      for (Page* p = first_page_; p != top_page; p = p->next_page())
-        actual_size += kMapsPerPage * Map::kSize;
-      actual_size += (new_top - top_page->ObjectAreaStart());
-      ASSERT(accounting_stats_.Size() == actual_size);
-    }
-#endif
-
-    Shrink();
-    ResetFreeList();
+    return false;  // TODO(gc): Bring back map compaction.
   }

  protected:
 #ifdef DEBUG
   virtual void VerifyObject(HeapObject* obj);
 #endif

  private:
   static const int kMapsPerPage = Page::kObjectAreaSize / Map::kSize;

(...skipping 77 matching lines...)

   // Finds an object for a given address, returns Failure::Exception()
   // if it is not found. The function iterates through all objects in this
   // space, may be slow.
   MaybeObject* FindObject(Address a);

   // Finds a large object page containing the given pc, returns NULL
   // if such a page doesn't exist.
   LargePage* FindPageContainingPc(Address pc);

-  // Iterates objects covered by dirty regions.
-  void IterateDirtyRegions(ObjectSlotCallback func);
+  // Iterates over pointers to new space.
+  void IteratePointersToNewSpace(ObjectSlotCallback func);

   // Frees unmarked objects.
   void FreeUnmarkedObjects();

   // Checks whether a heap object is in this space; O(1).
   bool Contains(HeapObject* obj);

   // Checks whether the space is empty.
   bool IsEmpty() { return first_page_ == NULL; }

(...skipping 49 matching lines...)

  private:
   LargePage* current_;
   HeapObjectCallback size_func_;
 };


 } }  // namespace v8::internal

 #endif  // V8_SPACES_H_
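
One more note on the large object space hunk above: IterateDirtyRegions becomes IteratePointersToNewSpace, so callers now name the intent (visit slots that may hold new-space pointers) rather than the mechanism (dirty regions). A hedged caller sketch, assuming ObjectSlotCallback is the era's void(*)(HeapObject**) typedef and using an illustrative ScavengeSlot callback:

    #include "spaces.h"  // This header: LargeObjectSpace, ObjectSlotCallback.

    namespace v8 {
    namespace internal {

    // Illustrative callback; a real scavenger would evacuate *slot to
    // to-space and rewrite the slot in place.
    static void ScavengeSlot(HeapObject** slot) {
      // ... copy the object pointed to by *slot, then update *slot ...
    }

    void ExampleScavenge(LargeObjectSpace* lo_space) {
      // Visit every slot in large object space that may point into new space.
      lo_space->IteratePointersToNewSpace(&ScavengeSlot);
    }

    } }  // namespace v8::internal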