Chromium Code Reviews

Side by Side Diff: test/cctest/heap/test-heap.cc

Issue 2013713003: [heap] Switch to 500k pages (Closed) Base URL: https://chromium.googlesource.com/v8/v8.git@master
Patch Set: Fix ReleaseOverReservedPages for no snapshot builds Created 4 years, 4 months ago
// Copyright 2012 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
//       notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
//       copyright notice, this list of conditions and the following
//       disclaimer in the documentation and/or other materials provided
(...skipping 2210 matching lines...)
  Address top = *top_addr;
  // Now force the remaining allocation onto the free list.
  CcTest::heap()->old_space()->EmptyAllocationInfo();
  return top;
}


// Test the case where allocation must be done from the free list, so filler
// may precede or follow the object.
TEST(TestAlignedOverAllocation) {
+  Heap* heap = CcTest::heap();
+  // Test checks for fillers before and behind objects and requires a fresh
+  // page and empty free list.
+  heap::AbandonCurrentlyFreeMemory(heap->old_space());
+  // Allocate a dummy object to properly set up the linear allocation info.
+  AllocationResult dummy =
+      heap->old_space()->AllocateRawUnaligned(kPointerSize);
+  CHECK(!dummy.IsRetry());
+  heap->CreateFillerObjectAt(
+      HeapObject::cast(dummy.ToObjectChecked())->address(), kPointerSize,
+      ClearRecordedSlots::kNo);
+
  // Double misalignment is 4 on 32-bit platforms, 0 on 64-bit ones.
  const intptr_t double_misalignment = kDoubleSize - kPointerSize;
  Address start;
  HeapObject* obj;
  HeapObject* filler1;
  HeapObject* filler2;
  if (double_misalignment) {
    start = AlignOldSpace(kDoubleAligned, 0);
    obj = OldSpaceAllocateAligned(kPointerSize, kDoubleAligned);
    // The object is aligned, and a filler object is created after.
(...skipping 1365 matching lines...)
  i::FLAG_parallel_compaction = false;
  // Concurrent sweeping adds non determinism, depending on when memory is
  // available for further reuse.
  i::FLAG_concurrent_sweeping = false;
  // Fast evacuation of pages may result in a different page count in old space.
  i::FLAG_page_promotion = false;
  CcTest::InitializeVM();
  Isolate* isolate = CcTest::i_isolate();
  Factory* factory = isolate->factory();
  Heap* heap = isolate->heap();
+
  v8::HandleScope scope(CcTest::isolate());
  static const int number_of_test_pages = 20;

  // Prepare many pages with low live-bytes count.
  PagedSpace* old_space = heap->old_space();
  const int initial_page_count = old_space->CountTotalPages();
  const int overall_page_count = number_of_test_pages + initial_page_count;
  for (int i = 0; i < number_of_test_pages; i++) {
    AlwaysAllocateScope always_allocate(isolate);
    heap::SimulateFullSpace(old_space);
    factory->NewFixedArray(1, TENURED);
  }
  CHECK_EQ(overall_page_count, old_space->CountTotalPages());

  // Triggering one GC will cause a lot of garbage to be discovered but
  // even spread across all allocated pages.
  heap->CollectAllGarbage(Heap::kFinalizeIncrementalMarkingMask,
                          "triggered for preparation");
  CHECK_GE(overall_page_count, old_space->CountTotalPages());

  // Triggering subsequent GCs should cause at least half of the pages
  // to be released to the OS after at most two cycles.
  heap->CollectAllGarbage(Heap::kFinalizeIncrementalMarkingMask,
                          "triggered by test 1");
  CHECK_GE(overall_page_count, old_space->CountTotalPages());
  heap->CollectAllGarbage(Heap::kFinalizeIncrementalMarkingMask,
                          "triggered by test 2");
  CHECK_GE(overall_page_count, old_space->CountTotalPages() * 2);

-  // Triggering a last-resort GC should cause all pages to be released to the
-  // OS so that other processes can seize the memory. If we get a failure here
-  // where there are 2 pages left instead of 1, then we should increase the
-  // size of the first page a little in SizeOfFirstPage in spaces.cc. The
-  // first page should be small in order to reduce memory used when the VM
-  // boots, but if the 20 small arrays don't fit on the first page then that's
-  // an indication that it is too small.
   heap->CollectAllAvailableGarbage("triggered really hard");
-  CHECK_EQ(initial_page_count, old_space->CountTotalPages());
+  if (isolate->snapshot_available()) {
+    // Triggering a last-resort GC should cause all pages to be released to the
+    // OS. If we get a failure here, adjust Snapshot::SizeOfSnapshot or
+    // PagedSpace::AreaSizeDuringDeserialization. The initialized heap after
+    // deserialization should be small, but still large enough to hold some
+    // small arrays.
+    CHECK_EQ(initial_page_count, old_space->CountTotalPages());
+  } else {
+    // Without a snapshot we cannot guarantee that deserialization will leave
+    // enough space for the fixed arrays to fit on the first page. However,
+    // allocating the small arrays should result in at most one more page.
+    CHECK_GE(initial_page_count + 1, old_space->CountTotalPages());
+  }
}

static int forced_gc_counter = 0;

void MockUseCounterCallback(v8::Isolate* isolate,
                            v8::Isolate::UseCounterFeature feature) {
  isolate->GetCurrentContext();
  if (feature == v8::Isolate::kForcedGC) {
    forced_gc_counter++;
  }
(...skipping 3263 matching lines...)
      chunk, chunk->area_end() - kPointerSize, chunk->area_end());
  slots[chunk->area_end() - kPointerSize] = false;
  RememberedSet<OLD_TO_NEW>::Iterate(chunk, [&slots](Address addr) {
    CHECK(slots[addr]);
    return KEEP_SLOT;
  });
}

}  // namespace internal
}  // namespace v8