Chromium Code Reviews

Unified Diff: Source/platform/heap/Heap.cpp

Issue 798293002: Oilpan: attempt first-fit freelist allocation for backing heaps. (Closed) Base URL: svn://svn.chromium.org/blink/trunk
Patch Set: Comments + tidying Created 6 years ago
Index: Source/platform/heap/Heap.cpp
diff --git a/Source/platform/heap/Heap.cpp b/Source/platform/heap/Heap.cpp
index 96a3bebd1d58c422fb5e370da5f9d2f6b4f04d60..325aef49a2638a15b2c1f641da83d75d858cedae 100644
--- a/Source/platform/heap/Heap.cpp
+++ b/Source/platform/heap/Heap.cpp
@@ -678,7 +678,7 @@ void ThreadHeap<Header>::updateRemainingAllocationSize()
}
template<typename Header>
-Address ThreadHeap<Header>::outOfLineAllocate(size_t payloadSize, size_t allocationSize, const GCInfo* gcInfo)
+Address ThreadHeap<Header>::outOfLineAllocate(size_t allocationSize, const GCInfo* gcInfo)
{
ASSERT(allocationSize > remainingAllocationSize());
if (allocationSize > blinkPageSize / 2)
@@ -687,55 +687,115 @@ Address ThreadHeap<Header>::outOfLineAllocate(size_t payloadSize, size_t allocat
updateRemainingAllocationSize();
threadState()->scheduleGCOrForceConservativeGCIfNeeded();
- setAllocationPoint(nullptr, 0);
ASSERT(allocationSize >= allocationGranularity);
- if (allocateFromFreeList(allocationSize))
- return allocate(payloadSize, gcInfo);
- if (coalesce(allocationSize) && allocateFromFreeList(allocationSize))
- return allocate(payloadSize, gcInfo);
+ Address result = allocateFromFreeList(allocationSize, gcInfo);
+ if (result)
+ return result;
+ setAllocationPoint(nullptr, 0);
+ if (coalesce(allocationSize)) {
+ result = allocateFromFreeList(allocationSize, gcInfo);
+ if (result)
+ return result;
+ }
addPageToHeap(gcInfo);
- bool success = allocateFromFreeList(allocationSize);
- RELEASE_ASSERT(success);
- return allocate(payloadSize, gcInfo);
+ result = allocateFromFreeList(allocationSize, gcInfo);
+ RELEASE_ASSERT(result);
+ return result;
+}
+
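The hunk above is the core behavioral change: the freelist is now consulted before the current allocation point is reset. A standalone sketch of the resulting slow-path ordering, with hypothetical stubs standing in for the ThreadHeap internals (illustrative only, not Blink code):

#include <cstddef>

// Standalone sketch of the slow-path ordering established above; the
// stubs are hypothetical stand-ins, not Blink's ThreadHeap internals.
using Address = char*;
static Address tryFreeList(size_t) { return nullptr; }   // freelist lookup (stub)
static bool coalesceFreeBlocks(size_t) { return false; } // merge free neighbours (stub)
static void addPage() {}                                 // grow the heap by a page (stub)
static void resetAllocationPoint() {}                    // retire the bump area (stub)

static Address slowPathAllocate(size_t size)
{
    // 1. Consult the freelist while the current bump area is still intact;
    //    a first-fit hit can be serviced without touching the bump area.
    if (Address result = tryFreeList(size))
        return result;
    // 2. Only then retire the bump area, coalesce free blocks, and retry.
    resetAllocationPoint();
    if (coalesceFreeBlocks(size)) {
        if (Address result = tryFreeList(size))
            return result;
    }
    // 3. Grow the heap; the patch RELEASE_ASSERTs that this retry succeeds.
    addPage();
    return tryFreeList(size);
}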
+static bool shouldUseFirstFitForHeap(int heapIndex)
haraken 2014/12/16 02:26:33 Isn't this "best fit", not "first fit"?
sof 2014/12/16 07:31:54 Not as I know the terms -- best fit would search t
haraken 2014/12/16 08:26:14 (I'm not intending to bike-shed terms but) as far
sof 2014/12/16 08:32:15 It performs first fit within segregated bins.
haraken 2014/12/16 08:51:19 Then our worst-fit is also working as a first-fit
sof 2014/12/16 08:54:22 It's performing first-fit within the worst bin in
haraken 2014/12/16 09:06:37 That's why I think shouldUseFirstFit doesn't captu
+{
+ // For an allocation of size N, should a heap perform first-fit
+ // matching within the sized bin that N belongs to?
+ //
+ // Theory: quickly reusing a previously freed backing store block stands
+ // a chance of maintaining cached presence of that block (maintains
+ // "locality".) This is preferable to starting to bump allocate from a
+ // new and bigger block, which is what allocateFromFreeList() does by
+ // default. Hence, the backing store heaps are considered for binned
+ // first-fit matching.
haraken 2014/12/16 02:26:33 Shall we also mention that this optimization makes
sof 2014/12/16 07:31:54 Coalescing is not the key -- as I pointed out yest
haraken 2014/12/16 08:26:14 Hmm. Then probably the situation is like this? Fa
sof 2014/12/16 08:36:18 As I said earlier, doing a decision based on size
haraken 2014/12/16 08:51:19 I'm sorry about it... Then the situation is that w
+ //
+ // This appears to hold true through performance experimentation; at
+ // least no significant performance regressions have been observed.
+ //
+ // This theory of improved performance does not hold true for other
+ // heap types. We are currently seeking an understanding of why;
haraken 2014/12/16 02:26:33 I guess the reason this optimization doesn't help
sof 2014/12/16 07:31:54 They do coalesce when sweeping though.
+ // larger amounts of small block fragmentation might be one reason
+ // for it. TBC.
+ //
+ if (heapIndex >= VectorBackingHeap && heapIndex <= HashTableBackingHeap)
haraken 2014/12/16 02:26:33 For safety, I'd prefer: heapIndex == VectorBack
sof 2014/12/16 07:31:54 And also the inline heap case -- this seems verbos
+ return true;
+ if (heapIndex >= VectorBackingHeapNonFinalized && heapIndex <= HashTableBackingHeapNonFinalized)
haraken 2014/12/16 02:26:33 Ditto.
+ return true;
+
+ return false;
}
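As background for the first-fit/best-fit/worst-fit thread above: a standalone sketch (hypothetical code, not Blink's) of the three policies over power-of-two segregated bins, where bins[i] holds blocks of size [2^i, 2^(i+1)). The patch's new backing-heap path is the first of these; the pre-existing fallback resembles the third.

#include <cstddef>
#include <vector>

using Bins = std::vector<std::vector<size_t>>; // bins[i]: free-block sizes in [2^i, 2^(i+1))

// First fit within the sized bin (the new backing-heap path): consult only
// the bin whose entries are all guaranteed >= n and take its head entry.
static bool firstFitBinned(Bins& bins, size_t n, size_t& out)
{
    size_t i = 0;
    for (size_t s = n; s >>= 1; )
        i++;                       // i == floor(log2(n))
    i++;                           // every entry in bin i is >= 2^i > n
    if (i >= bins.size() || bins[i].empty())
        return false;
    out = bins[i].back();
    bins[i].pop_back();
    return true;
}

// Best fit: scan every entry for the smallest block that still fits.
static bool bestFit(const Bins& bins, size_t n, size_t& out)
{
    bool found = false;
    for (const auto& bin : bins)
        for (size_t s : bin)
            if (s >= n && (!found || s < out)) { out = s; found = true; }
    return found;
}

// Worst fit (roughly the existing fallback): take the head of the biggest
// non-empty bin, preferring the largest block class available.
static bool worstFit(Bins& bins, size_t n, size_t& out)
{
    for (size_t i = bins.size(); i-- > 0; ) {
        if (!bins[i].empty() && bins[i].back() >= n) {
            out = bins[i].back();
            bins[i].pop_back();
            return true;
        }
    }
    return false;
}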
template<typename Header>
-FreeListEntry* FreeList<Header>::takeEntry(size_t allocationSize)
-{
- size_t bucketSize = 1 << m_biggestFreeListIndex;
- int i = m_biggestFreeListIndex;
- for (; i > 0; i--, bucketSize >>= 1) {
- if (bucketSize < allocationSize) {
- // A FreeListEntry for bucketSize might be larger than allocationSize.
- // FIXME: We check only the first FreeListEntry because searching
- // the entire list is costly.
- if (!m_freeLists[i] || m_freeLists[i]->size() < allocationSize)
+Address ThreadHeap<Header>::allocateFromFreeList(size_t allocationSize, const GCInfo* gcInfo)
+{
+ // The freelist allocation scheme is currently as follows:
+ //
+ // - If the heap is of an appropriate type, try to pick the first
+ // entry from the sized bin corresponding to |allocationSize|.
+ // [See shouldUseFirstFitForHeap() comment for motivation on why.]
+ //
+ // - If that didn't satisfy the allocation, try reusing a block
+ // from the largest bin. The underlying reasoning is that
+ // we want to amortize this slow allocation call by carving
+ // off as large a free block as possible in one go; a block
+ // that will service this allocation and let following ones
+ // be serviced quickly by bump allocation.
+ //
+ // - Fail; allocation cannot be serviced by the freelist.
+ // The allocator will handle that failure by requesting more
+ // heap pages from the OS and re-initiating the allocation request.
+ //
+ int bucket = FreeList<Header>::bucketIndexForSize(allocationSize) + 1;
haraken 2014/12/16 02:26:33 bucket => index
sof 2014/12/16 07:31:54 No, sorry - this is a bucket, not an index. Or do
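Why the `+ 1` makes the head-entry grab safe without inspecting sizes: assuming bucketIndexForSize(size) returns the index of the highest set bit, i.e. floor(log2(size)), every entry in bucket index+1 is at least 2^(floor(log2(size))+1) > size. A self-contained check of that arithmetic (the helper mimics the assumed behavior; it is not the Blink function itself):

#include <cassert>
#include <cstddef>

// Mimics the assumed behavior of FreeList<Header>::bucketIndexForSize():
// the index of the highest set bit, i.e. floor(log2(size)).
static int bucketIndexForSize(size_t size)
{
    assert(size > 0);
    int index = -1;
    while (size) {
        size >>= 1;
        index++;
    }
    return index;
}

int main()
{
    // Bucket i holds entries of size [2^i, 2^(i+1)).
    assert(bucketIndexForSize(48) == 5);     // 48 lies in [32, 64): bucket 5
    // Entries in bucket 5 may be as small as 32 < 48, so bucket 5 cannot be
    // grabbed blindly; every entry in bucket 6 is >= 64 > 48, so it can.
    assert(bucketIndexForSize(48) + 1 == 6);
    // Consistent with the ASSERT(entry->size() >= allocationSize) that
    // guards the first-fit branch below.
    return 0;
}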
+ if (bucket <= m_freeList.m_biggestFreeListIndex && shouldUseFirstFitForHeap(m_index)) {
+ if (FreeListEntry* entry = m_freeList.m_freeLists[bucket]) {
haraken 2014/12/16 02:26:33 Don't we want to look up the entry in m_freeLists[
sof 2014/12/16 07:31:54 That's been checked as regressing performance earl
+ entry->unlink(&m_freeList.m_freeLists[bucket]);
+ if (!m_freeList.m_freeLists[bucket] && bucket == m_freeList.m_biggestFreeListIndex) {
+ // Recalculate the biggest index of non-empty buckets.
+ int maxIndex = m_freeList.m_biggestFreeListIndex - 1;
+ for (; maxIndex >= 0 && !m_freeList.m_freeLists[maxIndex]; maxIndex--) { }
+ m_freeList.m_biggestFreeListIndex = maxIndex < 0 ? 0 : maxIndex;
haraken 2014/12/16 02:26:33 The calculation of m_biggestFreeListIndex is getti
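haraken's (truncated) remark appears to be that this inline recalculation is getting hard to follow; one hypothetical way to contain it is a small helper, sketched standalone here (not part of the patch):

#include <cstddef>

// Standalone sketch: recompute the biggest non-empty bucket index after the
// old biggest bucket was drained; |freeLists| stands in for m_freeLists.
// Returns 0 when every bucket is empty, matching the inline loop above.
static int recalculateBiggestFreeListIndex(void* const* freeLists, int oldBiggest)
{
    int index = oldBiggest - 1;
    while (index >= 0 && !freeLists[index])
        index--;
    return index < 0 ? 0 : index;
}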
+ }
+ // Allocate into the freelist block without disturbing the current allocation area.
+ ASSERT(entry->size() >= allocationSize);
+ if (entry->size() > allocationSize)
+ addToFreeList(entry->address() + allocationSize, entry->size() - allocationSize);
+ Heap::increaseAllocatedObjectSize(allocationSize);
+ return allocateAtAddress(entry->address(), allocationSize, gcInfo);
+ }
+ // Failed to find a first-fit freelist entry; fall into the standard case of
+ // chopping off the largest free block and bump allocating from it.
+ }
+ size_t bucketSize = 1 << m_freeList.m_biggestFreeListIndex;
+ int index = m_freeList.m_biggestFreeListIndex;
+ for (; index > 0; index--, bucketSize >>= 1) {
haraken 2014/12/16 02:26:33 --index
+ FreeListEntry* entry = m_freeList.m_freeLists[index];
+ if (allocationSize > bucketSize) {
+ // Final bucket candidate; check whether its initial entry is
+ // able to service this allocation. Do not perform a linear scan,
+ // as it is considered too costly.
+ if (!entry || entry->size() < allocationSize)
break;
}
- if (FreeListEntry* entry = m_freeLists[i]) {
- m_biggestFreeListIndex = i;
- entry->unlink(&m_freeLists[i]);
- return entry;
+ if (entry) {
+ entry->unlink(&m_freeList.m_freeLists[index]);
+ setAllocationPoint(entry->address(), entry->size());
+ ASSERT(hasCurrentAllocationArea());
+ ASSERT(remainingAllocationSize() >= allocationSize);
+ m_freeList.m_biggestFreeListIndex = index;
+ return allocateSize(allocationSize, gcInfo);
}
}
- m_biggestFreeListIndex = i;
+ m_freeList.m_biggestFreeListIndex = index;
return nullptr;
}
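The effect the largest-bin fallback is after, in miniature: once a big block is carved off and installed as the allocation area, subsequent requests become simple pointer bumps. A self-contained illustration (hypothetical names, not Blink code):

#include <cassert>
#include <cstddef>

// Minimal bump allocator over one carved-off block.
struct BumpArea {
    char* cursor;
    size_t remaining;
};

static char* bumpAllocate(BumpArea& area, size_t size)
{
    if (size > area.remaining)
        return nullptr; // the real allocator would take the slow path here
    char* result = area.cursor;
    area.cursor += size;
    area.remaining -= size;
    return result;
}

int main()
{
    static char block[1024]; // stands in for the largest freelist entry
    BumpArea area = { block, sizeof(block) };
    char* first = bumpAllocate(area, 48);
    char* second = bumpAllocate(area, 48);
    assert(second == first + 48); // later allocations are cheap bumps
    return 0;
}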
-template<typename Header>
-bool ThreadHeap<Header>::allocateFromFreeList(size_t allocationSize)
-{
- ASSERT(!hasCurrentAllocationArea());
- if (FreeListEntry* entry = m_freeList.takeEntry(allocationSize)) {
- setAllocationPoint(entry->address(), entry->size());
- ASSERT(hasCurrentAllocationArea());
- ASSERT(remainingAllocationSize() >= allocationSize);
- return true;
- }
- return false;
-}
-
#if ENABLE(ASSERT)
template<typename Header>
static bool isLargeObjectAligned(LargeObject<Header>* largeObject, Address address)