Description
Simple BlinkGC heap compaction.
This implements heap compaction for the Blink GC infrastructure
(Oilpan), compacting the arenas of the BlinkGC heap which are most
susceptible to becoming fragmented during actual use.
Fragmentation is a real and growing problem when browsing anything
but static pages: the amount of allocated but unused memory
trends higher over time.
To avoid leaving an ever-growing number of unused holes in our heaps,
heap compaction periodically squeezes out the unused portions,
packing the live objects together. The heap pages left unused as a
result are then released and returned to the OS.
Due to a fortunate property of Blink heap collection types, providing
such compaction is within relatively easy reach. Experiments show that
the arenas holding these collections' backing stores are the ones that
develop fragmentation the most, and most persistently. While this is not
a complete compactor of all Blink GC arenas, it addresses the
fragmentation problem where it is most pressing. More can be done later.
Explainer / design document:
https://docs.google.com/document/d/1k-vivOinomDXnScw8Ew5zpsYCXiYqj76OCOYZSvHkaU
R=haraken
BUG=672030
Committed: https://crrev.com/2c7d13f7694fa208d996947731eacd152902ebdd
Cr-Commit-Position: refs/heads/master@{#438125}
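For reference, a minimal sketch of the kind of object involved (illustrative class names only, not actual Blink code): a heap collection such as HeapVector keeps its elements in a separately allocated backing store on the Oilpan heap, and it is those backing-store allocations, concentrated in a few arenas, that this compactor moves.

  // Illustrative sketch, assuming the usual Oilpan idioms.
  class Node : public GarbageCollected<Node> {
   public:
    DEFINE_INLINE_TRACE() {}
  };

  class NodeRegistry : public GarbageCollected<NodeRegistry> {
   public:
    void add(Node* node) { m_nodes.append(node); }  // may reallocate the backing store
    DEFINE_INLINE_TRACE() { visitor->trace(m_nodes); }

   private:
    // The Member<Node> slots live in a heap-allocated backing store in the
    // vector-backing arena; many such stores growing and shrinking over time
    // is what leaves behind the partially used pages this CL compacts away.
    HeapVector<Member<Node>> m_nodes;
  };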
Patch Set 1
Patch Set 2 : tidy up comment
Total comments: 21
Patch Set 3 : synchronize on compaction finish
Total comments: 58
Patch Set 4 : review updates
Patch Set 5 : cros compile fix
Total comments: 77
Patch Set 6 : review updates
Patch Set 7 : Tidy up logging
Patch Set 8 : add more comments
Total comments: 68
Patch Set 9 : review updates
Patch Set 10 : asan compile fix
Patch Set 11 : improve assert
Patch Set 12 : introduce compaction state context
Patch Set 13 : fix mismerge
Patch Set 14 : comment tidying
Patch Set 15 : add pointer alignment handling to SparseHeapBitmap
Total comments: 96
Patch Set 16 : handle 4 byte pointer aligns
Patch Set 17 : rebased up to r437488
Patch Set 18 : review updates
Patch Set 19 : further review updates
Patch Set 20 : verify that lazy unmarking doesn't intersect with compactables
Patch Set 21 : tweak assert
Total comments: 1
Patch Set 22 : adjust copyright attributions
Patch Set 23 : Clear unused pages before decommitting
Total comments: 2
Patch Set 24 : leave unused pages zapped, if approp.
Patch Set 25 : rebased upto r438110

Messages
Total messages: 185 (108 generated)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Description was changed from ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while visiting all but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. With the heap pages that are then left as unused, are subsequently released and returned to the OS. Explainer / design document: https://docs.google.com/document/d/1ZwupmxaXQHo1msY5USN0fkzZKkE9zHKWMS6zwSLrsJo R= BUG= ========== to ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while browsing anything but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. The heap pages that are then left as unused, are subsequently released and returned to the OS. Due to a fortunate property of Blink heap collection types, providing such compaction is within relatively easy reach. Experiments show that the arenas which hold such collection objects ("backing stores") are the ones that develop fragmentation the most & persistently. While not a complete heap compactor of all Blink GC arenas, it addresses the fragmentation problem where it is most pressing. More can be done, later. Explainer / design document: https://docs.google.com/document/d/1ZwupmxaXQHo1msY5USN0fkzZKkE9zHKWMS6zwSLrsJo R= BUG= ==========
Description was changed from ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while browsing anything but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. The heap pages that are then left as unused, are subsequently released and returned to the OS. Due to a fortunate property of Blink heap collection types, providing such compaction is within relatively easy reach. Experiments show that the arenas which hold such collection objects ("backing stores") are the ones that develop fragmentation the most & persistently. While not a complete heap compactor of all Blink GC arenas, it addresses the fragmentation problem where it is most pressing. More can be done, later. Explainer / design document: https://docs.google.com/document/d/1ZwupmxaXQHo1msY5USN0fkzZKkE9zHKWMS6zwSLrsJo R= BUG= ========== to ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while browsing anything but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. The heap pages that are then left as unused, are subsequently released and returned to the OS. Due to a fortunate property of Blink heap collection types, providing such compaction is within relatively easy reach. Experiments show that the arenas which hold such collection objects ("backing stores") are the ones that develop fragmentation the most & persistently. While not a complete heap compactor of all Blink GC arenas, it addresses the fragmentation problem where it is most pressing. More can be done, later. Explainer / design document: https://docs.google.com/document/d/1k-vivOinomDXnScw8Ew5zpsYCXiYqj76OCOYZSvHkaU R= BUG= ==========
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
sigbjornf@opera.com changed reviewers: + oilpan-reviews@chromium.org
please take a look.
(Let me take a look at this tomorrow.)
haraken@chromium.org changed reviewers: + haraken@chromium.org
A couple of high-level comments. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/Heap.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Heap.h:408: void registerRelocation(MovableReference* slot); What's a difference between registerMovingObjectReference and registerRelocation? Also I cannot find any call site of registerRelocation...? https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:53: if (m_relocatablePages.contains(slotPage)) { Does the interior pointer problem happen only in ListHashSet? If yes, is there any option to introduce a dedicated arena for ListHashSet and not enable the heap compaction on the arena? ListHashSet is not a popular data structure, so I'm not quite sure if we want to introduce a lot of complexity for it. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); What is this change for? Aren't we registering all backing stores in wtf/XXX.h? https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); - Is there any reason you put compact() after pre-finalization and eager sweeping? I'm just curious. - As I mentioned in the doc, it's (at least theoretically) unsafe to do a compaction in preSweep() because other threads may already be accessing the heap collections we're compacting right now. To prevent the risk, we need to do the compaction during the stop-the-world phase. (Note that other threads cannot obtain a CrossThreadPersistent while some thread is in the stop-the-world phase.) So we want to move the heap compaction to postGC(), but we cannot easily do that because the heap compaction must run after the thread-local weak processing. Is my understanding correct? - Once we finish shipping the per-thread heap, we no longer need the thread-local weak processing because everything runs on the single thread. Will it simplify your things? https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/wtf/HashTable.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/wtf/HashTable.h:2089: Allocator::registerDelayedMarkNoTracing(visitor, &m_table); Why don't we need to register the backing store of weak hash tables?
https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/Heap.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Heap.h:408: void registerRelocation(MovableReference* slot); On 2016/11/30 06:29:52, haraken wrote: > > What's a difference between registerMovingObjectReference and > registerRelocation? > > Also I cannot find any call site of registerRelocation...? > Have a look at the documentation of HeapCompact::registerRelocation(). (registerRelocation() is currently not needed for Blink.) https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:53: if (m_relocatablePages.contains(slotPage)) { On 2016/11/30 06:29:53, haraken wrote: > > Does the interior pointer problem happen only in ListHashSet? If yes, is there > any option to introduce a dedicated arena for ListHashSet and not enable the > heap compaction on the arena? ListHashSet is not a popular data structure, so > I'm not quite sure if we want to introduce a lot of complexity for it. > > Interior pointers can happen in a number of places, i.e., container objects containing part container objects. LinkedHashSet<> is the only abstraction that requires internal fixups by way of a callback. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); On 2016/11/30 06:29:53, haraken wrote: > > What is this change for? Aren't we registering all backing stores in wtf/XXX.h? Not the weakly-held backing stores, which is what this handles. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); On 2016/11/30 06:29:53, haraken wrote: > > - Is there any reason you put compact() after pre-finalization and eager > sweeping? I'm just curious. > I had stability problems doing it a step or two earlier here, I'm fairly sure - but I can't recall the details just now. > - As I mentioned in the doc, it's (at least theoretically) unsafe to do a > compaction in preSweep() because other threads may already be accessing the heap > collections we're compacting right now. To prevent the risk, we need to do the > compaction during the stop-the-world phase. (Note that other threads cannot > obtain a CrossThreadPersistent while some thread is in the stop-the-world > phase.) So we want to move the heap compaction to postGC(), but we cannot easily > do that because the heap compaction must run after the thread-local weak > processing. Is my understanding correct? > Yes, quite right it is - other threads shouldn't be able to view an arena that's in the process of being compacted. 
I haven't seen it manifest itself as a problem, but having the per-thread signalling of "i've finished compacting" also synchronize the thread, would be one place to address this problem. > - Once we finish shipping the per-thread heap, we no longer need the > thread-local weak processing because everything runs on the single thread. Will > it simplify your things? > Potentially, but I do wonder/worry how per-thread collection will interact with cross-thread persistents referring to compactable & movable objects. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/wtf/HashTable.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/wtf/HashTable.h:2089: Allocator::registerDelayedMarkNoTracing(visitor, &m_table); On 2016/11/30 06:29:53, haraken wrote: > > Why don't we need to register the backing store of weak hash tables? See above answer.
https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:53: if (m_relocatablePages.contains(slotPage)) { On 2016/11/30 06:52:43, sof wrote: > On 2016/11/30 06:29:53, haraken wrote: > > > > Does the interior pointer problem happen only in ListHashSet? If yes, is there > > any option to introduce a dedicated arena for ListHashSet and not enable the > > heap compaction on the arena? ListHashSet is not a popular data structure, so > > I'm not quite sure if we want to introduce a lot of complexity for it. > > > > > > Interior pointers can happen in a number of places, i.e., container objects > containing part container objects. I think I'm failing at understanding the problem :) Even if you have HeapHashSet<HeapVector<Member>>, HeapVector<Member>::trace is always called and registers the backing store to the compactor. Then we just need to fix up the registered slot. In what cases do we need the bitmap lookup? > > LinkedHashSet<> is the only abstraction that requires internal fixups by way of > a callback. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); On 2016/11/30 06:52:43, sof wrote: > On 2016/11/30 06:29:53, haraken wrote: > > > > What is this change for? Aren't we registering all backing stores in > wtf/XXX.h? > > Not the weakly-held backing stores, which is what this handles. What's a reason you need to handle weak backing stores specially? https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); On 2016/11/30 06:52:44, sof wrote: > On 2016/11/30 06:29:53, haraken wrote: > > > > - Is there any reason you put compact() after pre-finalization and eager > > sweeping? I'm just curious. > > > > I had stability problems doing it a step or two earlier here, I'm fairly sure - > but I can't recall the details just now. > > > - As I mentioned in the doc, it's (at least theoretically) unsafe to do a > > compaction in preSweep() because other threads may already be accessing the > heap > > collections we're compacting right now. To prevent the risk, we need to do the > > compaction during the stop-the-world phase. (Note that other threads cannot > > obtain a CrossThreadPersistent while some thread is in the stop-the-world > > phase.) So we want to move the heap compaction to postGC(), but we cannot > easily > > do that because the heap compaction must run after the thread-local weak > > processing. Is my understanding correct? > > > > Yes, quite right it is - other threads shouldn't be able to view an arena that's > in the process of being compacted. > > I haven't seen it manifest itself as a problem, but having the per-thread > signalling of "i've finished compacting" also synchronize the thread, would be > one place to address this problem. 
BTW, do you know why we need to run the heap compaction after the thread-local weak processing? What happens if we simply move the heap compaction to postGC()? > > > - Once we finish shipping the per-thread heap, we no longer need the > > thread-local weak processing because everything runs on the single thread. > Will > > it simplify your things? > > > > Potentially, but I do wonder/worry how per-thread collection will interact with > cross-thread persistents referring to compactable & movable objects. The cross-thread heap collection must go through a CrossThreadPersistent. The CrossThreadPersistent::get() needs to acquire a lock. The lock cannot be acquired while some thread is doing a GC. In any case, it will take a couple of more time to finish shipping the per-thread heap, so we shouldn't rely on the per-thread heap.
https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:53: if (m_relocatablePages.contains(slotPage)) { On 2016/11/30 07:09:49, haraken wrote: > On 2016/11/30 06:52:43, sof wrote: > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > Does the interior pointer problem happen only in ListHashSet? If yes, is > there > > > any option to introduce a dedicated arena for ListHashSet and not enable the > > > heap compaction on the arena? ListHashSet is not a popular data structure, > so > > > I'm not quite sure if we want to introduce a lot of complexity for it. > > > > > > > > > > Interior pointers can happen in a number of places, i.e., container objects > > containing part container objects. > > I think I'm failing at understanding the problem :) > > Even if you have HeapHashSet<HeapVector<Member>>, HeapVector<Member>::trace is > always called and registers the backing store to the compactor. Then we just > need to fix up the registered slot. > > In what cases do we need the bitmap lookup? > Consider HeapHashMap<int, HeapVector<Member<something>> -- there you have a backing store containing relocatable slots. And you don't know what order the backing stores will be marked. See the HeapCompactTest unit tests at the end for some other examples. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); On 2016/11/30 07:09:49, haraken wrote: > On 2016/11/30 06:52:43, sof wrote: > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > What is this change for? Aren't we registering all backing stores in > > wtf/XXX.h? > > > > Not the weakly-held backing stores, which is what this handles. > > What's a reason you need to handle weak backing stores specially? Keeping their marking and fixup registration together, a property worth maintaining. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); On 2016/11/30 07:09:49, haraken wrote: > On 2016/11/30 06:52:44, sof wrote: > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > - Is there any reason you put compact() after pre-finalization and eager > > > sweeping? I'm just curious. > > > > > > > I had stability problems doing it a step or two earlier here, I'm fairly sure > - > > but I can't recall the details just now. > > > > > - As I mentioned in the doc, it's (at least theoretically) unsafe to do a > > > compaction in preSweep() because other threads may already be accessing the > > heap > > > collections we're compacting right now. To prevent the risk, we need to do > the > > > compaction during the stop-the-world phase. (Note that other threads cannot > > > obtain a CrossThreadPersistent while some thread is in the stop-the-world > > > phase.) 
So we want to move the heap compaction to postGC(), but we cannot > > easily > > > do that because the heap compaction must run after the thread-local weak > > > processing. Is my understanding correct? > > > > > > > Yes, quite right it is - other threads shouldn't be able to view an arena > that's > > in the process of being compacted. > > > > I haven't seen it manifest itself as a problem, but having the per-thread > > signalling of "i've finished compacting" also synchronize the thread, would be > > one place to address this problem. > > BTW, do you know why we need to run the heap compaction after the thread-local > weak processing? What happens if we simply move the heap compaction to postGC()? > We cannot run compaction before the liveness of the (weak) objects have been determined (and all fixups registered). What if a weak object will become alive, point to others that will etc. > > > > > - Once we finish shipping the per-thread heap, we no longer need the > > > thread-local weak processing because everything runs on the single thread. > > Will > > > it simplify your things? > > > > > > > Potentially, but I do wonder/worry how per-thread collection will interact > with > > cross-thread persistents referring to compactable & movable objects. > > The cross-thread heap collection must go through a CrossThreadPersistent. The > CrossThreadPersistent::get() needs to acquire a lock. The lock cannot be > acquired while some thread is doing a GC. > That could be done, cross-thread persistents aren't meant for fine-grained & often use, but would "read only" access of a cross-thread heap object be problematic without compaction, while a per-thread GC runs?
https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:53: if (m_relocatablePages.contains(slotPage)) { On 2016/11/30 07:26:38, sof wrote: > On 2016/11/30 07:09:49, haraken wrote: > > On 2016/11/30 06:52:43, sof wrote: > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > Does the interior pointer problem happen only in ListHashSet? If yes, is > > there > > > > any option to introduce a dedicated arena for ListHashSet and not enable > the > > > > heap compaction on the arena? ListHashSet is not a popular data structure, > > so > > > > I'm not quite sure if we want to introduce a lot of complexity for it. > > > > > > > > > > > > > > Interior pointers can happen in a number of places, i.e., container objects > > > containing part container objects. > > > > I think I'm failing at understanding the problem :) > > > > Even if you have HeapHashSet<HeapVector<Member>>, HeapVector<Member>::trace is > > always called and registers the backing store to the compactor. Then we just > > need to fix up the registered slot. > > > > In what cases do we need the bitmap lookup? > > > > Consider HeapHashMap<int, HeapVector<Member<something>> -- there you have a > backing store containing relocatable slots. And you don't know what order the > backing stores will be marked. > > See the HeapCompactTest unit tests at the end for some other examples. But no matter what order the backing store is traced, it should be traced. So the system should know a complete set of container objects that are holding pointers to backing stores. In this example, at the end of the marking, the system should know a complete of set of HeapVectors. Is it not true? https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); On 2016/11/30 07:26:38, sof wrote: > On 2016/11/30 07:09:49, haraken wrote: > > On 2016/11/30 06:52:44, sof wrote: > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > - Is there any reason you put compact() after pre-finalization and eager > > > > sweeping? I'm just curious. > > > > > > > > > > I had stability problems doing it a step or two earlier here, I'm fairly > sure > > - > > > but I can't recall the details just now. > > > > > > > - As I mentioned in the doc, it's (at least theoretically) unsafe to do a > > > > compaction in preSweep() because other threads may already be accessing > the > > > heap > > > > collections we're compacting right now. To prevent the risk, we need to do > > the > > > > compaction during the stop-the-world phase. (Note that other threads > cannot > > > > obtain a CrossThreadPersistent while some thread is in the stop-the-world > > > > phase.) So we want to move the heap compaction to postGC(), but we cannot > > > easily > > > > do that because the heap compaction must run after the thread-local weak > > > > processing. Is my understanding correct? > > > > > > > > > > Yes, quite right it is - other threads shouldn't be able to view an arena > > that's > > > in the process of being compacted. 
> > > > > > I haven't seen it manifest itself as a problem, but having the per-thread > > > signalling of "i've finished compacting" also synchronize the thread, would > be > > > one place to address this problem. > > > > BTW, do you know why we need to run the heap compaction after the thread-local > > weak processing? What happens if we simply move the heap compaction to > postGC()? > > > > We cannot run compaction before the liveness of the (weak) objects have been > determined (and all fixups registered). What if a weak object will become alive, > point to others that will etc. The liveness should have been determined by the end of the stop-the-world phase, right? The thread-local weak processing is already not allowed to mutate the object graph. Maybe the complicated thread-local weak processing on HashTable causes some issue, but I'm not sure what it is. > > > > > > > - Once we finish shipping the per-thread heap, we no longer need the > > > > thread-local weak processing because everything runs on the single thread. > > > Will > > > > it simplify your things? > > > > > > > > > > Potentially, but I do wonder/worry how per-thread collection will interact > > with > > > cross-thread persistents referring to compactable & movable objects. > > > > The cross-thread heap collection must go through a CrossThreadPersistent. The > > CrossThreadPersistent::get() needs to acquire a lock. The lock cannot be > > acquired while some thread is doing a GC. > > > > That could be done, cross-thread persistents aren't meant for fine-grained & > often use, but would "read only" access of a cross-thread heap object be > problematic without compaction, while a per-thread GC runs? Would it be an option to make the read-only access acquire the lock as well? Before talking about cross-thread heap collections, it's already dangerous to let one thread access cross-thread things while the other thread is running a GC.
https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:53: if (m_relocatablePages.contains(slotPage)) { On 2016/11/30 08:51:19, haraken wrote: > On 2016/11/30 07:26:38, sof wrote: > > On 2016/11/30 07:09:49, haraken wrote: > > > On 2016/11/30 06:52:43, sof wrote: > > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > > > Does the interior pointer problem happen only in ListHashSet? If yes, is > > > there > > > > > any option to introduce a dedicated arena for ListHashSet and not enable > > the > > > > > heap compaction on the arena? ListHashSet is not a popular data > structure, > > > so > > > > > I'm not quite sure if we want to introduce a lot of complexity for it. > > > > > > > > > > > > > > > > > > Interior pointers can happen in a number of places, i.e., container > objects > > > > containing part container objects. > > > > > > I think I'm failing at understanding the problem :) > > > > > > Even if you have HeapHashSet<HeapVector<Member>>, HeapVector<Member>::trace > is > > > always called and registers the backing store to the compactor. Then we just > > > need to fix up the registered slot. > > > > > > In what cases do we need the bitmap lookup? > > > > > > > Consider HeapHashMap<int, HeapVector<Member<something>> -- there you have a > > backing store containing relocatable slots. And you don't know what order the > > backing stores will be marked. > > > > See the HeapCompactTest unit tests at the end for some other examples. > > But no matter what order the backing store is traced, it should be traced. So > the system should know a complete set of container objects that are holding > pointers to backing stores. In this example, at the end of the marking, the > system should know a complete of set of HeapVectors. Is it not true? > Yes, but where are the relocatable slots - have they been moved already or not when you come to compacting them? That's the problem that needs handling via interior pointers. Write down a diagram of a hash table backing store object containing hash tables (=> with relocatable slots of their own), and then see what happens when you try to compact that object. When moving the backing stores of the internal slots, which may appear anywhere wrt the "top" backing store in the heap, their slot location may have moved, so you need to track that somehow. Which is what interior pointers do. https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); On 2016/11/30 08:51:19, haraken wrote: > On 2016/11/30 07:26:38, sof wrote: > > On 2016/11/30 07:09:49, haraken wrote: > > > On 2016/11/30 06:52:44, sof wrote: > > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > > > - Is there any reason you put compact() after pre-finalization and eager > > > > > sweeping? I'm just curious. > > > > > > > > > > > > > I had stability problems doing it a step or two earlier here, I'm fairly > > sure > > > - > > > > but I can't recall the details just now. 
> > > > > > > > > - As I mentioned in the doc, it's (at least theoretically) unsafe to do > a > > > > > compaction in preSweep() because other threads may already be accessing > > the > > > > heap > > > > > collections we're compacting right now. To prevent the risk, we need to > do > > > the > > > > > compaction during the stop-the-world phase. (Note that other threads > > cannot > > > > > obtain a CrossThreadPersistent while some thread is in the > stop-the-world > > > > > phase.) So we want to move the heap compaction to postGC(), but we > cannot > > > > easily > > > > > do that because the heap compaction must run after the thread-local weak > > > > > processing. Is my understanding correct? > > > > > > > > > > > > > Yes, quite right it is - other threads shouldn't be able to view an arena > > > that's > > > > in the process of being compacted. > > > > > > > > I haven't seen it manifest itself as a problem, but having the per-thread > > > > signalling of "i've finished compacting" also synchronize the thread, > would > > be > > > > one place to address this problem. > > > > > > BTW, do you know why we need to run the heap compaction after the > thread-local > > > weak processing? What happens if we simply move the heap compaction to > > postGC()? > > > > > > > We cannot run compaction before the liveness of the (weak) objects have been > > determined (and all fixups registered). What if a weak object will become > alive, > > point to others that will etc. > > The liveness should have been determined by the end of the stop-the-world phase, > right? The thread-local weak processing is already not allowed to mutate the > object graph. > > Maybe the complicated thread-local weak processing on HashTable causes some > issue, but I'm not sure what it is. > > > > > > > > > > - Once we finish shipping the per-thread heap, we no longer need the > > > > > thread-local weak processing because everything runs on the single > thread. > > > > Will > > > > > it simplify your things? > > > > > > > > > > > > > Potentially, but I do wonder/worry how per-thread collection will interact > > > with > > > > cross-thread persistents referring to compactable & movable objects. > > > > > > The cross-thread heap collection must go through a CrossThreadPersistent. > The > > > CrossThreadPersistent::get() needs to acquire a lock. The lock cannot be > > > acquired while some thread is doing a GC. > > > > > > > That could be done, cross-thread persistents aren't meant for fine-grained & > > often use, but would "read only" access of a cross-thread heap object be > > problematic without compaction, while a per-thread GC runs? > > Would it be an option to make the read-only access acquire the lock as well? > Before talking about cross-thread heap collections, it's already dangerous to > let one thread access cross-thread things while the other thread is running a > GC. Yes, could do that then also. I don't know if you had planned to insist on doing this already, or if this would be a new requirement to support a "movable" GC.
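To make the interior-pointer case above concrete, a sketch using the type from the example (the wrapper class and names are illustrative, not code from this CL):

  // Illustrative only.
  class Thing : public GarbageCollected<Thing> {
   public:
    DEFINE_INLINE_TRACE() {}
  };

  class ThingIndex : public GarbageCollected<ThingIndex> {
   public:
    DEFINE_INLINE_TRACE() { visitor->trace(m_map); }

   private:
    // Each HeapVector value sits inline in the hash table's backing store,
    // so the slot holding its (vector) backing store pointer is an interior
    // pointer. If the hash table backing is moved first, that slot's address
    // changes before the vector backing gets compacted -- which is why the
    // compactor has to track such slots rather than assume registered slot
    // addresses stay put.
    HeapHashMap<int, HeapVector<Member<Thing>>> m_map;
  };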
On 2016/11/30 09:22:14, sof wrote: > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): > > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > third_party/WebKit/Source/platform/heap/HeapCompact.cpp:53: if > (m_relocatablePages.contains(slotPage)) { > On 2016/11/30 08:51:19, haraken wrote: > > On 2016/11/30 07:26:38, sof wrote: > > > On 2016/11/30 07:09:49, haraken wrote: > > > > On 2016/11/30 06:52:43, sof wrote: > > > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > > > > > Does the interior pointer problem happen only in ListHashSet? If yes, > is > > > > there > > > > > > any option to introduce a dedicated arena for ListHashSet and not > enable > > > the > > > > > > heap compaction on the arena? ListHashSet is not a popular data > > structure, > > > > so > > > > > > I'm not quite sure if we want to introduce a lot of complexity for it. > > > > > > > > > > > > > > > > > > > > > > Interior pointers can happen in a number of places, i.e., container > > objects > > > > > containing part container objects. > > > > > > > > I think I'm failing at understanding the problem :) > > > > > > > > Even if you have HeapHashSet<HeapVector<Member>>, > HeapVector<Member>::trace > > is > > > > always called and registers the backing store to the compactor. Then we > just > > > > need to fix up the registered slot. > > > > > > > > In what cases do we need the bitmap lookup? > > > > > > > > > > Consider HeapHashMap<int, HeapVector<Member<something>> -- there you have a > > > backing store containing relocatable slots. And you don't know what order > the > > > backing stores will be marked. > > > > > > See the HeapCompactTest unit tests at the end for some other examples. > > > > But no matter what order the backing store is traced, it should be traced. So > > the system should know a complete set of container objects that are holding > > pointers to backing stores. In this example, at the end of the marking, the > > system should know a complete of set of HeapVectors. Is it not true? > > > > Yes, but where are the relocatable slots - have they been moved already or not > when you come to compacting them? That's the problem that needs handling via > interior pointers. > > Write down a diagram of a hash table backing store object containing hash tables > (=> with relocatable slots of their own), and then see what happens when you try > to compact that object. When moving the backing stores of the internal slots, > which may appear anywhere wrt the "top" backing store in the heap, their slot > location may have moved, so you need to track that somehow. Which is what > interior pointers do. Ah, thanks. I now understand the problem :) > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): > > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); > On 2016/11/30 08:51:19, haraken wrote: > > On 2016/11/30 07:26:38, sof wrote: > > > On 2016/11/30 07:09:49, haraken wrote: > > > > On 2016/11/30 06:52:44, sof wrote: > > > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > > > > > - Is there any reason you put compact() after pre-finalization and > eager > > > > > > sweeping? I'm just curious. 
> > > > > > > > > > > > > > > > I had stability problems doing it a step or two earlier here, I'm fairly > > > sure > > > > - > > > > > but I can't recall the details just now. > > > > > > > > > > > - As I mentioned in the doc, it's (at least theoretically) unsafe to > do > > a > > > > > > compaction in preSweep() because other threads may already be > accessing > > > the > > > > > heap > > > > > > collections we're compacting right now. To prevent the risk, we need > to > > do > > > > the > > > > > > compaction during the stop-the-world phase. (Note that other threads > > > cannot > > > > > > obtain a CrossThreadPersistent while some thread is in the > > stop-the-world > > > > > > phase.) So we want to move the heap compaction to postGC(), but we > > cannot > > > > > easily > > > > > > do that because the heap compaction must run after the thread-local > weak > > > > > > processing. Is my understanding correct? > > > > > > > > > > > > > > > > Yes, quite right it is - other threads shouldn't be able to view an > arena > > > > that's > > > > > in the process of being compacted. > > > > > > > > > > I haven't seen it manifest itself as a problem, but having the > per-thread > > > > > signalling of "i've finished compacting" also synchronize the thread, > > would > > > be > > > > > one place to address this problem. > > > > > > > > BTW, do you know why we need to run the heap compaction after the > > thread-local > > > > weak processing? What happens if we simply move the heap compaction to > > > postGC()? > > > > > > > > > > We cannot run compaction before the liveness of the (weak) objects have been > > > determined (and all fixups registered). What if a weak object will become > > alive, > > > point to others that will etc. > > > > The liveness should have been determined by the end of the stop-the-world > phase, > > right? The thread-local weak processing is already not allowed to mutate the > > object graph. > > > > Maybe the complicated thread-local weak processing on HashTable causes some > > issue, but I'm not sure what it is. > > > > > > > > > > > > > > - Once we finish shipping the per-thread heap, we no longer need the > > > > > > thread-local weak processing because everything runs on the single > > thread. > > > > > Will > > > > > > it simplify your things? > > > > > > > > > > > > > > > > Potentially, but I do wonder/worry how per-thread collection will > interact > > > > with > > > > > cross-thread persistents referring to compactable & movable objects. > > > > > > > > The cross-thread heap collection must go through a CrossThreadPersistent. > > The > > > > CrossThreadPersistent::get() needs to acquire a lock. The lock cannot be > > > > acquired while some thread is doing a GC. > > > > > > > > > > That could be done, cross-thread persistents aren't meant for fine-grained & > > > often use, but would "read only" access of a cross-thread heap object be > > > problematic without compaction, while a per-thread GC runs? > > > > Would it be an option to make the read-only access acquire the lock as well? > > Before talking about cross-thread heap collections, it's already dangerous to > > let one thread access cross-thread things while the other thread is running a > > GC. > > Yes, could do that then also. I don't know if you had planned to insist on doing > this already, or if this would be a new requirement to support a "movable" GC. It was planned in the per-thread heap (where GC can run in parallel) and it should be already guaranteed. 
CrossThreadPersistent already needs to acquire a lock. (This is an issue of a per-thread heap, not a movable GC.)
https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); On 2016/11/30 06:52:44, sof wrote: > On 2016/11/30 06:29:53, haraken wrote: > > > > - Is there any reason you put compact() after pre-finalization and eager > > sweeping? I'm just curious. > > > > I had stability problems doing it a step or two earlier here, I'm fairly sure - > but I can't recall the details just now. > Ah, now I remember the issue: if you finalize container objects before prefinalizers or eager sweeping have run, you upset the expect finalization ordering -- none of these can now access collections about to be swept, which they previously could.
On 2016/11/30 10:47:41, sof wrote: > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): > > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); > On 2016/11/30 06:52:44, sof wrote: > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > - Is there any reason you put compact() after pre-finalization and eager > > > sweeping? I'm just curious. > > > > > > > I had stability problems doing it a step or two earlier here, I'm fairly sure > - > > but I can't recall the details just now. > > > > Ah, now I remember the issue: if you finalize container objects before > prefinalizers or eager sweeping have run, you upset the expect finalization > ordering -- none of these can now access collections about to be swept, which > they previously could. That makes sense. So the heap compaction must happen after pre-finalization. The pre-finalization must happen after the thread is resumed. Therefore things must happen in the following order: 1) Thread resuming 2) Pre-finalization 3) heap compaction This causes a problem because another thread might be accessing the heap collection during the heap compaction. Or another thread might mutate the object graph before the heap compaction happens. Hmm. My middle-term proposal is: 1) Ship the per-thread heap completely. 2) Move everything of preSweep into postGC. This puts the pre-finalization and the heap compaction in the stop-the-world phase. Then the problem will be resolved. The question is how we can work around the problem before shipping the per-frame scheduler. Or would it be better to try to ship the per-frame scheduler asap (e.g., in two weeks)? +keishi@
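A simplified sketch of the ordering being discussed (only the shape of the flow; the method names follow the discussion above, not necessarily the real ThreadState code):

  void ThreadState::preSweep() {
    // ...marking and thread-local weak processing have completed by now...
    invokePreFinalizers();  // prefinalizers may still read collection backing stores
    eagerSweep();           // eagerly finalized objects likewise
    compact();              // only after both is it safe to move and release
                            // those backing stores, hence compact() comes last
  }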
Questions about the bitmap: - Are there many interior heap collections? Do you think it's an option to rewrite HeapHashMap<HeapVector<>, ...> to HeapHashMap<Member<HeapVector>, ...>? Note that the compound heap collections have caused many complicated issues in the past, although I think all the issues have been fixed. I'm wondering if it would be nicer to remove them. - Do you think that we can use the bitmap mechanism to extend the compaction target to other normal objects? Or is the bitmap going to be only for interior heap collections?
On 2016/11/30 11:01:35, haraken wrote: > On 2016/11/30 10:47:41, sof wrote: > > > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > > File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): > > > > > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > > third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); > > On 2016/11/30 06:52:44, sof wrote: > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > - Is there any reason you put compact() after pre-finalization and eager > > > > sweeping? I'm just curious. > > > > > > > > > > I had stability problems doing it a step or two earlier here, I'm fairly > sure > > - > > > but I can't recall the details just now. > > > > > > > Ah, now I remember the issue: if you finalize container objects before > > prefinalizers or eager sweeping have run, you upset the expect finalization > > ordering -- none of these can now access collections about to be swept, which > > they previously could. > > That makes sense. > > So the heap compaction must happen after pre-finalization. The pre-finalization > must happen after the thread is resumed. Therefore things must happen in the > following order: > > 1) Thread resuming > 2) Pre-finalization > 3) heap compaction > > This causes a problem because another thread might be accessing the heap > collection during the heap compaction. Or another thread might mutate the object > graph before the heap compaction happens. Hmm. > > My middle-term proposal is: > > 1) Ship the per-thread heap completely. > 2) Move everything of preSweep into postGC. This puts the pre-finalization and > the heap compaction in the stop-the-world phase. > > Then the problem will be resolved. > > The question is how we can work around the problem before shipping the per-frame > scheduler. Or would it be better to try to ship the per-frame scheduler asap > (e.g., in two weeks)? > > +keishi@ HTMLParserThread is the only thread left that's not per thread heap. I'll aim to turn it on by the end of the week.
On 2016/11/30 12:12:12, keishi wrote: > On 2016/11/30 11:01:35, haraken wrote: > > On 2016/11/30 10:47:41, sof wrote: > > > > > > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > > > File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): > > > > > > > > > https://codereview.chromium.org/2531973002/diff/20001/third_party/WebKit/Sour... > > > third_party/WebKit/Source/platform/heap/ThreadState.cpp:1169: compact(); > > > On 2016/11/30 06:52:44, sof wrote: > > > > On 2016/11/30 06:29:53, haraken wrote: > > > > > > > > > > - Is there any reason you put compact() after pre-finalization and eager > > > > > sweeping? I'm just curious. > > > > > > > > > > > > > I had stability problems doing it a step or two earlier here, I'm fairly > > sure > > > - > > > > but I can't recall the details just now. > > > > > > > > > > Ah, now I remember the issue: if you finalize container objects before > > > prefinalizers or eager sweeping have run, you upset the expect finalization > > > ordering -- none of these can now access collections about to be swept, > which > > > they previously could. > > > > That makes sense. > > > > So the heap compaction must happen after pre-finalization. The > pre-finalization > > must happen after the thread is resumed. Therefore things must happen in the > > following order: > > > > 1) Thread resuming > > 2) Pre-finalization > > 3) heap compaction > > > > This causes a problem because another thread might be accessing the heap > > collection during the heap compaction. Or another thread might mutate the > object > > graph before the heap compaction happens. Hmm. > > > > My middle-term proposal is: > > > > 1) Ship the per-thread heap completely. > > 2) Move everything of preSweep into postGC. This puts the pre-finalization and > > the heap compaction in the stop-the-world phase. > > > > Then the problem will be resolved. > > > > The question is how we can work around the problem before shipping the > per-frame > > scheduler. Or would it be better to try to ship the per-frame scheduler asap > > (e.g., in two weeks)? > > > > +keishi@ > > HTMLParserThread is the only thread left that's not per thread heap. I'll aim to > turn it on by the end of the week. Great :) My proposal without this would be to add ThreadCondition synchronization in HeapCompact::finishedCompacting().
On 2016/11/30 11:05:13, haraken wrote: > Questions about the bitmap: > > - Are there many interior heap collections? Do you think it's an option to > rewrite HeapHashMap<HeapVector<>, ...> to HeapHashMap<Member<HeapVector>, ...>? > Note that the compound heap collections have caused many complicated issues in > the past, although I think all the issues have been fixed. I'm wondering if it > would be nicer to remove them. > There are some, I don't have an accurate count with the current codebase. From my position, that wasn't an option that was on the table when doing this -- keeping local patches to fast-changing Source/{core,modules} is a recipe for trouble & tedious work when rebasing up. And imposing constraints for internal implementation reasons, is not something you do gladly regardless. i'm open though to considering making that change to the Blink codebase now, but I don't think the downside of implementing the interior pointer handling is huge. Some extra complexity & bookkeeping, sure. > - Do you think that we can use the bitmap mechanism to extend the compaction > target to other normal objects? Or is the bitmap going to be only for interior > heap collections? The abstraction used here is specialized, something that suits the sparse ranges where interior pointers now exist. Which could continue to be used for the container arenas if compaction is also implemented elsewhere, I suppose. (The compaction for those heaps would have to handle general fixups, and bitmap-based compaction algorithms is a "popular" way to implement that.)
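As a rough illustration of the lookup being provided (a stand-in only; the patch's SparseHeapBitmap is organized around sparse address ranges with pointer-alignment handling, not a std::set):

  #include <cstdint>
  #include <set>

  // Stand-in for the interior-slot tracking: record interior slot addresses
  // seen during marking so the compactor can later ask "is this address an
  // interior slot that needs fixing up?" when it moves a backing store.
  class InteriorSlotIndex {
   public:
    void add(void* slot) { m_slots.insert(reinterpret_cast<uintptr_t>(slot)); }
    bool contains(void* slot) const {
      return m_slots.count(reinterpret_cast<uintptr_t>(slot)) > 0;
    }

   private:
    std::set<uintptr_t> m_slots;
  };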
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
On 2016/11/30 12:14:59, sof wrote: > On 2016/11/30 12:12:12, keishi wrote: > > On 2016/11/30 11:01:35, haraken wrote: > > > On 2016/11/30 10:47:41, sof wrote: > > > > ... > > > > > > So the heap compaction must happen after pre-finalization. The > > pre-finalization > > > must happen after the thread is resumed. Therefore things must happen in the > > > following order: > > > > > > 1) Thread resuming > > > 2) Pre-finalization > > > 3) heap compaction > > > > > > This causes a problem because another thread might be accessing the heap > > > collection during the heap compaction. Or another thread might mutate the > > object > > > graph before the heap compaction happens. Hmm. > > > > > > My middle-term proposal is: > > > > > > 1) Ship the per-thread heap completely. > > > 2) Move everything of preSweep into postGC. This puts the pre-finalization > and > > > the heap compaction in the stop-the-world phase. > > > > > > Then the problem will be resolved. > > > > > > The question is how we can work around the problem before shipping the > > per-frame > > > scheduler. Or would it be better to try to ship the per-frame scheduler asap > > > (e.g., in two weeks)? > > > > > > +keishi@ > > > > HTMLParserThread is the only thread left that's not per thread heap. I'll aim > to > > turn it on by the end of the week. > > Great :) My proposal without this would be to add ThreadCondition > synchronization in HeapCompact::finishedCompacting(). I've added this for now.
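A sketch of the kind of synchronization referred to (illustrative; the member names are assumptions, not the patch's actual code). The intent is that no thread resumes normal execution while some other thread is still moving objects:

  // Assumed members, initialized before compaction starts:
  //   Mutex m_mutex; ThreadCondition m_finished; int m_threadsCompacting;
  void HeapCompact::finishedCompacting() {
    MutexLocker locker(m_mutex);
    if (!--m_threadsCompacting) {
      m_finished.broadcast();    // last thread done: wake the others
      return;
    }
    while (m_threadsCompacting)
      m_finished.wait(m_mutex);  // releases m_mutex while blocked
  }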
On 2016/11/30 13:21:39, sof wrote: > On 2016/11/30 11:05:13, haraken wrote: > > Questions about the bitmap: > > > > - Are there many interior heap collections? Do you think it's an option to > > rewrite HeapHashMap<HeapVector<>, ...> to HeapHashMap<Member<HeapVector>, > ...>? > > Note that the compound heap collections have caused many complicated issues in > > the past, although I think all the issues have been fixed. I'm wondering if it > > would be nicer to remove them. > > > > There are some, I don't have an accurate count with the current codebase. From > my position, that wasn't an option that was on the table when doing this -- > keeping local patches to fast-changing Source/{core,modules} is a recipe for > trouble & tedious work when rebasing up. And imposing constraints for internal > implementation reasons, is not something you do gladly regardless. > > i'm open though to considering making that change to the Blink codebase now, but > I don't think the downside of implementing the interior pointer handling is > huge. Some extra complexity & bookkeeping, sure. Yeah, my main concern is the complexity. If it's not that hard to add an assert to detect HeapCollection<HeapCollection> and rewrite it to HeapCollection<Member<HeapCollection>>, I might prefer that direction. If it's not easy for some reason, the bitmap approach makes sense to me. What do you think? > > > - Do you think that we can use the bitmap mechanism to extend the compaction > > target to other normal objects? Or is the bitmap going to be only for interior > > heap collections? > > The abstraction used here is specialized, something that suits the sparse ranges > where interior pointers now exist. Which could continue to be used for the > container arenas if compaction is also implemented elsewhere, I suppose. (The > compaction for those heaps would have to handle general fixups, and bitmap-based > compaction algorithms is a "popular" way to implement that.) Thanks. So, at a high level, the only concerns about this CL are 1) the complexity of the bitmap and 2) thread-safety of the heap compaction. Regarding 2), it's already resolved by ThreadCondition and will be resolved in a better way by shipping the per-thread heap completely.
On 2016/11/30 14:55:58, haraken wrote: > On 2016/11/30 13:21:39, sof wrote: > > On 2016/11/30 11:05:13, haraken wrote: > > > Questions about the bitmap: > > > > > > - Are there many interior heap collections? Do you think it's an option to > > > rewrite HeapHashMap<HeapVector<>, ...> to HeapHashMap<Member<HeapVector>, > > ...>? > > > Note that the compound heap collections have caused many complicated issues > in > > > the past, although I think all the issues have been fixed. I'm wondering if > it > > > would be nicer to remove them. > > > > > > > There are some, I don't have an accurate count with the current codebase. From > > my position, that wasn't an option that was on the table when doing this -- > > keeping local patches to fast-changing Source/{core,modules} is a recipe for > > trouble & tedious work when rebasing up. And imposing constraints for internal > > implementation reasons, is not something you do gladly regardless. > > > > i'm open though to considering making that change to the Blink codebase now, > but > > I don't think the downside of implementing the interior pointer handling is > > huge. Some extra complexity & bookkeeping, sure. > > Yeah, my main concern is the complexity. If it's not that hard to add an assert > to detect HeapCollection<HeapCollection> and rewrite it to > HeapCollection<Member<HeapCollection>>, I might prefer that direction. If it's > not easy for some reason, the bitmap approach makes sense to me. > > What do you think? > Let me look into details and think it over a bit.
On 2016/11/30 15:10:55, sof wrote: > On 2016/11/30 14:55:58, haraken wrote: > > On 2016/11/30 13:21:39, sof wrote: > > > On 2016/11/30 11:05:13, haraken wrote: > > > > Questions about the bitmap: > > > > > > > > - Are there many interior heap collections? Do you think it's an option to > > > > rewrite HeapHashMap<HeapVector<>, ...> to HeapHashMap<Member<HeapVector>, > > > ...>? > > > > Note that the compound heap collections have caused many complicated > issues > > in > > > > the past, although I think all the issues have been fixed. I'm wondering > if > > it > > > > would be nicer to remove them. > > > > > > > > > > There are some, I don't have an accurate count with the current codebase. > From > > > my position, that wasn't an option that was on the table when doing this -- > > > keeping local patches to fast-changing Source/{core,modules} is a recipe for > > > trouble & tedious work when rebasing up. And imposing constraints for > internal > > > implementation reasons, is not something you do gladly regardless. > > > > > > i'm open though to considering making that change to the Blink codebase now, > > but > > > I don't think the downside of implementing the interior pointer handling is > > > huge. Some extra complexity & bookkeeping, sure. > > > > Yeah, my main concern is the complexity. If it's not that hard to add an > assert > > to detect HeapCollection<HeapCollection> and rewrite it to > > HeapCollection<Member<HeapCollection>>, I might prefer that direction. If it's > > not easy for some reason, the bitmap approach makes sense to me. > > > > What do you think? > > > > Let me look into details and think it over a bit. I'm concerned it isn't workable, as part objects wouldn't by inference be able to contain container types..if the part object is used in a container. Those would have to be located and GC plugin support would be needed to prevent their introduction.
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
On 2016/11/30 15:27:23, sof wrote: > On 2016/11/30 15:10:55, sof wrote: ... > > > > > > > Let me look into details and think it over a bit. > > I'm concerned it isn't workable, as part objects wouldn't by inference be able > to contain container types..if the part object is used in a container. Those > would have to be located and GC plugin support would be needed to prevent their > introduction. PerformanceEntryMap, InspectorDOMAgent::SearchResults, and one in IDBObserver are examples where direct nesting of slots appears, but I don't have a handle on what hides wrt part objects and hash tables.
On 2016/11/30 22:10:43, sof wrote: > On 2016/11/30 15:27:23, sof wrote: > > On 2016/11/30 15:10:55, sof wrote: > ... > > > > > > > > > > Let me look into details and think it over a bit. > > > > I'm concerned it isn't workable, as part objects wouldn't by inference be able > > to contain container types..if the part object is used in a container. Those > > would have to be located and GC plugin support would be needed to prevent > their > > introduction. > > PerformanceEntryMap and InspectorDOMAgent::SearchResults and one in IDBObserver > are examples where direct nesting appears of slots, but I don't have a handle on > what hides wrt part objects and hash tables. Yeah, I was thinking of some runtime check (instead of a clang plugin check) that can detect the following pattern: class X { DISALLOW_ALLOCATION(); HeapVector<Member<>> ...; }; HeapVector<X> ...; // Want to detect this and rewrite it to HeapVector<Member<X>>. HeapVector<X> does not make much sense because the fact that X has a HeapVector already means that X is not a lightweight structure. So whether we use HeapVector<X> or HeapVector<Member<X>> won't make any significant difference. However, yes, as you mentioned, I cannot come up with any runtime check that can detect the pattern.
On 2016/12/01 09:10:05, haraken wrote: > On 2016/11/30 22:10:43, sof wrote: > > On 2016/11/30 15:27:23, sof wrote: > > > On 2016/11/30 15:10:55, sof wrote: > > ... > > > > > > > > > > > > > Let me look into details and think it over a bit. > > > > > > I'm concerned it isn't workable, as part objects wouldn't by inference be > able > > > to contain container types..if the part object is used in a container. Those > > > would have to be located and GC plugin support would be needed to prevent > > their > > > introduction. > > > > PerformanceEntryMap and InspectorDOMAgent::SearchResults and one in > IDBObserver > > are examples where direct nesting appears of slots, but I don't have a handle > on > > what hides wrt part objects and hash tables. > > Yeah, I was thinking of some runtime check (instead of a clang plugin check) > that can detect the following pattern: > > class X { > DISALLOW_ALLOCATION(); > HeapVector<Member<>> ...; > }; > > HeapVector<X> ...; // Want to detect this and rewrite it to > HeapVector<Member<X>>. > > HeapVector<X> does not make much sense because the fact that X has a HeapVector > already means that X is not a lightweight structure. So whether we use > HeapVector<X> or HeapVector<Member<X>> won't make any significant difference. > > However, yes, as you mentioned, I cannot come up with any runtime check that can > detect the pattern. It sounds a bit familiar, we may have looked for a static check like this earlier (in another context.) In any case, I'm not convinced of merit.
On 2016/12/01 21:41:34, sof wrote: > On 2016/12/01 09:10:05, haraken wrote: > > On 2016/11/30 22:10:43, sof wrote: > > > On 2016/11/30 15:27:23, sof wrote: > > > > On 2016/11/30 15:10:55, sof wrote: > > > ... > > > > > > > > > > > > > > > > Let me look into details and think it over a bit. > > > > > > > > I'm concerned it isn't workable, as part objects wouldn't by inference be > > able > > > > to contain container types..if the part object is used in a container. > Those > > > > would have to be located and GC plugin support would be needed to prevent > > > their > > > > introduction. > > > > > > PerformanceEntryMap and InspectorDOMAgent::SearchResults and one in > > IDBObserver > > > are examples where direct nesting appears of slots, but I don't have a > handle > > on > > > what hides wrt part objects and hash tables. > > > > Yeah, I was thinking of some runtime check (instead of a clang plugin check) > > that can detect the following pattern: > > > > class X { > > DISALLOW_ALLOCATION(); > > HeapVector<Member<>> ...; > > }; > > > > HeapVector<X> ...; // Want to detect this and rewrite it to > > HeapVector<Member<X>>. > > > > HeapVector<X> does not make much sense because the fact that X has a > HeapVector > > already means that X is not a lightweight structure. So whether we use > > HeapVector<X> or HeapVector<Member<X>> won't make any significant difference. > > > > However, yes, as you mentioned, I cannot come up with any runtime check that > can > > detect the pattern. > > It sounds a bit familiar, we may have looked for a static check like this > earlier (in another context.) > > In any case, I'm not convinced of merit. Would it be an option to insert DCHECK(isInBackingStorageArena(this)) to the constructor of HeapVector and HeapHashTable?
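A toy, standalone sketch of the kind of constructor-time placement check being floated here; the isInBackingStoreArena() helper and the global arena bounds below are hypothetical stand-ins, not Blink's actual API:

  #include <cassert>
  #include <cstdint>

  // Hypothetical arena bookkeeping; in Blink this would come from page/arena
  // metadata rather than a pair of globals.
  static uintptr_t g_backingArenaBegin = 0;
  static uintptr_t g_backingArenaEnd = 0;

  static bool isInBackingStoreArena(const void* object) {
    uintptr_t address = reinterpret_cast<uintptr_t>(object);
    return address >= g_backingArenaBegin && address < g_backingArenaEnd;
  }

  struct ToyHeapVector {
    ToyHeapVector() {
      // A heap collection nested directly inside another heap collection would
      // have its wrapper object placed inside that collection's backing store,
      // so a constructor-time placement check could flag the pattern. (Whether
      // the real check would assert the predicate or its negation is part of
      // what is being discussed here.)
      assert(!isInBackingStoreArena(this));
    }
  };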
On 2016/12/02 04:30:53, haraken wrote: > On 2016/12/01 21:41:34, sof wrote: > > On 2016/12/01 09:10:05, haraken wrote: > > > On 2016/11/30 22:10:43, sof wrote: > > > > On 2016/11/30 15:27:23, sof wrote: > > > > > On 2016/11/30 15:10:55, sof wrote: > > > > ... > > > > > > > > > > > > > > > > > > > Let me look into details and think it over a bit. > > > > > > > > > > I'm concerned it isn't workable, as part objects wouldn't by inference > be > > > able > > > > > to contain container types..if the part object is used in a container. > > Those > > > > > would have to be located and GC plugin support would be needed to > prevent > > > > their > > > > > introduction. > > > > > > > > PerformanceEntryMap and InspectorDOMAgent::SearchResults and one in > > > IDBObserver > > > > are examples where direct nesting appears of slots, but I don't have a > > handle > > > on > > > > what hides wrt part objects and hash tables. > > > > > > Yeah, I was thinking of some runtime check (instead of a clang plugin check) > > > that can detect the following pattern: > > > > > > class X { > > > DISALLOW_ALLOCATION(); > > > HeapVector<Member<>> ...; > > > }; > > > > > > HeapVector<X> ...; // Want to detect this and rewrite it to > > > HeapVector<Member<X>>. > > > > > > HeapVector<X> does not make much sense because the fact that X has a > > HeapVector > > > already means that X is not a lightweight structure. So whether we use > > > HeapVector<X> or HeapVector<Member<X>> won't make any significant > difference. > > > > > > However, yes, as you mentioned, I cannot come up with any runtime check that > > can > > > detect the pattern. > > > > It sounds a bit familiar, we may have looked for a static check like this > > earlier (in another context.) > > > > In any case, I'm not convinced of merit. > > Would it be an option to insert DCHECK(isInBackingStorageArena(this)) to the > constructor of HeapVector and HeapHashTable? To my mind, a run-time vs static check isn't where this one pivots - the Oilpan-specific restriction doesn't need to be there and wouldn't make the user happier.
I have just reviewed 50% of the CL -- let me take a look at the rest on Monday. Maybe I have added too many nit-picky comments about naming. Feel free to ignore :) > > > > > > > > > > > > > > Let me look into details and think it over a bit. > > > > > > > > > > > > I'm concerned it isn't workable, as part objects wouldn't by inference > > be > > > > able > > > > > > to contain container types..if the part object is used in a container. > > > Those > > > > > > would have to be located and GC plugin support would be needed to > > prevent > > > > > their > > > > > > introduction. > > > > > > > > > > PerformanceEntryMap and InspectorDOMAgent::SearchResults and one in > > > > IDBObserver > > > > > are examples where direct nesting appears of slots, but I don't have a > > > handle > > > > on > > > > > what hides wrt part objects and hash tables. > > > > > > > > Yeah, I was thinking of some runtime check (instead of a clang plugin > check) > > > > that can detect the following pattern: > > > > > > > > class X { > > > > DISALLOW_ALLOCATION(); > > > > HeapVector<Member<>> ...; > > > > }; > > > > > > > > HeapVector<X> ...; // Want to detect this and rewrite it to > > > > HeapVector<Member<X>>. > > > > > > > > HeapVector<X> does not make much sense because the fact that X has a > > > HeapVector > > > > already means that X is not a lightweight structure. So whether we use > > > > HeapVector<X> or HeapVector<Member<X>> won't make any significant > > difference. > > > > > > > > However, yes, as you mentioned, I cannot come up with any runtime check > that > > > can > > > > detect the pattern. > > > > > > It sounds a bit familiar, we may have looked for a static check like this > > > earlier (in another context.) > > > > > > In any case, I'm not convinced of merit. > > > > Would it be an option to insert DCHECK(isInBackingStorageArena(this)) to the > > constructor of HeapVector and HeapHashTable? > > To my mind, a run-time vs static check isn't where this one pivots - the > Oilpan-specific restriction doesn't need to be there and wouldn't make the user > happier. My concern is the complexity (i.e., technical debt). For example, if we lose you and me (we had actually lost you for a couple of months :-), it will make it difficult to maintain the code base. So I'd prefer not introducing complexity if the complexity is something we can avoid without adding too much trouble. Actually I'm on the fence about this. If you really want the bitmap, I'm okay. BTW, can the GC plugin check the tracing validity for interior HeapVectors/HeapHashMaps embedded in a complicated way (through a part-of-object)? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in:115: HeapCompaction status=experimental BlinkGCCompaction ? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/Heap.cpp (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Heap.cpp:423: compaction()->registerMovingObjectReference(slot); DCHECK(*slot) ? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Heap.cpp:434: DCHECK(slot); Do you want to mean DCHECK(*slot) ? 
https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:498: #if DEBUG_LOG_HEAP_COMPACTION_RUNNING_TIME if (!m_doCompact) return; ? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:73: #endif Consider cleaning up these macros before landing to ToT :) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:85: return std::unique_ptr<HeapCompact>(new HeapCompact); wrapUnique https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:102: // Returns true if the ongoing GC will perform compaction. Update the comment. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:104: return m_doCompact && (m_compactableHeaps & (0x1u << arenaIndex)); Avoid hard-coding 0x1u. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:156: size_t freeSize, totalArenaSize totalFreeListSize to be consistent with the name in ThreadState.cpp. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:170: static bool scheduleCompactionGCForTesting(bool); Shall we simply forbid the heap compaction on conservative GCs? If we enable the heap compaction on a conservative GC, we need to fix up a backing storage pointer on the stack. This will be problematic because the backing storage pointer might be a false-positive pointer. In any case, given that conservative GCs are rare in UMAs, I'd prefer just disabling the heap compaction and avoid the complexity. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:180: static const int kCompactIntervalThreshold = 10; kGCIntervalThresholdSinceLastCompaction (c.f., m_gcCountSinceLastCompaction) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:184: static const size_t kFreeThreshold = 512 * 1024; kFreeListSizeThreashold https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:203: int m_threadCount; m_mutex => m_threadSyncronizationMutex m_finished => m_threadSyncronizationCondition m_threadCount => m_threadSyncronizationCount https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:209: unsigned m_compactableHeaps; m_compactableHeaps => m_compactingArenas ? (c.f., isCompactingArena()) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapPage.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... 
third_party/WebKit/Source/platform/heap/HeapPage.h:518: size_t sweepCompact(NormalPage*&, BasePage**, size_t); sweepAndCompact ? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.h:673: size_t size() const; size => freeListSize ? (since size() is used in various meanings in Oilpan.) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.h:774: void sweepCompact(); sweepAndCompact ? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h:65: const Visitor::MarkingMode m_markingMode; Would you help me understand why you need to add this member whereas Visitor already has m_markingMode? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); Would it be possible to refactor code so that HeapHashTable (more specifically HeapHashTable with Weak members) directly calls markNoTracing and registerBackingStoreReference? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1038: if (isMainThread()) { Maybe can we move this code into checkIfCompacting()? See my comment on Visitor::create in this file. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1044: size_t freeSize = 0; heapSize => totalArenaSize freeSize => totalFreeListSize https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1049: ++i) { Add an assertion somewhere to check that all backing store arenas are in [Vector1ArenaIndex, HashTableArenaIndex]. Someone might change the enum order in the future. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1063: void ThreadState::compact() { Add a TODO that the heap compaction should probably be enabled only on idle GCs. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1064: if (!heap().compaction()->isCompacting()) When can this happen? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1066: Let's enter SweepForbiddenScope and ScriptForbiddenIfMainThreadScope. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1079: ->sweepCompact(); Can we move BlinkGC::HashTableArenaIndex into the following loop? https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... 
third_party/WebKit/Source/platform/heap/ThreadState.cpp:1723: std::unique_ptr<Visitor> visitor = Visitor::create(this, gcType); Maybe can we use this visitor creation down to line 1734, so that we can set the proper marking mode when we create the visitor? This CL is introducing a couple of setXXX() methods, which makes the oilpan infrastructure more state-dependent (hard to maintain). For example, checkIfCompacting changes the marking mode of the visitor by setMarkingMode. makeConsistentForGC registers the heap state by setHeapResidency. I'd be happy if we could remove such state-dependent code. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/Visitor.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Visitor.h:303: GlobalMarkCompacting, Add a comment.
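As an aside on the arena bookkeeping touched on in the comments above (the isCompactingArena() bit test and the [Vector1ArenaIndex, HashTableArenaIndex] range assumption), a minimal standalone sketch; the enum values and names are simplified stand-ins, not the real BlinkGC indices:

  // Simplified stand-ins for the BlinkGC arena indices; the real enum has
  // more entries.
  enum ArenaIndices {
    NormalPageArenaIndex,
    Vector1ArenaIndex,
    Vector2ArenaIndex,
    InlineVectorArenaIndex,
    HashTableArenaIndex,
    NumberOfArenas,
  };

  // Guard against the enum being reordered: the compactable backing-store
  // arenas are assumed to form the contiguous range
  // [Vector1ArenaIndex, HashTableArenaIndex].
  static_assert(Vector1ArenaIndex < HashTableArenaIndex &&
                    HashTableArenaIndex < NumberOfArenas,
                "backing store arena indices must stay contiguous and in range");

  class ToyHeapCompact {
   public:
    void setCompactable(int arenaIndex) {
      m_compactableArenas |= 1u << arenaIndex;
    }
    bool isCompactingArena(int arenaIndex) const {
      return m_doCompact && (m_compactableArenas & (1u << arenaIndex));
    }
    void enableCompaction() { m_doCompact = true; }

   private:
    bool m_doCompact = false;
    unsigned m_compactableArenas = 0;  // one bit per arena index
  };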
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: chromeos_amd64-generic_chromium_compile_only_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromeos_amd64-...)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in:115: HeapCompaction status=experimental On 2016/12/02 12:43:19, haraken wrote: > > BlinkGCCompaction ? I've kept the feature name. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/Heap.cpp (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Heap.cpp:423: compaction()->registerMovingObjectReference(slot); On 2016/12/02 12:43:19, haraken wrote: > > DCHECK(*slot) ? Extended to (slot && *slot). https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Heap.cpp:434: DCHECK(slot); On 2016/12/02 12:43:19, haraken wrote: > > Do you want to mean DCHECK(*slot) ? Done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:498: #if DEBUG_LOG_HEAP_COMPACTION_RUNNING_TIME On 2016/12/02 12:43:19, haraken wrote: > > if (!m_doCompact) > return; > > ? Added, along with renaming the method to startThreadCompaction() (similarly for the finish counterpart.) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:73: #endif On 2016/12/02 12:43:20, haraken wrote: > > Consider cleaning up these macros before landing to ToT :) Added TODO as an additional reminder to do so. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:85: return std::unique_ptr<HeapCompact>(new HeapCompact); On 2016/12/02 12:43:20, haraken wrote: > > wrapUnique Done, not sure I "get" the benefits wrapUnique(). https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:102: // Returns true if the ongoing GC will perform compaction. On 2016/12/02 12:43:20, haraken wrote: > > Update the comment. Done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:104: return m_doCompact && (m_compactableHeaps & (0x1u << arenaIndex)); On 2016/12/02 12:43:20, haraken wrote: > > Avoid hard-coding 0x1u. That one-liner & idiom is as clear as can be; I don't understand what style rule it falls short of by using "0x1u" to directly represent a bit. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:156: size_t freeSize, On 2016/12/02 12:43:20, haraken wrote: > > totalArenaSize > totalFreeListSize > > to be consistent with the name in ThreadState.cpp. Done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... 
third_party/WebKit/Source/platform/heap/HeapCompact.h:170: static bool scheduleCompactionGCForTesting(bool); On 2016/12/02 12:43:20, haraken wrote: > > Shall we simply forbid the heap compaction on conservative GCs? > > If we enable the heap compaction on a conservative GC, we need to fix up a > backing storage pointer on the stack. This will be problematic because the > backing storage pointer might be a false-positive pointer. > > In any case, given that conservative GCs are rare in UMAs, I'd prefer just > disabling the heap compaction and avoid the complexity. Yes, we absolutely must disable compaction during conservative GCs - supporting that is another level of complexity, which fortunately we don't need to take on (now or with more extensive heap compaction.) HeapCompact::checkIfCompacting() currently checks and prevents compaction if any thread has heap pointers on the stack (or someone requested a conservative GC.) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:180: static const int kCompactIntervalThreshold = 10; On 2016/12/02 12:43:20, haraken wrote: > > kGCIntervalThresholdSinceLastCompaction (c.f., m_gcCountSinceLastCompaction) kGCCountSinceLastCompactionThreshold https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:184: static const size_t kFreeThreshold = 512 * 1024; On 2016/12/02 12:43:20, haraken wrote: > > kFreeListSizeThreashold Done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:203: int m_threadCount; On 2016/12/02 12:43:19, haraken wrote: > > m_mutex => m_threadSyncronizationMutex > m_finished => m_threadSyncronizationCondition > m_threadCount => m_threadSyncronizationCount "threadSynchronization" is implied for these abstractions, so using it in these field names seems superfluous. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.h:209: unsigned m_compactableHeaps; On 2016/12/02 12:43:20, haraken wrote: > > m_compactableHeaps => m_compactingArenas ? (c.f., isCompactingArena()) Done, replaced the use of (sub)heaps with arenas, here and elsewhere. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapPage.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.h:673: size_t size() const; On 2016/12/02 12:43:20, haraken wrote: > > size => freeListSize ? (since size() is used in various meanings in Oilpan.) > That's better, done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.h:774: void sweepCompact(); On 2016/12/02 12:43:20, haraken wrote: > > sweepAndCompact ? Done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h:65: const Visitor::MarkingMode m_markingMode; On 2016/12/02 12:43:20, haraken wrote: > > Would you help me understand why you need to add this member whereas Visitor > already has m_markingMode? 
InlinedGlobalMarkingVisitor isn't a Visitor, and the marking mode is needed to correctly implement MarkingVisitorImpl::registerMoving*(). https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); On 2016/12/02 12:43:20, haraken wrote: > > Would it be possible to refactor code so that HeapHashTable (more specifically > HeapHashTable with Weak members) directly calls markNoTracing and > registerBackingStoreReference? I don't think that's possible, we don't want to eagerly mark a weak backing store object. Or, to put it differently, I don't plan to work on general weak processing changes in this CL :) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1038: if (isMainThread()) { On 2016/12/02 12:43:20, haraken wrote: > > Maybe can we move this code into checkIfCompacting()? See my comment on > Visitor::create in this file. Moved the size sampling into the heap compaction code, HeapCompact::updateHeapResidency() which is called while checking if compaction should go ahead. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1044: size_t freeSize = 0; On 2016/12/02 12:43:21, haraken wrote: > > heapSize => totalArenaSize > freeSize => totalFreeListSize Done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1049: ++i) { On 2016/12/02 12:43:20, haraken wrote: > > Add an assertion somewhere to check that all backing store arenas are in > [Vector1ArenaIndex, HashTableArenaIndex]. Someone might change the enum order in > the future. Added (in HeapCompact::updateHeapResidency()) https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1063: void ThreadState::compact() { On 2016/12/02 12:43:20, haraken wrote: > > Add a TODO that the heap compaction should probably be enabled only on idle GCs. checkIfCompacting() is who decides this; added a TODO there. Looking at run-time behavior, we may have to schedule some extra idle GCs if compaction is overdue, to make that work out & have compaction be effective. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1064: if (!heap().compaction()->isCompacting()) On 2016/12/02 12:43:20, haraken wrote: > > When can this happen? Most of the time? :) compact() is called unconditionally from preSweep(). https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1066: On 2016/12/02 12:43:20, haraken wrote: > > Let's enter SweepForbiddenScope and ScriptForbiddenIfMainThreadScope. Makes good sense, like the other sweeping passes. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... 
third_party/WebKit/Source/platform/heap/ThreadState.cpp:1079: ->sweepCompact(); On 2016/12/02 12:43:20, haraken wrote: > > Can we move BlinkGC::HashTableArenaIndex into the following loop? Done. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1723: std::unique_ptr<Visitor> visitor = Visitor::create(this, gcType); On 2016/12/02 12:43:20, haraken wrote: > > Maybe can we use this visitor creation down to line 1734, so that we can set the > proper marking mode when we create the visitor? > > This CL is introducing a couple of setXXX() methods, which makes the oilpan > infrastructure more state-dependent (hard to maintain). For example, > checkIfCompacting changes the marking mode of the visitor by setMarkingMode. > makeConsistentForGC registers the heap state by setHeapResidency. I'd be happy > if we could remove such state-dependent code. > I've teased apart the steps a bit to allow that, but it required making the locking out of GCs explicit via (yet another) scoping object. Perhaps that's slightly better overall, I don't see a clear winner between the two approaches. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/Visitor.h (left): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Visitor.h:359: ThreadState* m_state; I noticed that this field is unused; now removed. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/Visitor.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/Visitor.h:303: GlobalMarkCompacting, On 2016/12/02 12:43:21, haraken wrote: > > Add a comment. Done.
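To make the residency sampling and thresholds discussed above concrete, a rough standalone sketch of the shape of updateHeapResidency(); the Arena interface, names, and exact policy below are simplified assumptions, and the real heuristics are expected to evolve:

  #include <cstddef>
  #include <vector>

  // Minimal stand-in for a backing-store arena.
  struct Arena {
    size_t arenaSize() const { return m_arenaSize; }
    size_t freeListSize() const { return m_freeListSize; }
    size_t m_arenaSize = 0;
    size_t m_freeListSize = 0;
  };

  class ToyHeapCompact {
   public:
    // Sample how much of the compactable arenas currently sits on free lists,
    // and decide whether the upcoming GC should also compact.
    bool updateHeapResidency(const std::vector<Arena*>& compactableArenas) {
      size_t totalArenaSize = 0;
      size_t totalFreeListSize = 0;
      for (const Arena* arena : compactableArenas) {
        totalArenaSize += arena->arenaSize();  // kept for logging/policy use
        totalFreeListSize += arena->freeListSize();
      }
      m_freeListSize = totalFreeListSize;
      (void)totalArenaSize;
      return ++m_gcCountSinceLastCompaction >
                 kGCCountSinceLastCompactionThreshold &&
             m_freeListSize > kFreeListSizeThreshold;
    }

   private:
    static const int kGCCountSinceLastCompactionThreshold = 10;
    static const size_t kFreeListSizeThreshold = 512 * 1024;
    int m_gcCountSinceLastCompaction = 0;
    size_t m_freeListSize = 0;
  };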
On 2016/12/02 12:43:21, haraken wrote: > I have just reviewed 50% of the CL -- let me take a look at the rest on Monday. > Maybe I have added too many nit-picky comments about naming. Feel free to ignore > :) > > > > > > > > > > > > > > > > > Let me look into details and think it over a bit. > > > > > > > > > > > > > > I'm concerned it isn't workable, as part objects wouldn't by > inference > > > be > > > > > able > > > > > > > to contain container types..if the part object is used in a > container. > > > > Those > > > > > > > would have to be located and GC plugin support would be needed to > > > prevent > > > > > > their > > > > > > > introduction. > > > > > > > > > > > > PerformanceEntryMap and InspectorDOMAgent::SearchResults and one in > > > > > IDBObserver > > > > > > are examples where direct nesting appears of slots, but I don't have a > > > > handle > > > > > on > > > > > > what hides wrt part objects and hash tables. > > > > > > > > > > Yeah, I was thinking of some runtime check (instead of a clang plugin > > check) > > > > > that can detect the following pattern: > > > > > > > > > > class X { > > > > > DISALLOW_ALLOCATION(); > > > > > HeapVector<Member<>> ...; > > > > > }; > > > > > > > > > > HeapVector<X> ...; // Want to detect this and rewrite it to > > > > > HeapVector<Member<X>>. > > > > > > > > > > HeapVector<X> does not make much sense because the fact that X has a > > > > HeapVector > > > > > already means that X is not a lightweight structure. So whether we use > > > > > HeapVector<X> or HeapVector<Member<X>> won't make any significant > > > difference. > > > > > > > > > > However, yes, as you mentioned, I cannot come up with any runtime check > > that > > > > can > > > > > detect the pattern. > > > > > > > > It sounds a bit familiar, we may have looked for a static check like this > > > > earlier (in another context.) > > > > > > > > In any case, I'm not convinced of merit. > > > > > > Would it be an option to insert DCHECK(isInBackingStorageArena(this)) to the > > > constructor of HeapVector and HeapHashTable? > > > > To my mind, a run-time vs static check isn't where this one pivots - the > > Oilpan-specific restriction doesn't need to be there and wouldn't make the > user > > happier. > > My concern is the complexity (i.e., technical debt). For example, if we lose you > and me (we had actually lost you for a couple of months :-), it will make it > difficult to maintain the code base. So I'd prefer not introducing complexity if > the complexity is something we can avoid without adding too much trouble. > Fair concerns, my reasoning is as follows: - the tracking of interior pointers is not hard to follow implementation-wise. - it happens rarely, and doesn't add much overhead otherwise. - if all arenas will be compacted eventually, the kind of nested relocations we're seeing here will need to be handled regardless. - consequently, this representation restriction for containers would eventually not be needed. - supporting such a restriction safely&completely, is not trivial. > Actually I'm on the fence about this. If you really want the bitmap, I'm okay. > I would prefer to support interior/nested relocations initially. > BTW, can the GC plugin check the tracing validity for interior > HeapVectors/HeapHashMaps embedded in a complicated way (through a > part-of-object)? > The part object declarations will be checked wrt valid trace handling, but there's no extra checking done by the GC plugin for heap collection values. 
HeapVector does have a static assert to disallow non-traceable part objects, however.
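For readers unfamiliar with that kind of assert, a generic toy showing the shape of a trait-based static_assert; this is not Blink's actual trait machinery:

  #include <type_traits>
  #include <utility>

  // Toy trait: does T expose a trace(...) method?
  template <typename T, typename = void>
  struct IsTraceable : std::false_type {};

  template <typename T>
  struct IsTraceable<T, decltype(std::declval<T&>().trace(nullptr), void())>
      : std::true_type {};

  template <typename T>
  class ToyHeapVector {
    static_assert(IsTraceable<T>::value,
                  "part objects stored by value in a heap vector must be traceable");
  };

  struct TraceablePart {
    void trace(void* visitor) {}
  };
  struct PlainPart {};

  // ToyHeapVector<TraceablePart> ok;   // compiles
  // ToyHeapVector<PlainPart> notOk;    // would trip the static_assert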
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:29: return std::unique_ptr<MovableObjectFixups>(new MovableObjectFixups); wrapUnique https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:54: // Slot resides on a compactable heap's page. Add DCHECK(slotPage->contains(slotAddress)). https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:68: if (HeapCompact::isCompactableArena(slotPage->arena()->arenaIndex())) When can this return false? m_relocatablePages contains only pages of compacting arenas, doesn't it? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:74: void addFixupCallback(MovableReference reference, I think you're assuming that addFixupCallback() is coupled with add(). Then can we add DCHECK(m_fixups.contains(reference))? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:82: size_t size() const { return m_fixups.size(); } size() => numberOfFixups() https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:84: void relocateInteriorFixups(Address from, Address to, size_t size) { size => payloadSize https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:94: for (size_t i = 0; i < size; i += sizeof(void*)) { i => offset https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:97: MovableReference* fromRef = reinterpret_cast<MovableReference*>(from + i); fromRef => slot This is not a reference. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:110: continue; Can we check that it->value is equal to |to+i|? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:131: m_interiorFixups.set(slot, to); Why do we need to update m_interiorFixups here? I was assuming that all interior pointers should be handled by relocateInteriorFixups. In other words, is it possible to remove line 126 - 136? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:133: LOG_HEAP_COMPACTION("Redirected slot: %p => %p\n", slot, slotLocation); Can we add DCHECK(slotLocation == to)? In the first place, I'm wondering how it's possible to hit the 'else' branch. How is it possible that relocate() is called multiple times for the same |from| address? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:140: // and referenced. Specifically, how is it possible that destructor or pre-finalizer mutates the slot? Expansion/Shrinking should not change the slot... What happens if a destructor allocates a new HeapVector/HeapHashSet? Is it correctly handled? 
In general, I'm a bit afraid that this (i.e., the case where *slot is updated before the heap compaction runs) may lead to security exploits. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:144: if (UNLIKELY(*slot != from)) { Does '*slot != from' happen only when the *slot is weakly processed? Then can we add DCHECK(!*slot)? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:152: size_t size = 0; size => payloadSize https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:172: if (UNLIKELY(it != m_interiorFixups.end() && !it->value)) Would you help me understand this branch? I don't fully understand what '!it->value' means. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:175: addInteriorMapping(interior); You can inline the method. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:187: void addRelocation(MovableReference* slot) { I'm wondering why we need to have so different code path between the "first" slot (=add()) and a second or later slots (=addRelocation()). I think this is because add() is heavily assuming that there is a 1:1 mapping between MovableReference and MovableReference*. So we need a totally different code path for a 1:N case. Maybe can we consider supporting the 1:N case by default and removing the code duplication? (Do you think it will slow down the heap compaction unnecessarily?) https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:286: // around to be able to support this should the need arise.. You can say it's needed in Opera. Otherwise, someone might remove the code :) https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:296: const char* gcReasonString(BlinkGC::GCReason reason) { We should share this function with ThreadState.cpp. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:405: m_freeListSize > kFreeListSizeThreshold); Is it enough to look at only m_freeListSize? Even if m_freeListSize is huge, it's not a problem if the number of free lists is small (i.e., not fragmented). I guess we need to take into account the number of free lists, because the fragmentation is a key factor for the heap compaction. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:439: if (!*slot) This should not happen, right? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:462: size_t freeArenaSize = 0; totalFreeListSize https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:468: for (int i = BlinkGC::Vector1ArenaIndex; i <= BlinkGC::HashTableArenaIndex; I'd prefer encapsulating the details of arenas in ThreadState.cpp. Instead of iterating this loop in HeapCompact, can we introduce ThreadState::getArenaStats(size_t* arenaSize, size_t* freeListSize)? 
Also I might want to move m_compactableArenas to ThreadState.h. Remember that until we ship the per-thread heap, arenas are per-ThreadState things, not per-HeapCompact things (per-ThreadHeap things). https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:472: size_t arenaSize = arena->arenaSize(); It looks like arenaSize is unused except debug printing. Can we remove it? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:473: size_t freeSize = arena->freeListSize(); freeListSize https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:488: m_freeListSize = freeArenaSize; Instead of making m_freeListSize a member variable, can we just make it a return value of updateHeapResidency? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:457: BasePage* p = m_firstPage; page https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:476: Add DCHECK(!hasCurrentAllocationArea()). https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:490: continue; Don't we need to call: page->unlink(&m_firstUnsweptPage); page->link(&m_firstPage); ? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:506: normalPage->sweepAndCompact(nextPage, &m_firstPage, allocationPoint); Honestly speaking, it's very hard to understand what sweepAndCompact is doing... For example, why do we need the |nextPage|'s chain? I guess |allocationPoint| is enough to keep track of where we're allocating. If we want to allocate from the next page, we just need to find the next page in the |m_firstPage|'s chain (i.e., pageFromAddress(allocationPoint)->nextPage())? If the performance permits, we might want to consider doing the heap compaction in two paths. In the first path, we just sweep the pages using existing code. In the second path, we scan the chain of m_firstPage and compact the pages. The other thing we might want to consider is if we really need to compact objects over pages (which is making things a lot complicated). Maybe would it be enough to compact objects inside one page and decommit unused system pages in the page? Given that the mutator will anyway allocate new objects after the heap compaction, we won't necessarily need to compact perfectly. Maybe a better idea is: 1) In the first path, just sweep the objects. But while sweeping, measure how much each page is fragmented. 2) In the second path, compact only pages that are fragmented a lot. (In any case, my point is not that I want to make the compaction smarter but that I want to simplify the compaction logic.) https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:627: return page; return new... https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... 
third_party/WebKit/Source/platform/heap/HeapPage.cpp:1480: compact->movedObject(header->payload(), movedObject => relocate ? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1721: heap().compaction()->checkIfCompacting(this, gcType, reason); I'd prefer reorganizing this as follows:

  bool enableHeapCompaction = heap().compaction()->shouldEnableHeapCompaction();
  // This doesn't have a side effect on heap().compaction().
  if (enableHeapCompaction) {
    gcType = GCWithSweepCompaction;
    heap().compaction()->initialize();  // Set up the compaction
  }
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Thanks for detailed review; flushing my buffer with followups to most. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:29: return std::unique_ptr<MovableObjectFixups>(new MovableObjectFixups); On 2016/12/05 11:27:46, haraken wrote: > > wrapUnique Done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:54: // Slot resides on a compactable heap's page. On 2016/12/05 11:27:46, haraken wrote: > > Add DCHECK(slotPage->contains(slotAddress)). Done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:68: if (HeapCompact::isCompactableArena(slotPage->arena()->arenaIndex())) On 2016/12/05 11:27:46, haraken wrote: > > When can this return false? m_relocatablePages contains only pages of compacting > arenas, doesn't it? > addRelocatablePage() doesn't currently take m_compactableArenas into account before extending m_relocatablePages, the check is here instead. Better if addRelocatablePage() is discriminating; now done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:74: void addFixupCallback(MovableReference reference, On 2016/12/05 11:27:47, haraken wrote: > > I think you're assuming that addFixupCallback() is coupled with add(). Then can > we add DCHECK(m_fixups.contains(reference))? > Not readily as that map is over slots, this is a reference. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:82: size_t size() const { return m_fixups.size(); } On 2016/12/05 11:27:46, haraken wrote: > > size() => numberOfFixups() A generic name like that makes it harder to see that it is unused.. :) Removed. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:84: void relocateInteriorFixups(Address from, Address to, size_t size) { On 2016/12/05 11:27:47, haraken wrote: > > size => payloadSize "size" is accurate as-is; we're not using "payload" terminology here. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:94: for (size_t i = 0; i < size; i += sizeof(void*)) { On 2016/12/05 11:27:47, haraken wrote: > > i => offset Renamed. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:97: MovableReference* fromRef = reinterpret_cast<MovableReference*>(from + i); On 2016/12/05 11:27:47, haraken wrote: > > fromRef => slot > > This is not a reference. Done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:110: continue; On 2016/12/05 11:27:46, haraken wrote: > > Can we check that it->value is equal to |to+i|? No, it->value would contain the (relocated) reference that the slot has been updated with already, in which case there is no relocation forwarding to be done for this pointer. 
If not, we proceed to write |to + i| into this map, which the redirect handling in relocate() will make use of when later on the backing store object for the interior slot has been moved & compacted. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:131: m_interiorFixups.set(slot, to); On 2016/12/05 11:27:46, haraken wrote: > > Why do we need to update m_interiorFixups here? I was assuming that all interior > pointers should be handled by relocateInteriorFixups. In other words, is it > possible to remove line 126 - 136? > No, we don't know which is processed first -- the interior backing store or the backing stores that the slots point to. This is a key step. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:133: LOG_HEAP_COMPACTION("Redirected slot: %p => %p\n", slot, slotLocation); On 2016/12/05 11:27:46, haraken wrote: > > Can we add DCHECK(slotLocation == to)? > > In the first place, I'm wondering how it's possible to hit the 'else' branch. > How is it possible that relocate() is called multiple times for the same |from| > address? That wouldn't be appropriate; if m_interiorFixups is set here, it points to where the parent/interior backing store for the slot ended up (a "redirection"), so we adjust the effective slot location right here. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:140: // and referenced. On 2016/12/05 11:27:46, haraken wrote: > > Specifically, how is it possible that destructor or pre-finalizer mutates the > slot? Expansion/Shrinking should not change the slot... > The slot contains the pointer to the backing store; why wouldn't it be updated upon re-allocation of the hash table? > What happens if a destructor allocates a new HeapVector/HeapHashSet? Is it > correctly handled? > This CL doesn't bring a difference in behavior; no compaction has yet to happen, and it works off the chain of unswept pages. New allocations will trigger a new page allocation (which will be put on m_firstPage.) > In general, I'm a bit afraid that this (i.e., the case where *slot is updated > before the heap compaction runs) may lead to security exploits. > Given the above, have your concerns been reduced? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:144: if (UNLIKELY(*slot != from)) { On 2016/12/05 11:27:46, haraken wrote: > > Does '*slot != from' happen only when the *slot is weakly processed? Then can we > add DCHECK(!*slot)? > > The first half of the comment block explains why that isn't the only case (that case may also include explicit clearing of a slot also, via clear() usage.) https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:152: size_t size = 0; On 2016/12/05 11:27:46, haraken wrote: > > size => payloadSize "size" is fine as-is. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:172: if (UNLIKELY(it != m_interiorFixups.end() && !it->value)) On 2016/12/05 11:27:46, haraken wrote: > > Would you help me understand this branch? I don't fully understand what > '!it->value' means. 
nullptr is what we initially add (see line just below); if the ephemeron fix point ends up running multiple times, it'll run the delayed marking & registration, which is what will trigger this. Very rarely. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:175: addInteriorMapping(interior); On 2016/12/05 11:27:46, haraken wrote: > > You can inline the method. Done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:187: void addRelocation(MovableReference* slot) { On 2016/12/05 11:27:46, haraken wrote: > > I'm wondering why we need to have so different code path between the "first" > slot (=add()) and a second or later slots (=addRelocation()). > > I think this is because add() is heavily assuming that there is a 1:1 mapping > between MovableReference and MovableReference*. So we need a totally different > code path for a 1:N case. Maybe can we consider supporting the 1:N case by > default and removing the code duplication? (Do you think it will slow down the > heap compaction unnecessarily?) Your assumption is correct, backing stores are "linearly" referenced by their container wrapper... if we can assume no stack & register references when running the compaction. The representation & approach for "external relocations" has too much overhead for it to be used for heap compaction across all arenas. The mechanism was something I added mainly to have a backup, should there be some (but not many) extra & external relocations that had to be tracked and updated, where a bit of extra overhead is acceptable. It has turned out not to be needed so far, but I've kept the support around just in case. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:286: // around to be able to support this should the need arise.. On 2016/12/05 11:27:46, haraken wrote: > > You can say it's needed in Opera. Otherwise, someone might remove the code :) It's not, no one uses it. I'm fine with letting it go for now, but if ScriptWrappableVisitor starts lazily marking backing stores also, a mechanism like this will be needed. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:296: const char* gcReasonString(BlinkGC::GCReason reason) { On 2016/12/05 11:27:46, haraken wrote: > > We should share this function with ThreadState.cpp. Re-added, it was dropped there as a public method at some point. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:405: m_freeListSize > kFreeListSizeThreshold); On 2016/12/05 11:27:46, haraken wrote: > > Is it enough to look at only m_freeListSize? Even if m_freeListSize is huge, > it's not a problem if the number of free lists is small (i.e., not fragmented). > I guess we need to take into account the number of free lists, because the > fragmentation is a key factor for the heap compaction. It is possible to have a different & more discriminating policy, I hope we can derive an effective one over time. But you have to start somewhere reasonable (and not block progress figuring out the "right" policy.)
Improving & expanding the compaction policy is one reason why it makes more sense to keep the m_freeListSize state and other compaction details local to HeapCompact, and not let that be represented & controlled via ThreadState. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:439: if (!*slot) On 2016/12/05 11:27:46, haraken wrote: > > This should not happen, right? No, we now insist - removed. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:462: size_t freeArenaSize = 0; On 2016/12/05 11:27:46, haraken wrote: > > totalFreeListSize Done, but there's arguably over-focus on naming here/now. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:468: for (int i = BlinkGC::Vector1ArenaIndex; i <= BlinkGC::HashTableArenaIndex; On 2016/12/05 11:27:46, haraken wrote: > > I'd prefer encapsulating the details of arenas in ThreadState.cpp. Instead of > iterating this loop in HeapCompact, can we introduce > ThreadState::getArenaStats(size_t* arenaSize, size_t* freeListSize)? > > Also I might want to move m_compactableArenas to ThreadState.h. > This loop & sampling was moved from ThreadState in the last patchset, so this is kind of frustrating feedback. I fail to see the substantive improvement in design by shuffling it back somehow - the compactor should know (more) what it takes for something to be compactable, so relying on ThreadState to communicate all that info across instead, doesn't reduce coupling. > Remember that until we ship the per-thread heap, arenas are per-ThreadState > things, not per-HeapCompact things (per-ThreadHeap things). Given that this is called per-ThreadState, there is not much chance for confusion. ThreadState has already far too much state, it doesn't need more. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:472: size_t arenaSize = arena->arenaSize(); On 2016/12/05 11:27:46, haraken wrote: > > It looks like arenaSize is unused except debug printing. Can we remove it? ? Not so, it is added to totalArenaSize as well. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:473: size_t freeSize = arena->freeListSize(); On 2016/12/05 11:27:46, haraken wrote: > > freeListSize Alright. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:488: m_freeListSize = freeArenaSize; On 2016/12/05 11:27:47, haraken wrote: > > Instead of making m_freeListSize a member variable, can we just make it a return > value of updateHeapResidency? I would prefer to keep it, see above TODO and remarks about compaction policy. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:457: BasePage* p = m_firstPage; On 2016/12/05 11:27:47, haraken wrote: > > page Done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... 
third_party/WebKit/Source/platform/heap/HeapPage.cpp:476: On 2016/12/05 11:27:47, haraken wrote: > > Add DCHECK(!hasCurrentAllocationArea()). Done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:490: continue; On 2016/12/05 11:27:47, haraken wrote: > > Don't we need to call: > > page->unlink(&m_firstUnsweptPage); > page->link(&m_firstPage); > > ? Good catch; the large object case is dead code, however. Added a DCHECK() instead. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:506: normalPage->sweepAndCompact(nextPage, &m_firstPage, allocationPoint); On 2016/12/05 11:27:47, haraken wrote: > > Honestly speaking, it's very hard to understand what sweepAndCompact is doing... > For example, why do we need the |nextPage|'s chain? I guess |allocationPoint| is > enough to keep track of where we're allocating. If we want to allocate from the > next page, we just need to find the next page in the |m_firstPage|'s chain > (i.e., pageFromAddress(allocationPoint)->nextPage())? > > If the performance permits, we might want to consider doing the heap compaction > in two paths. In the first path, we just sweep the pages using existing code. In > the second path, we scan the chain of m_firstPage and compact the pages. > > The other thing we might want to consider is if we really need to compact > objects over pages (which is making things a lot complicated). Maybe would it be > enough to compact objects inside one page and decommit unused system pages in > the page? Given that the mutator will anyway allocate new objects after the heap > compaction, we won't necessarily need to compact perfectly. > > Maybe a better idea is: > > 1) In the first path, just sweep the objects. But while sweeping, measure how > much each page is fragmented. > > 2) In the second path, compact only pages that are fragmented a lot. > > (In any case, my point is not that I want to make the compaction smarter but > that I want to simplify the compaction logic.) We do want to perform in-place compaction of these pages, which is what it accomplishes. Let me try to add some comments and elucidate the code some. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:627: return page; On 2016/12/05 11:27:47, haraken wrote: > > return new... Done. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1480: compact->movedObject(header->payload(), On 2016/12/05 11:27:47, haraken wrote: > > movedObject => relocate ? Alright.
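[Editorial note] To make the interior-fixup redirection discussed above easier to follow, here is a minimal, self-contained sketch of the idea. The types and maps below (MovableReference, fixups, interiorFixups) are simplified stand-ins modeled on the CL's HeapCompact.cpp, not the actual Blink code:

// Sketch only: toy stand-ins for the CL's fixup bookkeeping.
#include <unordered_map>

using Address = char*;
using MovableReference = void*;

struct MovableObjectFixups {
  // Backing store start address -> the slot that references it.
  std::unordered_map<void*, MovableReference*> fixups;
  // Slot address -> where the slot itself now lives, recorded when the
  // backing store *containing* the slot was compacted first.
  std::unordered_map<MovableReference*, Address> interiorFixups;

  void relocate(Address from, Address to) {
    auto it = fixups.find(from);
    if (it == fixups.end())
      return;
    MovableReference* slot = it->second;

    // The backing store holding |slot| may already have been compacted; if
    // so, follow the recorded redirection to the slot's new location.
    auto interior = interiorFixups.find(slot);
    if (interior != interiorFixups.end() && interior->second)
      slot = reinterpret_cast<MovableReference*>(interior->second);

    // Weak processing or an explicit clear() may have nulled or retargeted
    // the slot since marking; only rewrite it when it still points at |from|.
    if (*slot == from)
      *slot = to;
  }
};

Either ordering (slot-holding backing store first, or pointed-to backing store first) is handled: the redirection map is consulted on every relocate() call, which is the point being made in the replies above.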
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: chromeos_daisy_chromium_compile_only_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromeos_daisy_...)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapPage.cpp:506: normalPage->sweepAndCompact(nextPage, &m_firstPage, allocationPoint); On 2016/12/05 19:30:07, sof wrote: > On 2016/12/05 11:27:47, haraken wrote: > > > > Honestly speaking, it's very hard to understand what sweepAndCompact is > doing... > > For example, why do we need the |nextPage|'s chain? I guess |allocationPoint| > is > > enough to keep track of where we're allocating. If we want to allocate from > the > > next page, we just need to find the next page in the |m_firstPage|'s chain > > (i.e., pageFromAddress(allocationPoint)->nextPage())? > > > > If the performance permits, we might want to consider doing the heap > compaction > > in two paths. In the first path, we just sweep the pages using existing code. > In > > the second path, we scan the chain of m_firstPage and compact the pages. > > > > The other thing we might want to consider is if we really need to compact > > objects over pages (which is making things a lot complicated). Maybe would it > be > > enough to compact objects inside one page and decommit unused system pages in > > the page? Given that the mutator will anyway allocate new objects after the > heap > > compaction, we won't necessarily need to compact perfectly. > > > > Maybe a better idea is: > > > > 1) In the first path, just sweep the objects. But while sweeping, measure how > > much each page is fragmented. > > > > 2) In the second path, compact only pages that are fragmented a lot. > > > > (In any case, my point is not that I want to make the compaction smarter but > > that I want to simplify the compaction logic.) > > We do want to perform in-place compaction of these pages, which is what it > accomplishes. Let me try to add some comments and elucidate the code some. Done; refreshed the code + added comments. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1721: heap().compaction()->checkIfCompacting(this, gcType, reason); On 2016/12/05 11:27:47, haraken wrote: > > I'd prefer reorganizing this as follows: > > bool enableHeapCompaction = heap().compaction()->shouldEnableHeapCompaction(); > // This doesn't have a side effect on heap().compaction(). > if (enableHeapCompaction) { > gcType = GCWithSweepCompaction; > heap().compaction()->initialize(); // Set up the compaction > } Divided it up into predicate + config parts.
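[Editorial note] For illustration, a self-contained toy of the predicate + configuration split adopted here: a side-effect-free query first, and explicit setup only once compaction has been chosen. The names and threshold values below are assumptions for the sketch, not the CL's actual signatures:

#include <cstddef>

enum class GCType { GCWithSweep, GCWithSweepCompaction };

class HeapCompactor {
 public:
  // Pure query: asking the question mutates no compactor state.
  bool shouldCompact(size_t freeListSize, size_t gcsSinceLastCompaction) const {
    constexpr size_t kFreeListSizeThreshold = 512 * 1024;  // assumed value
    constexpr size_t kGCCountThreshold = 10;                // assumed value
    return freeListSize > kFreeListSizeThreshold &&
           gcsSinceLastCompaction > kGCCountThreshold;
  }

  // Explicit setup, called only after the predicate returned true.
  void initialize() { m_doCompact = true; }

 private:
  bool m_doCompact = false;
};

GCType chooseGCType(HeapCompactor& compactor, size_t freeListSize, size_t gcCount) {
  if (compactor.shouldCompact(freeListSize, gcCount)) {
    compactor.initialize();
    return GCType::GCWithSweepCompaction;
  }
  return GCType::GCWithSweep;
}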
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
I hope the next round will be the final round of review. I might want to have a simple unittest for SparseHeapBitmap. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:74: void addFixupCallback(MovableReference reference, On 2016/12/05 19:30:06, sof wrote: > On 2016/12/05 11:27:47, haraken wrote: > > > > I think you're assuming that addFixupCallback() is coupled with add(). Then > can > > we add DCHECK(m_fixups.contains(reference))? > > > > Not readily as that map is over slots, this is a reference. Hmm, what do you mean? DCHECK(m_fixups.contains(reference)) must hold here, right? Otherwise, you will end up with registering callbacks without registering the reference to the fixup mapping. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:172: if (UNLIKELY(it != m_interiorFixups.end() && !it->value)) On 2016/12/05 19:30:06, sof wrote: > On 2016/12/05 11:27:46, haraken wrote: > > > > Would you help me understand this branch? I don't fully understand what > > '!it->value' means. > > nullptr is what we initially add (see line just below); if the ephemeron fix > point ends up running multiple times, it'll run the delayed marking & > registration, which is what will trigger this. Very rarely. So what we really want to do is: ... // Drop the DCHECK if (UNLIKELY(it != m_interiorFixups.end())) { DCHECK(!it->value); return; } ? https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:286: // around to be able to support this should the need arise.. On 2016/12/05 19:30:06, sof wrote: > On 2016/12/05 11:27:46, haraken wrote: > > > > You can say it's needed in Opera. Otherwise, someone might remove the code :) > > It's not, no one uses it. I'm fine with letting it go for now, but if > ScriptWrappableVisitor starts lazily marking backing stores also, a mechanism > like this will be needed. If it's unused, I'd prefer not landing the code until we need it. ScriptWrappableVisitor does not yet support HeapVector/HeapHashTable. ScriptWrappableVisitor just marks objects that inherit from TraceWrapperBase. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:468: for (int i = BlinkGC::Vector1ArenaIndex; i <= BlinkGC::HashTableArenaIndex; On 2016/12/05 19:30:06, sof wrote: > On 2016/12/05 11:27:46, haraken wrote: > > > > I'd prefer encapsulating the details of arenas in ThreadState.cpp. Instead of > > iterating this loop in HeapCompact, can we introduce > > ThreadState::getArenaStats(size_t* arenaSize, size_t* freeListSize)? > > > > Also I might want to move m_compactableArenas to ThreadState.h. > > > > This loop & sampling was moved from ThreadState in the last patchset, so this is > kind of frustrating feedback. I fail to see the substantive improvement in > design by shuffling it back somehow - the compactor should know (more) what it > takes for something to be compactable, so relying on ThreadState to communicate > all that info across instead, doesn't reduce coupling. > > > Remember that until we ship the per-thread heap, arenas are per-ThreadState > > things, not per-HeapCompact things (per-ThreadHeap things). 
> > Given that this is called per-ThreadState, there is not much chance for > confusion. > > ThreadState has already far too much state, it doesn't need more. In the first review, I just wanted to say that we should remove setXXX() methods and make the code more stateless. (But this CL is too huge for me to post consistent comments.) I still think this should be a method of ThreadState -- see my comment on the new patchset. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:472: size_t arenaSize = arena->arenaSize(); On 2016/12/05 19:30:06, sof wrote: > On 2016/12/05 11:27:46, haraken wrote: > > > > It looks like arenaSize is unused except debug printing. Can we remove it? > > ? Not so, it is added to totalArenaSize as well. But totalArenaSize is only for debug printing. You can clean up debug-only things before landing. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:61: if (!m_relocatablePages.contains(slotPage)) Add LIKELY (or a comment) for documentation purpose? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:63: #if ENABLE(ASSERT) Remove? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:113: if (it->value) Maybe this should not happen if we apply my comment at line 131? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:122: m_interiorFixups.set(slot, fixup); If we apply my comment at line 131, we should call: *slot = fixup; https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:126: void relocate(Address from, Address to) { Can we add an assert to check that |from| is in m_relocatablePages? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:131: if (interior != m_interiorFixups.end()) { On 2016/12/05 19:30:05, sof wrote: > On 2016/12/05 11:27:46, haraken wrote: > > > > Why do we need to update m_interiorFixups here? I was assuming that all > interior > > pointers should be handled by relocateInteriorFixups. In other words, is it > > possible to remove line 126 - 136? > > > > No, we don't know which is processed first -- the interior backing store or the > backing stores that the slots point to. This is a key step. Hmm, I still don't quite understand this. Can we reorganize code to guarantee the following facts? (a) relocate(from, to) is always called with |from| pointing to the outer-most backing store. sweepAndCompact() does not call relocate(from, to) for interior backing stores. (b) If (a) is guaranteed, the branch at line 131 should not hit. (c) It's a responsibility of relocateInteriorFixups to fix up pointers to interior backing stores. In other words, we need to call '*slot = fixup' at line 122. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:148: if (UNLIKELY(*slot != from)) { Just in case, can we add the following assert? 
DCHECK(!*slot || slotIsNotPointingToAddressInRelocatablePages()); ? If clear() or weak processing runs, DCHECK(!*slot) should hold. If rehashing happens, *slot must be pointing to an address in m_firstPage. Otherwise, something wrong should be going on. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:163: if (LIKELY(!m_interiors)) I'm just curious but is this really LIKELY? I guess it would be common that we have at least one heap collection that contains another heap collection. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:171: void addInteriorFixup(Address interior, MovableReference* slot) { Move this function up to just below add(). https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:171: void addInteriorFixup(Address interior, MovableReference* slot) { Maybe can we drop the |interior| parameter? |interior| should be equal to |slot|. We can make m_interiors a hash set of MovableReference*'s. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:368: updateHeapResidency(state); I still think that updateHeapResidency should be a method of ThreadState; i.e., ThreadState::updateHeapResidency. Then we should call something like: size_t totalFreeListSize = 0; for (ThreadState* state : heap.threads()) { // Iterate all threads totalFreeListSize += state->updateHeapResidency(); // Maybe should be renamed to state->getHeapFragmentationStats() } https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:389: // is required. Both in terms of frequency and freelist residency. Remove the comment. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:500: // can be released back to the OS. This comment looks awesome! https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. Hmm. What's an issue of just doing normalPage->link(&availablePages)? IIUC, the only point is that when we exhaust the allocation point on the current available page, we need to pick up *any* available page. Even if we do normalPage->link(&availablePages), we can pick up a new available page from availablePages. If that is the case, we won't need the chain of availablePages; we can just call normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage when we need a new available page. When we finish the compaction, we just need to scan the chain of m_firstPage and remove pages where page->isEmpty() returns true. What do you think? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapPage.cpp:550: if (availablePages && allocationPoint != availablePages->payloadSize()) { If we rewrite this branch as follows: if (allocationPoint) { DCHECK(availablePages); if (allocationPoint != availablePages->payloadSize()) { ...; } BasePage* nextPage; availablePages->unlink(&nextPage); availablePages = static_cast<NormalPage*>(nextPage); } can we remove line 533 - 546 and line 561? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:555: defined(MEMORY_SANITIZER) Should we use SET_MEMORY_INACCESSIBLE? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1420: size_t NormalPage::sweepAndCompact(NormalPage*& arena, arena => avalilablePages https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1437: ASAN_UNPOISON_MEMORY_REGION(headerAddress, size); SET_MEMORY_INACCESSIBLE + CHECK_MEMORY_INACCESSIBLE Maybe you want to copy things from NormalPage::sweep() on ToT. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1441: #if ENABLE(ASSERT) Remove? In any case, mimic ToT. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1460: #if ENABLE(ASSERT) || defined(LEAK_SANITIZER) || defined(ADDRESS_SANITIZER) || \ SET_MEMORY_INACCESSIBLE https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1467: DCHECK(header->isMarked()); Remove. This is checked in unmark(). https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1469: markedObjectSize += size; I'd move this to line 1509. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1475: BasePage* nextP; nextP => nextAvailablePage https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1481: defined(MEMORY_SANITIZER) This #if wouldn't be needed -- it's covered by SET_MEMORY_INACCESSIBLE. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1484: arena->arenaForNormalPage()->addToFreeList( 'arena->arenaForNormalPage()->' would not be needed. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1491: if (LIKELY(movedObject != headerAddress)) { Just to confirm: movedObject == headerAddress can happen only at the very beginning of the compaction phase (before hitting the first fragmented object), right? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1495: // Do that by unpoisoning the payload entirely. Would you help me understand why you need the unpoisoning? If it's really needed, we need to poison it again after moving the object. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapPage.cpp:1498: arena->arenaForNormalPage()->arenaIndex())) { At least, you can move the arenaIndex check outside the sweeping loop. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1517: if (arena != this) Shouldn't this be equal to 'if (allocationPoint == 0)'? If that is the case, this branch won't be needed. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1520: memset(payload() + allocationPoint, 0, payloadSize() - allocationPoint); Just in case, I'd prefer zapping the memory. And check that we're moving object to the zapped memory at line 1506.
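[Editorial note] As a side note on the zapping suggestion above, this is roughly the shape of a zap-and-verify helper pair; the 0x43 fill byte and the helper names are made up for this sketch and are not Blink's actual constants:

#include <cstddef>
#include <cstring>

constexpr unsigned char kZapByte = 0x43;

// Fill a freed range with a recognizable pattern so stale reads stand out.
inline void zapFreedRange(void* start, size_t size) {
  std::memset(start, kZapByte, size);
}

// Debug-only helper: does the range still carry the zap pattern?
inline bool rangeIsZapped(const void* start, size_t size) {
  const unsigned char* p = static_cast<const unsigned char*>(start);
  for (size_t i = 0; i < size; ++i) {
    if (p[i] != kZapByte)
      return false;
  }
  return true;
}

// Suggested use when moving an object into a range known to have been freed:
//   DCHECK(rangeIsZapped(to, size));
//   memmove(to, from, size);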
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
There's unit test coverage for SparseHeapBitmap already, see HeapCompactTest. I'll have to locally verify ASan status after the latest patchset updates. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:74: void addFixupCallback(MovableReference reference, On 2016/12/06 13:30:38, haraken wrote: > On 2016/12/05 19:30:06, sof wrote: > > On 2016/12/05 11:27:47, haraken wrote: > > > > > > I think you're assuming that addFixupCallback() is coupled with add(). Then > > can > > > we add DCHECK(m_fixups.contains(reference))? > > > > > > > Not readily as that map is over slots, this is a reference. > > Hmm, what do you mean? DCHECK(m_fixups.contains(reference)) must hold here, > right? Otherwise, you will end up with registering callbacks without registering > the reference to the fixup mapping. > Sorry, thought of wrong map. It doesn't hold in general, LinkedHashSet<>s with weak references will not have their fixups registered until they're delayedly marked. (IntersectionObserver usage is an example of this.) https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:172: if (UNLIKELY(it != m_interiorFixups.end() && !it->value)) On 2016/12/06 13:30:38, haraken wrote: > On 2016/12/05 19:30:06, sof wrote: > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > Would you help me understand this branch? I don't fully understand what > > > '!it->value' means. > > > > nullptr is what we initially add (see line just below); if the ephemeron fix > > point ends up running multiple times, it'll run the delayed marking & > > registration, which is what will trigger this. Very rarely. > > So what we really want to do is: > > ... // Drop the DCHECK > if (UNLIKELY(it != m_interiorFixups.end())) { > DCHECK(!it->value); > return; > } > > ? No, ephemeron fix pointing is quite valid. https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:286: // around to be able to support this should the need arise.. On 2016/12/06 13:30:38, haraken wrote: > On 2016/12/05 19:30:06, sof wrote: > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > You can say it's needed in Opera. Otherwise, someone might remove the code > :) > > > > It's not, no one uses it. I'm fine with letting it go for now, but if > > ScriptWrappableVisitor starts lazily marking backing stores also, a mechanism > > like this will be needed. > > If it's unused, I'd prefer not landing the code until we need it. > > ScriptWrappableVisitor does not yet support HeapVector/HeapHashTable. > ScriptWrappableVisitor just marks objects that inherit from TraceWrapperBase. ok, the normal procedure for Blink. Removed. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:61: if (!m_relocatablePages.contains(slotPage)) On 2016/12/06 13:30:39, haraken wrote: > > Add LIKELY (or a comment) for documentation purpose? Yes, LIKELY() is warranted. Added. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapCompact.cpp:63: #if ENABLE(ASSERT) On 2016/12/06 13:30:38, haraken wrote: > > Remove? The ENABLE(ASSERT) is needed to unbreak CrOS bots (e.g., ps#6.) https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:126: void relocate(Address from, Address to) { On 2016/12/06 13:30:39, haraken wrote: > > Can we add an assert to check that |from| is in m_relocatablePages? Done. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:131: if (interior != m_interiorFixups.end()) { On 2016/12/06 13:30:38, haraken wrote: > On 2016/12/05 19:30:05, sof wrote: > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > Why do we need to update m_interiorFixups here? I was assuming that all > > interior > > > pointers should be handled by relocateInteriorFixups. In other words, is it > > > possible to remove line 126 - 136? > > > > > > > No, we don't know which is processed first -- the interior backing store or > the > > backing stores that the slots point to. This is a key step. > > Hmm, I still don't quite understand this. Can we reorganize code to guarantee > the following facts? > > (a) relocate(from, to) is always called with |from| pointing to the outer-most > backing store. sweepAndCompact() does not call relocate(from, to) for interior > backing stores. > > (b) If (a) is guaranteed, the branch at line 131 should not hit. > > (c) It's a responsibility of relocateInteriorFixups to fix up pointers to > interior backing stores. In other words, we need to call '*slot = fixup' at line > 122. > We can't enforce an object graph heap ordering. i.e., we have no control if a backing store for an interior slot is compacted before or after the backing store the slot happens to reside in; both have to be catered for. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:148: if (UNLIKELY(*slot != from)) { On 2016/12/06 13:30:38, haraken wrote: > > Just in case, can we add the following assert? > > DCHECK(!*slot || slotIsNotPointingToAddressInRelocatablePages()); > > ? > > If clear() or weak processing runs, DCHECK(!*slot) should hold. If rehashing > happens, *slot must be pointing to an address in m_firstPage. Otherwise, > something wrong should be going on. > I think so -- a non-null *slot must be pointing to a page in an arena that's compactable, but not in m_relocatablePages. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:163: if (LIKELY(!m_interiors)) On 2016/12/06 13:30:38, haraken wrote: > > I'm just curious but is this really LIKELY? I guess it would be common that we > have at least one heap collection that contains another heap collection. > You're right, the (un)LIKELY() is used one level too far out here. Retired. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:171: void addInteriorFixup(Address interior, MovableReference* slot) { On 2016/12/06 13:30:38, haraken wrote: > > Maybe can we drop the |interior| parameter? |interior| should be equal to > |slot|. We can make m_interiors a hash set of MovableReference*'s. > Ah yes, quite redundant to pass that separately. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapCompact.cpp:368: updateHeapResidency(state); On 2016/12/06 13:30:39, haraken wrote: > > I still think that updateHeapResidency should be a method of ThreadState; i.e., > ThreadState::updateHeapResidency. > > Then we should call something like: > > size_t totalFreeListSize = 0; > for (ThreadState* state : heap.threads()) { // Iterate all threads > totalFreeListSize += state->updateHeapResidency(); // Maybe should be > renamed to state->getHeapFragmentationStats() > } I'm keeping it here; ThreadState shouldn't keep compaction details nor expose methods for it via its (considerable already) public interface. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:389: // is required. Both in terms of frequency and freelist residency. On 2016/12/06 13:30:38, haraken wrote: > > Remove the comment. Done. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. On 2016/12/06 13:30:39, haraken wrote: > > Hmm. What's an issue of just doing normalPage->link(&availablePages)? IIUC, the > only point is that when we exhaust the allocation point on the current available > page, we need to pick up *any* available page. Even if we do > normalPage->link(&availablePages), we can pick up a new available page from > availablePages. > > If that is the case, we won't need the chain of availablePages; we can just call > normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage when we need > a new available page. > > When we finish the compaction, we just need to scan the chain of m_firstPage and > remove pages where page->isEmpty() returns true. > > What do you think? Thanks for your input, but it seems very complex to add additional scanning/flitering as m_firstPage may at this stage already have allocations (from prefinalizers and whatnot). I'm keeping this scheme, it's clear and follows the normal style of chaining pages. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:550: if (availablePages && allocationPoint != availablePages->payloadSize()) { On 2016/12/06 13:30:39, haraken wrote: > > If we rewrite this branch as follows: > > if (allocationPoint) { > DCHECK(availablePages); > if (allocationPoint != availablePages->payloadSize()) { > ...; > } > BasePage* nextPage; > availablePages->unlink(&nextPage); > availablePages = static_cast<NormalPage*>(nextPage); > } > > can we remove line 533 - 546 and line 561? Thanks for the suggestion, but what's here already is perfectly clear, so keeping it. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:555: defined(MEMORY_SANITIZER) On 2016/12/06 13:30:39, haraken wrote: > > Should we use SET_MEMORY_INACCESSIBLE? Done. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1420: size_t NormalPage::sweepAndCompact(NormalPage*& arena, On 2016/12/06 13:30:39, haraken wrote: > > arena => avalilablePages Renamed. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapPage.cpp:1437: ASAN_UNPOISON_MEMORY_REGION(headerAddress, size); On 2016/12/06 13:30:39, haraken wrote: > > SET_MEMORY_INACCESSIBLE + CHECK_MEMORY_INACCESSIBLE > > Maybe you want to copy things from NormalPage::sweep() on ToT. No, please see the comment above why that isn't appropriate. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1441: #if ENABLE(ASSERT) On 2016/12/06 13:30:39, haraken wrote: > > Remove? > > In any case, mimic ToT. It currently is (but the other uses tend to still use ASSERT()) https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1460: #if ENABLE(ASSERT) || defined(LEAK_SANITIZER) || defined(ADDRESS_SANITIZER) || \ On 2016/12/06 13:30:39, haraken wrote: > > SET_MEMORY_INACCESSIBLE Not appropriate at this stage, see comment above (which 'git cl format' insists on mis-indenting.) https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1467: DCHECK(header->isMarked()); On 2016/12/06 13:30:39, haraken wrote: > > Remove. This is checked in unmark(). Done. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1469: markedObjectSize += size; On 2016/12/06 13:30:39, haraken wrote: > > I'd move this to line 1509. Moved. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1475: BasePage* nextP; On 2016/12/06 13:30:39, haraken wrote: > > nextP => nextAvailablePage alright.. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1481: defined(MEMORY_SANITIZER) On 2016/12/06 13:30:39, haraken wrote: > > This #if wouldn't be needed -- it's covered by SET_MEMORY_INACCESSIBLE. Done. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1484: arena->arenaForNormalPage()->addToFreeList( On 2016/12/06 13:30:39, haraken wrote: > > 'arena->arenaForNormalPage()->' would not be needed. It's preferable not to subtly build in the assumption that the pages here belong to the same arena. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1491: if (LIKELY(movedObject != headerAddress)) { On 2016/12/06 13:30:39, haraken wrote: > > Just to confirm: movedObject == headerAddress can happen only at the very > beginning of the compaction phase (before hitting the first fragmented object), > right? Yes, pretty much -- it could conceivably also happen on pages later than the first one, if the first allocation on those wouldn't fit the current |arena|. If the first page evolves to contain old & stable allocations, that is a possibility. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1495: // Do that by unpoisoning the payload entirely. On 2016/12/06 13:30:39, haraken wrote: > > Would you help me understand why you need the unpoisoning? > > If it's really needed, we need to poison it again after moving the object. The above comment explains this already. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapPage.cpp:1498: arena->arenaForNormalPage()->arenaIndex())) { On 2016/12/06 13:30:39, haraken wrote: > > At least, you can move the arenaIndex check outside the sweeping loop. Done; a bug to consult |arena| here, btw. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1517: if (arena != this) On 2016/12/06 13:30:39, haraken wrote: > > Shouldn't this be equal to 'if (allocationPoint == 0)'? If that is the case, > this branch won't be needed. If the compacted page overlaps with where the compaction pointer currently resides, we only clear the remainder. If it doesn't, we clear it all. The logic here is correct and the natural one, afaict. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1520: memset(payload() + allocationPoint, 0, payloadSize() - allocationPoint); On 2016/12/06 13:30:39, haraken wrote: > > Just in case, I'd prefer zapping the memory. And check that we're moving object > to the zapped memory at line 1506. > Now zapped/memset(), as appropriate.
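[Editorial note] Since much of the back-and-forth above concerns sweepAndCompact(), here is a heavily reduced, single-page sketch of the walk it performs: dead or freed entries are skipped (free-listed or zapped in the real code), live objects are unmarked and slid down to a running allocation point, and the compactor is told about each move. Everything below is a toy stand-in for HeapObjectHeader/NormalPage, and unlike the CL this version does not move objects onto earlier pages in the chain:

#include <cstddef>
#include <cstring>
#include <functional>

struct ToyHeader {
  size_t size;   // total size of the entry, header included
  bool free;     // already free-listed
  bool marked;   // survived marking
};

struct ToyPage {
  char* payload;
  size_t payloadSize;
};

// Compacts |page| in place and returns the new allocation point within it.
// |relocate| stands in for the compactor's fixup callback.
size_t sweepAndCompactPage(
    ToyPage& page,
    const std::function<void(char* from, char* to)>& relocate) {
  size_t allocationPoint = 0;
  size_t offset = 0;
  while (offset < page.payloadSize) {
    ToyHeader* header = reinterpret_cast<ToyHeader*>(page.payload + offset);
    size_t size = header->size;
    if (header->free || !header->marked) {
      // Dead or already freed: skip; the real code coalesces/zaps the gap.
      offset += size;
      continue;
    }
    header->marked = false;  // unmark the survivor
    char* from = page.payload + offset;
    char* to = page.payload + allocationPoint;
    if (to != from) {
      std::memmove(to, from, size);  // regions may overlap; memmove is safe
      relocate(from, to);            // fix up the slot(s) referencing |from|
    }
    allocationPoint += size;
    offset += size;
  }
  return allocationPoint;  // caller clears/zaps the tail past this point
}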
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:172: if (UNLIKELY(it != m_interiorFixups.end() && !it->value)) On 2016/12/06 21:39:34, sof wrote: > On 2016/12/06 13:30:38, haraken wrote: > > On 2016/12/05 19:30:06, sof wrote: > > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > > > Would you help me understand this branch? I don't fully understand what > > > > '!it->value' means. > > > > > > nullptr is what we initially add (see line just below); if the ephemeron fix > > > point ends up running multiple times, it'll run the delayed marking & > > > registration, which is what will trigger this. Very rarely. > > > > So what we really want to do is: > > > > ... // Drop the DCHECK > > if (UNLIKELY(it != m_interiorFixups.end())) { > > DCHECK(!it->value); > > return; > > } > > > > ? > > No, ephemeron fix pointing is quite valid. Ephemeron fix pointing is valid, but how is it possible that 'it != m_interiorFixups.end()' but it->value is not nullptr? I mean: ... // Drop the DCHECK <--- it's redundant. if (UNLIKELY(it != m_interiorFixups.end())) { <---- ephemeron may hit this branch. DCHECK(!it->value); <---- but then it->value should be nullptr return; } https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:63: #if ENABLE(ASSERT) On 2016/12/06 21:39:34, sof wrote: > On 2016/12/06 13:30:38, haraken wrote: > > > > Remove? > > The ENABLE(ASSERT) is needed to unbreak CrOS bots (e.g., ps#6.) Then can we use #if DCHECK_IS_ON()? I guess ENABLE(ASSERT) is deprecated. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:131: if (interior != m_interiorFixups.end()) { On 2016/12/06 21:39:34, sof wrote: > On 2016/12/06 13:30:38, haraken wrote: > > On 2016/12/05 19:30:05, sof wrote: > > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > > > Why do we need to update m_interiorFixups here? I was assuming that all > > > interior > > > > pointers should be handled by relocateInteriorFixups. In other words, is > it > > > > possible to remove line 126 - 136? > > > > > > > > > > No, we don't know which is processed first -- the interior backing store or > > the > > > backing stores that the slots point to. This is a key step. > > > > Hmm, I still don't quite understand this. Can we reorganize code to guarantee > > the following facts? > > > > (a) relocate(from, to) is always called with |from| pointing to the outer-most > > backing store. sweepAndCompact() does not call relocate(from, to) for interior > > backing stores. > > > > (b) If (a) is guaranteed, the branch at line 131 should not hit. > > > > (c) It's a responsibility of relocateInteriorFixups to fix up pointers to > > interior backing stores. In other words, we need to call '*slot = fixup' at > line > > 122. > > > > We can't enforce an object graph heap ordering. i.e., we have no control if a > backing store for an interior slot is compacted before or after the backing > store the slot happens to reside in; both have to be catered for. 
If a backing store A is contained in a backing store B, the following invariant holds: Page's begin address <= A's begin address <= B's begin address < B's end address <= A's end address <= Page's end address Given that you're sweeping and compacting from Page's begin address to Page's end address, isn't it guaranteed that A's begin address is found before B's begin address? In other words, isn't it guaranteed that the outer-most backing store is fixed up before any interior backing store is fixed up. (Even if it's not guaranteed for some reason, I don't quite understand why we don't need to call '*slot = fixup' at line 122.) https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. On 2016/12/06 21:39:35, sof wrote: > On 2016/12/06 13:30:39, haraken wrote: > > > > Hmm. What's an issue of just doing normalPage->link(&availablePages)? IIUC, > the > > only point is that when we exhaust the allocation point on the current > available > > page, we need to pick up *any* available page. Even if we do > > normalPage->link(&availablePages), we can pick up a new available page from > > availablePages. > > > > If that is the case, we won't need the chain of availablePages; we can just > call > > normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage when we > need > > a new available page. > > > > When we finish the compaction, we just need to scan the chain of m_firstPage > and > > remove pages where page->isEmpty() returns true. > > > > What do you think? > > Thanks for your input, but it seems very complex to add additional > scanning/flitering as m_firstPage may at this stage already have allocations > (from prefinalizers and whatnot). > > I'm keeping this scheme, it's clear and follows the normal style of chaining > pages. Okay. But even if we want to use the availablePage's chain, why can't we call normalPage->link(&availablePages)? i.e., why we need the complexity of |secondPage|? https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:550: if (availablePages && allocationPoint != availablePages->payloadSize()) { On 2016/12/06 21:39:35, sof wrote: > On 2016/12/06 13:30:39, haraken wrote: > > > > If we rewrite this branch as follows: > > > > if (allocationPoint) { > > DCHECK(availablePages); > > if (allocationPoint != availablePages->payloadSize()) { > > ...; > > } > > BasePage* nextPage; > > availablePages->unlink(&nextPage); > > availablePages = static_cast<NormalPage*>(nextPage); > > } > > > > can we remove line 533 - 546 and line 561? > > Thanks for the suggestion, but what's here already is perfectly clear, so > keeping it. It took me a while to understand why line 533 - 546 is needed. It seems worth to me removing the block.
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/80001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:172: if (UNLIKELY(it != m_interiorFixups.end() && !it->value)) On 2016/12/07 08:55:11, haraken wrote: > On 2016/12/06 21:39:34, sof wrote: > > On 2016/12/06 13:30:38, haraken wrote: > > > On 2016/12/05 19:30:06, sof wrote: > > > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > > > > > Would you help me understand this branch? I don't fully understand what > > > > > '!it->value' means. > > > > > > > > nullptr is what we initially add (see line just below); if the ephemeron > fix > > > > point ends up running multiple times, it'll run the delayed marking & > > > > registration, which is what will trigger this. Very rarely. > > > > > > So what we really want to do is: > > > > > > ... // Drop the DCHECK > > > if (UNLIKELY(it != m_interiorFixups.end())) { > > > DCHECK(!it->value); > > > return; > > > } > > > > > > ? > > > > No, ephemeron fix pointing is quite valid. > > Ephemeron fix pointing is valid, but how is it possible that 'it != > m_interiorFixups.end()' but it->value is not nullptr? > > I mean: > > ... // Drop the DCHECK <--- it's redundant. > if (UNLIKELY(it != m_interiorFixups.end())) { <---- ephemeron may hit this > branch. > DCHECK(!it->value); <---- but then it->value should be nullptr > return; > } Tuned the assert logic. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:63: #if ENABLE(ASSERT) On 2016/12/07 08:55:11, haraken wrote: > On 2016/12/06 21:39:34, sof wrote: > > On 2016/12/06 13:30:38, haraken wrote: > > > > > > Remove? > > > > The ENABLE(ASSERT) is needed to unbreak CrOS bots (e.g., ps#6.) > > Then can we use #if DCHECK_IS_ON()? I guess ENABLE(ASSERT) is deprecated. That would be redundant; the situation is a relative mess. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:131: if (interior != m_interiorFixups.end()) { On 2016/12/07 08:55:11, haraken wrote: > On 2016/12/06 21:39:34, sof wrote: > > On 2016/12/06 13:30:38, haraken wrote: > > > On 2016/12/05 19:30:05, sof wrote: > > > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > > > > > Why do we need to update m_interiorFixups here? I was assuming that all > > > > interior > > > > > pointers should be handled by relocateInteriorFixups. In other words, is > > it > > > > > possible to remove line 126 - 136? > > > > > > > > > > > > > No, we don't know which is processed first -- the interior backing store > or > > > the > > > > backing stores that the slots point to. This is a key step. > > > > > > Hmm, I still don't quite understand this. Can we reorganize code to > guarantee > > > the following facts? > > > > > > (a) relocate(from, to) is always called with |from| pointing to the > outer-most > > > backing store. sweepAndCompact() does not call relocate(from, to) for > interior > > > backing stores. > > > > > > (b) If (a) is guaranteed, the branch at line 131 should not hit. > > > > > > (c) It's a responsibility of relocateInteriorFixups to fix up pointers to > > > interior backing stores. 
In other words, we need to call '*slot = fixup' at > > line > > > 122. > > > > > > > We can't enforce an object graph heap ordering. i.e., we have no control if a > > backing store for an interior slot is compacted before or after the backing > > store the slot happens to reside in; both have to be catered for. > > If a backing store A is contained in a backing store B, the following invariant > holds: > > Page's begin address <= A's begin address <= B's begin address < B's end > address <= A's end address <= Page's end address > > Given that you're sweeping and compacting from Page's begin address to Page's > end address, isn't it guaranteed that A's begin address is found before B's > begin address? In other words, isn't it guaranteed that the outer-most backing > store is fixed up before any interior backing store is fixed up. > > (Even if it's not guaranteed for some reason, I don't quite understand why we > don't need to call '*slot = fixup' at line 122.) I don't think your assumptions hold, we're compacting backing store objects not slots, and those we have no control over where they live in the heap. You also need to consider that backing stores can live across arenas, in general. https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. On 2016/12/07 08:55:11, haraken wrote: > On 2016/12/06 21:39:35, sof wrote: > > On 2016/12/06 13:30:39, haraken wrote: > > > > > > Hmm. What's an issue of just doing normalPage->link(&availablePages)? IIUC, > > the > > > only point is that when we exhaust the allocation point on the current > > available > > > page, we need to pick up *any* available page. Even if we do > > > normalPage->link(&availablePages), we can pick up a new available page from > > > availablePages. > > > > > > If that is the case, we won't need the chain of availablePages; we can just > > call > > > normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage when we > > need > > > a new available page. > > > > > > When we finish the compaction, we just need to scan the chain of m_firstPage > > and > > > remove pages where page->isEmpty() returns true. > > > > > > What do you think? > > > > Thanks for your input, but it seems very complex to add additional > > scanning/flitering as m_firstPage may at this stage already have allocations > > (from prefinalizers and whatnot). > > > > I'm keeping this scheme, it's clear and follows the normal style of chaining > > pages. > > Okay. But even if we want to use the availablePage's chain, why can't we call > normalPage->link(&availablePages)? i.e., why we need the complexity of > |secondPage|? As the comment above explains, we're allocating from the head of |availablePages|, so we have to inject it past that.
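[Editorial note] To make the injection point concrete, a toy version of the constraint just described: the head of |availablePages| still has an open allocation point, so a newly swept page is chained in behind the head rather than in front of it. The Page type and helper below are simplified assumptions, not the CL's BasePage/link():

struct Page {
  Page* next = nullptr;
};

// Keep allocating from the current head; splice |page| in right behind it.
void linkAfterHead(Page*& availablePages, Page* page) {
  if (!availablePages) {
    availablePages = page;
    return;
  }
  page->next = availablePages->next;
  availablePages->next = page;
}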
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
Description was changed from ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while browsing anything but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. The heap pages that are then left as unused, are subsequently released and returned to the OS. Due to a fortunate property of Blink heap collection types, providing such compaction is within relatively easy reach. Experiments show that the arenas which hold such collection objects ("backing stores") are the ones that develop fragmentation the most & persistently. While not a complete heap compactor of all Blink GC arenas, it addresses the fragmentation problem where it is most pressing. More can be done, later. Explainer / design document: https://docs.google.com/document/d/1k-vivOinomDXnScw8Ew5zpsYCXiYqj76OCOYZSvHkaU R= BUG= ========== to ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while browsing anything but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. The heap pages that are then left as unused, are subsequently released and returned to the OS. Due to a fortunate property of Blink heap collection types, providing such compaction is within relatively easy reach. Experiments show that the arenas which hold such collection objects ("backing stores") are the ones that develop fragmentation the most & persistently. While not a complete heap compactor of all Blink GC arenas, it addresses the fragmentation problem where it is most pressing. More can be done, later. Explainer / design document: https://docs.google.com/document/d/1k-vivOinomDXnScw8Ew5zpsYCXiYqj76OCOYZSvHkaU R= BUG=672030 ==========
(I might be busy tomorrow -- will do the final scanning on Friday.) https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:131: if (interior != m_interiorFixups.end()) { On 2016/12/07 10:45:08, sof wrote: > On 2016/12/07 08:55:11, haraken wrote: > > On 2016/12/06 21:39:34, sof wrote: > > > On 2016/12/06 13:30:38, haraken wrote: > > > > On 2016/12/05 19:30:05, sof wrote: > > > > > On 2016/12/05 11:27:46, haraken wrote: > > > > > > > > > > > > Why do we need to update m_interiorFixups here? I was assuming that > all > > > > > interior > > > > > > pointers should be handled by relocateInteriorFixups. In other words, > is > > > it > > > > > > possible to remove line 126 - 136? > > > > > > > > > > > > > > > > No, we don't know which is processed first -- the interior backing store > > or > > > > the > > > > > backing stores that the slots point to. This is a key step. > > > > > > > > Hmm, I still don't quite understand this. Can we reorganize code to > > guarantee > > > > the following facts? > > > > > > > > (a) relocate(from, to) is always called with |from| pointing to the > > outer-most > > > > backing store. sweepAndCompact() does not call relocate(from, to) for > > interior > > > > backing stores. > > > > > > > > (b) If (a) is guaranteed, the branch at line 131 should not hit. > > > > > > > > (c) It's a responsibility of relocateInteriorFixups to fix up pointers to > > > > interior backing stores. In other words, we need to call '*slot = fixup' > at > > > line > > > > 122. > > > > > > > > > > We can't enforce an object graph heap ordering. i.e., we have no control if > a > > > backing store for an interior slot is compacted before or after the backing > > > store the slot happens to reside in; both have to be catered for. > > > > If a backing store A is contained in a backing store B, the following > invariant > > holds: > > > > Page's begin address <= A's begin address <= B's begin address < B's end > > address <= A's end address <= Page's end address > > > > Given that you're sweeping and compacting from Page's begin address to Page's > > end address, isn't it guaranteed that A's begin address is found before B's > > begin address? In other words, isn't it guaranteed that the outer-most backing > > store is fixed up before any interior backing store is fixed up. > > > > (Even if it's not guaranteed for some reason, I don't quite understand why we > > don't need to call '*slot = fixup' at line 122.) > > I don't think your assumptions hold, we're compacting backing store objects not > slots, and those we have no control over where they live in the heap. You also > need to consider that backing stores can live across arenas, in general. Ah, thanks, now I understand! https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. On 2016/12/07 10:45:08, sof wrote: > On 2016/12/07 08:55:11, haraken wrote: > > On 2016/12/06 21:39:35, sof wrote: > > > On 2016/12/06 13:30:39, haraken wrote: > > > > > > > > Hmm. What's an issue of just doing normalPage->link(&availablePages)? 
> IIUC, > > > the > > > > only point is that when we exhaust the allocation point on the current > > > available > > > > page, we need to pick up *any* available page. Even if we do > > > > normalPage->link(&availablePages), we can pick up a new available page > from > > > > availablePages. > > > > > > > > If that is the case, we won't need the chain of availablePages; we can > just > > > call > > > > normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage when we > > > need > > > > a new available page. > > > > > > > > When we finish the compaction, we just need to scan the chain of > m_firstPage > > > and > > > > remove pages where page->isEmpty() returns true. > > > > > > > > What do you think? > > > > > > Thanks for your input, but it seems very complex to add additional > > > scanning/flitering as m_firstPage may at this stage already have allocations > > > (from prefinalizers and whatnot). > > > > > > I'm keeping this scheme, it's clear and follows the normal style of chaining > > > pages. > > > > Okay. But even if we want to use the availablePage's chain, why can't we call > > normalPage->link(&availablePages)? i.e., why we need the complexity of > > |secondPage|? > > As the comment above explains, we're allocating from the head of > |availablePages|, so we have to inject it past that. Ah, ok, now I understand the logic. In addition to the below comment, I want to simplify the chaining logic a bit more. I think the complexity comes from the fact that you're using availablePages for two meanings: the head of the available page's chain and the page which you're allocating from. - availablePages is the head of the available page's chain - allocatingPage points to the page which you're allocating from. Then you won't need the secondPage magic.
https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. On 2016/12/07 12:44:33, haraken wrote: > On 2016/12/07 10:45:08, sof wrote: > > On 2016/12/07 08:55:11, haraken wrote: > > > On 2016/12/06 21:39:35, sof wrote: > > > > On 2016/12/06 13:30:39, haraken wrote: > > > > > > > > > > Hmm. What's an issue of just doing normalPage->link(&availablePages)? > > IIUC, > > > > the > > > > > only point is that when we exhaust the allocation point on the current > > > > available > > > > > page, we need to pick up *any* available page. Even if we do > > > > > normalPage->link(&availablePages), we can pick up a new available page > > from > > > > > availablePages. > > > > > > > > > > If that is the case, we won't need the chain of availablePages; we can > > just > > > > call > > > > > normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage when > we > > > > need > > > > > a new available page. > > > > > > > > > > When we finish the compaction, we just need to scan the chain of > > m_firstPage > > > > and > > > > > remove pages where page->isEmpty() returns true. > > > > > > > > > > What do you think? > > > > > > > > Thanks for your input, but it seems very complex to add additional > > > > scanning/flitering as m_firstPage may at this stage already have > allocations > > > > (from prefinalizers and whatnot). > > > > > > > > I'm keeping this scheme, it's clear and follows the normal style of > chaining > > > > pages. > > > > > > Okay. But even if we want to use the availablePage's chain, why can't we > call > > > normalPage->link(&availablePages)? i.e., why we need the complexity of > > > |secondPage|? > > > > As the comment above explains, we're allocating from the head of > > |availablePages|, so we have to inject it past that. > > Ah, ok, now I understand the logic. > > In addition to the below comment, I want to simplify the chaining logic a bit > more. I think the complexity comes from the fact that you're using > availablePages for two meanings: the head of the available page's chain and the > page which you're allocating from. > > - availablePages is the head of the available page's chain > - allocatingPage points to the page which you're allocating from. > > Then you won't need the secondPage magic. Stating the obvious perhaps, but notice that the head (i.e., |allocatingPage|) will clearly have to be updated & advanced to the next while compacting a page & we're out of space on the current one. Which is why the pages are chained like here, rather than kept as two separately updatable pointers. (I don't find that a magic/inscrutable representation, but I might well be partial.)
https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. On 2016/12/07 13:01:59, sof wrote: > On 2016/12/07 12:44:33, haraken wrote: > > On 2016/12/07 10:45:08, sof wrote: > > > On 2016/12/07 08:55:11, haraken wrote: > > > > On 2016/12/06 21:39:35, sof wrote: > > > > > On 2016/12/06 13:30:39, haraken wrote: > > > > > > > > > > > > Hmm. What's an issue of just doing normalPage->link(&availablePages)? > > > IIUC, > > > > > the > > > > > > only point is that when we exhaust the allocation point on the current > > > > > available > > > > > > page, we need to pick up *any* available page. Even if we do > > > > > > normalPage->link(&availablePages), we can pick up a new available page > > > from > > > > > > availablePages. > > > > > > > > > > > > If that is the case, we won't need the chain of availablePages; we can > > > just > > > > > call > > > > > > normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage > when > > we > > > > > need > > > > > > a new available page. > > > > > > > > > > > > When we finish the compaction, we just need to scan the chain of > > > m_firstPage > > > > > and > > > > > > remove pages where page->isEmpty() returns true. > > > > > > > > > > > > What do you think? > > > > > > > > > > Thanks for your input, but it seems very complex to add additional > > > > > scanning/flitering as m_firstPage may at this stage already have > > allocations > > > > > (from prefinalizers and whatnot). > > > > > > > > > > I'm keeping this scheme, it's clear and follows the normal style of > > chaining > > > > > pages. > > > > > > > > Okay. But even if we want to use the availablePage's chain, why can't we > > call > > > > normalPage->link(&availablePages)? i.e., why we need the complexity of > > > > |secondPage|? > > > > > > As the comment above explains, we're allocating from the head of > > > |availablePages|, so we have to inject it past that. > > > > Ah, ok, now I understand the logic. > > > > In addition to the below comment, I want to simplify the chaining logic a bit > > more. I think the complexity comes from the fact that you're using > > availablePages for two meanings: the head of the available page's chain and > the > > page which you're allocating from. > > > > - availablePages is the head of the available page's chain > > - allocatingPage points to the page which you're allocating from. > > > > Then you won't need the secondPage magic. > > Stating the obvious perhaps, but notice that the head (i.e., |allocatingPage|) > will clearly have to be updated & advanced to the next while compacting a page & > we're out of space on the current one. Which is why the pages are chained like > here, rather than kept as two separately updatable pointers. > > (I don't find that a magic/inscrutable representation, but I might well be > partial.) You're right, but if we keep |availablePages| and |allocatingPage|: - When we want to add a new available page, we just need to call page->link(&availablePages). - When we exhausted the current |allocatingPage|, we just need to call allocatingPage=availablePages; allocatingPage->unlink(&availablePages); which looks more straightforward and consistent with what we're doing for m_firstPage & m_firstUnsweptPage to me personally.
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/140001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:521: // into. On 2016/12/07 15:45:53, haraken wrote: > On 2016/12/07 13:01:59, sof wrote: > > On 2016/12/07 12:44:33, haraken wrote: > > > On 2016/12/07 10:45:08, sof wrote: > > > > On 2016/12/07 08:55:11, haraken wrote: > > > > > On 2016/12/06 21:39:35, sof wrote: > > > > > > On 2016/12/06 13:30:39, haraken wrote: > > > > > > > > > > > > > > Hmm. What's an issue of just doing > normalPage->link(&availablePages)? > > > > IIUC, > > > > > > the > > > > > > > only point is that when we exhaust the allocation point on the > current > > > > > > available > > > > > > > page, we need to pick up *any* available page. Even if we do > > > > > > > normalPage->link(&availablePages), we can pick up a new available > page > > > > from > > > > > > > availablePages. > > > > > > > > > > > > > > If that is the case, we won't need the chain of availablePages; we > can > > > > just > > > > > > call > > > > > > > normalPage->link(&m_firstPage) at line 514 and pick up m_firstPage > > when > > > we > > > > > > need > > > > > > > a new available page. > > > > > > > > > > > > > > When we finish the compaction, we just need to scan the chain of > > > > m_firstPage > > > > > > and > > > > > > > remove pages where page->isEmpty() returns true. > > > > > > > > > > > > > > What do you think? > > > > > > > > > > > > Thanks for your input, but it seems very complex to add additional > > > > > > scanning/flitering as m_firstPage may at this stage already have > > > allocations > > > > > > (from prefinalizers and whatnot). > > > > > > > > > > > > I'm keeping this scheme, it's clear and follows the normal style of > > > chaining > > > > > > pages. > > > > > > > > > > Okay. But even if we want to use the availablePage's chain, why can't we > > > call > > > > > normalPage->link(&availablePages)? i.e., why we need the complexity of > > > > > |secondPage|? > > > > > > > > As the comment above explains, we're allocating from the head of > > > > |availablePages|, so we have to inject it past that. > > > > > > Ah, ok, now I understand the logic. > > > > > > In addition to the below comment, I want to simplify the chaining logic a > bit > > > more. I think the complexity comes from the fact that you're using > > > availablePages for two meanings: the head of the available page's chain and > > the > > > page which you're allocating from. > > > > > > - availablePages is the head of the available page's chain > > > - allocatingPage points to the page which you're allocating from. > > > > > > Then you won't need the secondPage magic. > > > > Stating the obvious perhaps, but notice that the head (i.e., |allocatingPage|) > > will clearly have to be updated & advanced to the next while compacting a page > & > > we're out of space on the current one. Which is why the pages are chained like > > here, rather than kept as two separately updatable pointers. > > > > (I don't find that a magic/inscrutable representation, but I might well be > > partial.) > > You're right, but if we keep |availablePages| and |allocatingPage|: > > - When we want to add a new available page, we just need to call > page->link(&availablePages). 
> - When we exhausted the current |allocatingPage|, we just need to call > allocatingPage=availablePages; allocatingPage->unlink(&availablePages); > > which looks more straightforward and consistent with what we're doing for > m_firstPage & m_firstUnsweptPage to me personally. Given the state threading that needs to happen, it makes for something a bit too verbose, which is why I preferred this. I've spelt it out and split up the state now, though.
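For reference, a minimal self-contained sketch of the two-pointer bookkeeping discussed in this thread; the Page struct, the link()/unlink() helpers, and the field names below are illustrative stand-ins rather than the actual BasePage/HeapPage.h declarations:

  struct Page {
    Page* next = nullptr;
    // Push |this| onto the singly-linked chain anchored at |*anchor|.
    void link(Page** anchor) {
      next = *anchor;
      *anchor = this;
    }
    // Pop |this| off the chain; assumes |this| is the current head.
    void unlink(Page** anchor) {
      *anchor = next;
      next = nullptr;
    }
  };

  struct CompactionState {
    Page* availablePages = nullptr;  // Chain of pages compaction may reuse.
    Page* allocatingPage = nullptr;  // Page currently being compacted into.
  };

  // Register a page that compaction may later allocate into.
  void addAvailablePage(CompactionState& state, Page* page) {
    page->link(&state.availablePages);
  }

  // Advance to the next available page once |allocatingPage| is exhausted.
  void advanceAllocatingPage(CompactionState& state) {
    state.allocatingPage = state.availablePages;
    if (state.allocatingPage)
      state.allocatingPage->unlink(&state.availablePages);
  }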
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: android_n5x_swarming_rel on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/android_n5x_...)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_android_rel_ng on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/linux_androi...)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: mac_chromium_rel_ng on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...)
LGTM! https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h:65: const Visitor::MarkingMode m_markingMode; On 2016/12/04 14:55:38, sof wrote: > On 2016/12/02 12:43:20, haraken wrote: > > > > Would you help me understand why you need to add this member whereas Visitor > > already has m_markingMode? > > InlinedGlobalMarkingVisitor isn't a Visitor, and the marking mode is needed to > correctly implement MarkingVisitorImpl::registerMoving*(). Maybe can we move m_markingMode to VisitorHelper? Then we won't need to duplicate m_markingMode in two classes. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/Heap.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/Heap.cpp:423: DCHECK(slot && *slot); DCHECK(slot); DCHECK(*slot); It would help to identify which dcheck failed. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:39: if (!HeapCompact::isCompactableArena(page->arena()->arenaIndex())) Maybe do you want to mean isCompact"ing"Arena? Also this check should not be needed because you're already checking isCompactingArena in BaseArena::makeConsistentForGC(), right? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:49: if (refPage->isLargeObjectPage()) Maybe we also want to check if the refPage is in a compact"ing" arena. We can check m_relocatablePages.contains(refPage) but need to be careful about the performance impact. This method should be performance sensitive. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:60: blinkPageAddress(slotAddress) + blinkGuardPageSize); It might be better to add a helper function to HeapPage.h. This is used in a couple of places. Address HeapPage::pagePayloadAddress(Address address); https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:66: // Unlikely case, the slot resides on a compactable arena's page. compact"ing" https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:132: DCHECK(m_relocatablePages.contains(fromPage)); Add DCHECK(m_relocatablePages.contains(toPage)); https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:140: m_interiorFixups.set(slot, to); I'll stop asking a question about this, but I still think line 140 is not needed. This slot => to mapping will never be used in the future. Note that this case happens only when the slot is in a not-yet compacted page (including somewhere outside oilpan). If we remove this, we can add DCHECK(!it->value) at line 113. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapCompact.cpp:164: BasePage* refPage = reinterpret_cast<BasePage*>( refPage => slotPage https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:169: m_relocatablePages.contains(refPage))); Shouldn't this be: DCHECK(!*slot || !m_relocatablePages.contains(refPage)); ? m_relocatablePages should NOT contain the slot page, I think. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:190: void addInteriorFixup(MovableReference* slot) { Move this method up to just below add(). https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:241: }; Would you add an assert to ScriptWrappableVisitor to check that ScriptWrappableVisitor::m_headersToUnmark is not pointing to compactable arenas? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:263: m_fixups = MovableObjectFixups::create(); Nit: It wouldn't make much sense to create the fixup object lazily. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:283: reason != BlinkGC::ForcedGC) Do we need this check? I guess the following BlinkGC::HeapPointersOnStack check is sufficient. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:334: return BlinkGC::GCWithSweepCompaction; It doesn't really make sense to return GCWithSweepCompaction from this method though. The caller side can just use GCWithSweepCompaction. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:387: if (!arenaSize) Actually this is not doing a meaningful check... arenaSize is just the size of all system pages in the arena. It wouldn't be zero in almost all cases. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:414: void HeapCompact::startThreadCompaction(ThreadState*) { Drop ThreadState*. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:419: m_startCompactionTimeMS = WTF::currentTimeMS(); Slightly simpler: MutexLocker locker; if (!m_startCompactionTimeMS) m_startCompactionTimeMS = currentTimeMS(); We can remove m_startCompaction. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:423: void HeapCompact::finishedThreadCompaction(ThreadState*) { Drop ThreadState*. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:91: void finishedThreadCompaction(ThreadState*); finishThreadCompaction (c.f., startThreadCompaction) https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:94: // the given arena. The number of pages that were freed together with the Remove the first sentence. 
https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:98: size_t freedSize); Nit: If we just want to track of freedPages and freedSize, it might be simpler to introduce increaseFreedPages(size_t) and increaseFreedSize(size_t). I don't have any strong opinion. Or you can just compare values of Heap::allocatedSpace before and after the compaction. Then you won't need to track m_freedPages and m_freedSize. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:102: // for a GC. // All pages in compactable arenas must be added. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:105: // Notify heap compaction that object at |from| has been relocated to.. |to|. relocated to |to|. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:122: // is on order (checkIfCompacting()). checkIfCompacting => shouldCompact https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:155: size_t m_freeListSize; I'm not sure if this needs to be a member variable though. It's used only in shouldCompact(). https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:160: unsigned m_compactableArenas; m_compact"able"Arenas => m_compact"ing"Arenas (c.f., isCompact"able"Arena returns true for all backing store arenas.) Also see my comment in MovableObjectFixup::addCompactablePage. I guess there's some confusion around it. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:177: // Logging macros activated by debug switches. Clean them up before landing. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:478: heap.compaction()->finishedArenaCompaction(this, 0, 0); This looks unnecessary. We can just immediately return. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:534: // If the current page hasn't been allocated into (empty heap?), add It should be empty heap, right? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:538: context.m_currentPage->link(&context.m_availablePages); Add DCHECK(context.m_currentPage->isEmpty()). Or can't we easily add the dcheck because we're not clearing up the available page? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:552: // Release available page back to the OS. pages https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:555: size_t pageSize = availablePages->size(); Can we add DCHECK(availablePages->isEmpty())? Maybe we can't easily. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/HeapPage.cpp:570: heap.compaction()->finishedArenaCompaction(this, freedPageCount, freedSize); freedPageCount looks redundant. It can be calculated by freedSize / blinkPageSize. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1438: DCHECK(header->checkHeader()); This is checked in the below isMarked(). https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1450: ASAN_UNPOISON_MEMORY_REGION(headerAddress, size); This could be ASAN_UNPOISON_MEMORY_REGION(payload, payloadSize). c.f., See what NormalPage::sweep is doing. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1525: memset(unusedStart, 0, unusedSize); This would not be needed (though I don't know if this matters for performance). https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.h:311: FreeListEntry** prevNext() { return &m_next; } This is unused. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.h:520: class CompactionContext { Thanks, this is much more readable :) https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.h:796: void allocateAndAddPage(); What's the point of splitting the method into the two? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapTest.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapTest.cpp:3774: ThreadState::GCForbiddenScope gcScope(ThreadState::current()); Would you help me understand why these GCForbiddenScopes are needed? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); How about simply calling registerBackingStoreReference when HashTable calls registerDelayedMarkNoTracing? We need to delay calling markNoTracing but won't need to delay calling registerBackingStoreReference. Then we won't need this hack. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/SparseHeapBitmap.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.cpp:50: return bitmap->size() == 1; Would you help me understand when this can happen? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.cpp:88: Address oldBase = swapBase(address); Why is this left expansion safe? What happens if there is some address set in the range that will be lost by expanding the range to left? 
If the left expansion is uncommon, would there any option to just not support it for simplicity? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/SparseHeapBitmap.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.h:40: class PLATFORM_EXPORT SparseHeapBitmap { Maybe can we replace RegionTree (in PageMemory.h) with SparseHeapBitmap in a follow up CL? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.h:83: SparseHeapBitmap(Address base) : m_base(base), m_size(1) { Add explicit. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:508: GCForbiddenScope gcForbiddenScope(this); I'm curious why you need this. Isn't SweepForbiddenScope enough for preventing GCs from getting triggered during the weak processing? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1050: ScriptForbiddenIfMainThreadScope scriptForbiddenScope; Also shall we add NoAllocationScope just in case? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1155: compact(); Add a comment and mention why this must be put after pre-finalization and eager sweeping. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1709: GCForbiddenScope gcForbiddenScope(this); I'm just curious but why did you add this? (I'm fine with adding it.) https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1722: visitorType = heap().compaction()->initialize(this); I'd prefer updating |gcType| instead of using |visitorType|, since the rest of this function uses |gcType|. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/ThreadState.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.h:331: GCForbiddenScope(ThreadState* threadState) : m_threadState(threadState) { Add explicit. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/Visitor.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/Visitor.h:309: GlobalMarkCompacting, GlobalMarkingWithCompaction ? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/wtf/LinkedHashSet.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/wtf/LinkedHashSet.h:359: if (node.m_next >= fromStart && node.m_next <= fromEnd) { node.m_next < fromEnd ? The same comment for other comparisons below.
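To make the half-open range in that last LinkedHashSet comment concrete, a small illustrative sketch; the helper name and the commented rebasing step are hypothetical, not the actual LinkedHashSet.h code. The point is that internal node pointers lie in [fromStart, fromEnd), so the upper bound must be exclusive:

  // Returns true if |ptr| points into the old backing store [fromStart, fromEnd).
  inline bool pointsIntoOldBacking(const char* ptr,
                                   const char* fromStart,
                                   const char* fromEnd) {
    return ptr >= fromStart && ptr < fromEnd;
  }

  // Hypothetical use when fixing up a node's internal pointer after its
  // backing store moved from |fromStart| to |toStart|:
  //   if (pointsIntoOldBacking(nodeNext, fromStart, fromEnd))
  //     nodeNext = toStart + (nodeNext - fromStart);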
(I think this is the most complicated CL I've reviewed this year :-)
On 2016/12/09 07:26:46, haraken wrote: > (I think this is the most complicated CL I've reviewed this year :-) As always, thanks very much for the detailed review & for getting to grips with the substance (or lack of it, at times). :) I'll follow up on the latest round of comments, and hopefully it will be ready to go as a result.
If you want me to run a Finch experiment (enabling the feature for a certain set of users), I can do that. However, I'm fine with just enabling the flag and seeing how it goes. Then you just need to send an Intent-to-ship (or just ping that Intent-to-implement thread).
On 2016/12/09 07:39:13, haraken wrote: > If you want me to run a Finch experiment (enabling the feature for a certain set of > users), I can do that. However, I'm fine with just enabling the flag and seeing how > it goes. Then you just need to send an Intent-to-ship (or just ping that > Intent-to-implement thread). Thanks, let's return to that in a couple of weeks? It needs to be exposed to all bots & targets first, and be given time to settle down. It's natural to expect a bump or two, given the scope of the change. (The design has proven itself, though, by having shipped with Opera for a while, so I'm relatively confident about the state of it all.)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: cast_shell_android on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/cast_shell_a...)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_chromium_x64_rel_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_x64_...)
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
That's about it, I hope. https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... File third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h (right): https://codereview.chromium.org/2531973002/diff/40001/third_party/WebKit/Sour... third_party/WebKit/Source/platform/heap/InlinedGlobalMarkingVisitor.h:65: const Visitor::MarkingMode m_markingMode; On 2016/12/09 07:25:54, haraken wrote: > On 2016/12/04 14:55:38, sof wrote: > > On 2016/12/02 12:43:20, haraken wrote: > > > > > > Would you help me understand why you need to add this member whereas Visitor > > > already has m_markingMode? > > > > InlinedGlobalMarkingVisitor isn't a Visitor, and the marking mode is needed to > > correctly implement MarkingVisitorImpl::registerMoving*(). > > Maybe can we move m_markingMode to VisitorHelper? Then we won't need to > duplicate m_markingMode in two classes. > Possibly, but it's a more complex than that given that we also need to accommodate MarkingVisitor<Mode> const-Mode visitors. I've added a TODO. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/Heap.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/Heap.cpp:423: DCHECK(slot && *slot); On 2016/12/09 07:25:54, haraken wrote: > > DCHECK(slot); > DCHECK(*slot); > > It would help to identify which dcheck failed. Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:39: if (!HeapCompact::isCompactableArena(page->arena()->arenaIndex())) On 2016/12/09 07:25:55, haraken wrote: > > Maybe do you want to mean isCompact"ing"Arena? > > Also this check should not be needed because you're already checking > isCompactingArena in BaseArena::makeConsistentForGC(), right? I've converted the assumption into a DCHECK() (not here, but in HeapCompact::addCompactablePage(), as we need access to the "compacting arena" state.) Also renamed the method to addCompactingPage() to be consistent with terminology. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:49: if (refPage->isLargeObjectPage()) On 2016/12/09 07:25:55, haraken wrote: > > Maybe we also want to check if the refPage is in a compact"ing" arena. > > We can check m_relocatablePages.contains(refPage) but need to be careful about > the performance impact. This method should be performance sensitive. I think that's too fine grained a check, i.e., we don't need sanity checks right here to verify that a compacting arena hasn't properly registered all of its pages. I've added a DCHECK() to verify that |refPage| is "compactable" though, which approximates. Also added a TODO about when it would be appropriate to add an early-bailout check wrt isCompactingArena(). https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:60: blinkPageAddress(slotAddress) + blinkGuardPageSize); On 2016/12/09 07:25:55, haraken wrote: > > It might be better to add a helper function to HeapPage.h. This is used in a > couple of places. 
> > Address HeapPage::pagePayloadAddress(Address address); pageFromObject() does just this already, but this particular Address -> BasePage translation actually hides a special case which I'd forgotten about, hence it needs to stay just like this. I've added a comment which explains; sorry for failing to document earlier. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:66: // Unlikely case, the slot resides on a compactable arena's page. On 2016/12/09 07:25:55, haraken wrote: > > compact"ing" Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:132: DCHECK(m_relocatablePages.contains(fromPage)); On 2016/12/09 07:25:55, haraken wrote: > > Add DCHECK(m_relocatablePages.contains(toPage)); Hmm, that's a bit unnatural a check - why would we care/constrain it so? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:140: m_interiorFixups.set(slot, to); On 2016/12/09 07:25:55, haraken wrote: > > I'll stop asking a question about this, but I still think line 140 is not > needed. This slot => to mapping will never be used in the future. Note that this > case happens only when the slot is in a not-yet compacted page (including > somewhere outside oilpan). > > If we remove this, we can add DCHECK(!it->value) at line 113. We're here if an interior slot's backing store is relocated before the backing store holding the interior slot is. In that case, we update m_interiorFixups for the benefit of relocateInteriorFixups(). It must consult m_interiorFixups regardless, so it is tidier to have marked the slot has having been relocated already by writing down a value here. To my mind. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:164: BasePage* refPage = reinterpret_cast<BasePage*>( On 2016/12/09 07:25:55, haraken wrote: > > refPage => slotPage Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:169: m_relocatablePages.contains(refPage))); On 2016/12/09 07:25:54, haraken wrote: > > Shouldn't this be: > > DCHECK(!*slot || !m_relocatablePages.contains(refPage)); > > ? > m_relocatablePages should NOT contain the slot page, I think. Indeed, thanks - it should check what the comment says it should :) https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:190: void addInteriorFixup(MovableReference* slot) { On 2016/12/09 07:25:55, haraken wrote: > > Move this method up to just below add(). Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:241: }; On 2016/12/09 07:25:55, haraken wrote: > > Would you add an assert to ScriptWrappableVisitor to check that > ScriptWrappableVisitor::m_headersToUnmark is not pointing to compactable arenas? Done, but should I be concerned about perf fallout? https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:283: reason != BlinkGC::ForcedGC) On 2016/12/09 07:25:55, haraken wrote: > > Do we need this check? I guess the following BlinkGC::HeapPointersOnStack check > is sufficient. 
I think it is fine to have it here; if a page navigation GC comes along (or when precise GCs are outlawed), we want to return false right away. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:334: return BlinkGC::GCWithSweepCompaction; On 2016/12/09 07:25:55, haraken wrote: > > It doesn't really make sense to return GCWithSweepCompaction from this method > though. The caller side can just use GCWithSweepCompaction. The caller shouldn't have to know such details, I think. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:387: if (!arenaSize) On 2016/12/09 07:25:55, haraken wrote: > > Actually this is not doing a meaningful check... arenaSize is just the size of > all system pages in the arena. It wouldn't be zero in almost all cases. > Some of the vector arenas do run into this, when operating in stress test mode esp. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:414: void HeapCompact::startThreadCompaction(ThreadState*) { On 2016/12/09 07:25:54, haraken wrote: > > Drop ThreadState*. Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:419: m_startCompactionTimeMS = WTF::currentTimeMS(); On 2016/12/09 07:25:55, haraken wrote: > > Slightly simpler: > > MutexLocker locker; > if (!m_startCompactionTimeMS) > m_startCompactionTimeMS = currentTimeMS(); > > We can remove m_startCompaction. Done. I thought I'd sneak in the first user of atomicTestAndSetToOne() :) https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.cpp:423: void HeapCompact::finishedThreadCompaction(ThreadState*) { On 2016/12/09 07:25:55, haraken wrote: > > Drop ThreadState*. Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:478: heap.compaction()->finishedArenaCompaction(this, 0, 0); On 2016/12/09 07:25:55, haraken wrote: > > This looks unnecessary. We can just immediately return. > Keeping it, no need to cut corners for this case & leave out the "finished arena" notification. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:534: // If the current page hasn't been allocated into (empty heap?), add On 2016/12/09 07:25:56, haraken wrote: > > It should be empty heap, right? Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:552: // Release available page back to the OS. On 2016/12/09 07:25:55, haraken wrote: > > pages Done, made clear that they're handed back to the free page pool. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:555: size_t pageSize = availablePages->size(); On 2016/12/09 07:25:56, haraken wrote: > > Can we add DCHECK(availablePages->isEmpty())? Maybe we can't easily. An "available page" won't be marked as freed here if everything is compacted out of it. 
https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:570: heap.compaction()->finishedArenaCompaction(this, freedPageCount, freedSize); On 2016/12/09 07:25:56, haraken wrote: > > freedPageCount looks redundant. It can be calculated by freedSize / > blinkPageSize. Yes, assuming equal sized pages and you round down, but it seems like a less-than-important thing to optimize. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1438: DCHECK(header->checkHeader()); On 2016/12/09 07:25:56, haraken wrote: > > This is checked in the below isMarked(). Gone https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1450: ASAN_UNPOISON_MEMORY_REGION(headerAddress, size); On 2016/12/09 07:25:56, haraken wrote: > > This could be ASAN_UNPOISON_MEMORY_REGION(payload, payloadSize). > > c.f., See what NormalPage::sweep is doing. No, we need to unpoison the whole allocation as it will be compacted into afterwards, so we do it all in one go. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:1525: memset(unusedStart, 0, unusedSize); On 2016/12/09 07:25:55, haraken wrote: > > This would not be needed (though I don't know if this matters for performance). I'm just being very careful while it is "semi free" & available; i'm fine with not clearing these available pages. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.h:311: FreeListEntry** prevNext() { return &m_next; } On 2016/12/09 07:25:56, haraken wrote: > > This is unused. Thanks, removed the leftover. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.h:796: void allocateAndAddPage(); On 2016/12/09 07:25:56, haraken wrote: > > What's the point of splitting the method into the two? Thanks for catching that, that's a remnant of support for non-inplace compaction, which isn't used/supported now. Restored back to previous. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapTest.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapTest.cpp:3774: ThreadState::GCForbiddenScope gcScope(ThreadState::current()); On 2016/12/09 07:25:56, haraken wrote: > > Would you help me understand why these GCForbiddenScopes are needed? As long as there's a DCHECK() in the Visitor constructor that GC is forbidden during its lifetime, they by definition are needed here. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/MarkingVisitorImpl.h:143: visitor->registerBackingStoreReference(reference); On 2016/12/09 07:25:56, haraken wrote: > > How about simply calling registerBackingStoreReference when HashTable calls > registerDelayedMarkNoTracing? 
We need to delay calling markNoTracing but won't > need to delay calling registerBackingStoreReference. Then we won't need this > hack. It would reduce the footprint of this change, but separate marking from registration of backing stores. I think they better belong together. I've uncoupled them now though, so as to address this. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/SparseHeapBitmap.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.cpp:50: return bitmap->size() == 1; On 2016/12/09 07:25:56, haraken wrote: > > Would you help me understand when this can happen? This case will happen if |this| represents a single address bitmap and |address| is not > bitmap->end()), and it will always hold. It might be clearer if we DCHECK() the size() constraint instead and return |true|; rephrased it. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.cpp:88: Address oldBase = swapBase(address); On 2016/12/09 07:25:56, haraken wrote: > > Why is this left expansion safe? What happens if there is some address set in > the range that will be lost by expanding the range to left? > > If the left expansion is uncommon, would there any option to just not support it > for simplicity? It is safe because |this| is a single address bitmap (size=1), so your assumed case won't apply here. This just helps avoid left-chained single bitmap buildup, should the insertion pattern be such that we do end up doing these left extensions. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/SparseHeapBitmap.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.h:40: class PLATFORM_EXPORT SparseHeapBitmap { On 2016/12/09 07:25:56, haraken wrote: > > Maybe can we replace RegionTree (in PageMemory.h) with SparseHeapBitmap in a > follow up CL? Both handles memory regions, but in different ways that I don't see the ready overlap. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/SparseHeapBitmap.h:83: SparseHeapBitmap(Address base) : m_base(base), m_size(1) { On 2016/12/09 07:25:56, haraken wrote: > > Add explicit. Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:508: GCForbiddenScope gcForbiddenScope(this); On 2016/12/09 07:25:56, haraken wrote: > > I'm curious why you need this. Isn't SweepForbiddenScope enough for preventing > GCs from getting triggered during the weak processing? > See the Visitor constructor, rather than having it enter a GC forbidden scope during its lifetime, we instead DCHECK() that we are when creating. Hence all Visitor instances must be created while in such a scope - it seems best not to try to overturn that setup here, and perhaps be more discriminating. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/heap/ThreadState.cpp:1050: ScriptForbiddenIfMainThreadScope scriptForbiddenScope; On 2016/12/09 07:25:56, haraken wrote: > > Also shall we add NoAllocationScope just in case? Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1155: compact(); On 2016/12/09 07:25:56, haraken wrote: > > Add a comment and mention why this must be put after pre-finalization and eager > sweeping. Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1709: GCForbiddenScope gcForbiddenScope(this); On 2016/12/09 07:25:56, haraken wrote: > > I'm just curious but why did you add this? (I'm fine with adding it.) See the Visitor constructor comment, we had to introduce a GC forbidden lock this early on to avoid some thread interference in the past. (I don't have the bug references at hand). This just preserves that; making changes in this area is high risk. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.cpp:1722: visitorType = heap().compaction()->initialize(this); On 2016/12/09 07:25:56, haraken wrote: > > I'd prefer updating |gcType| instead of using |visitorType|, since the rest of > this function uses |gcType|. Notice that we do lose the with/without sweeping distinction for |gcType| when creating the visitor via |visitorType|. We need to preserve that for later |gcType| uses, which is why it is done this way. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/ThreadState.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/ThreadState.h:331: GCForbiddenScope(ThreadState* threadState) : m_threadState(threadState) { On 2016/12/09 07:25:56, haraken wrote: > > Add explicit. Done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/Visitor.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/Visitor.h:309: GlobalMarkCompacting, On 2016/12/09 07:25:56, haraken wrote: > > GlobalMarkingWithCompaction ? Twiddles, done. https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... File third_party/WebKit/Source/wtf/LinkedHashSet.h (right): https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou... third_party/WebKit/Source/wtf/LinkedHashSet.h:359: if (node.m_next >= fromStart && node.m_next <= fromEnd) { On 2016/12/09 07:25:56, haraken wrote: > > node.m_next < fromEnd ? > > The same comment for other comparisons below. Yes, the range of these internal pointers is [fromStart, fromEnd), not [fromStart, fromEnd]. Good catch.
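As a footnote to the startThreadCompaction() timing point above, a self-contained sketch of once-only timestamp recording under a lock; std::mutex and std::chrono stand in here for WTF's MutexLocker and currentTimeMS, and all names are illustrative:

  #include <chrono>
  #include <mutex>

  static std::mutex gCompactionTimeMutex;
  static double gStartCompactionTimeMS = 0;

  // Called by each thread as it starts compacting; only the first caller
  // records the start time, later callers see a non-zero value and skip it.
  void recordCompactionStartTime() {
    std::lock_guard<std::mutex> locker(gCompactionTimeMutex);
    if (!gStartCompactionTimeMS) {
      gStartCompactionTimeMS =
          std::chrono::duration<double, std::milli>(
              std::chrono::steady_clock::now().time_since_epoch())
              .count();
    }
  }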
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2531973002/diff/400001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapCompact.h (right): https://codereview.chromium.org/2531973002/diff/400001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapCompact.h:1: // Copyright 2016 Opera Software AS. All rights reserved. Updated to use "The Chromium Authors".
Description was changed from ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while browsing anything but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. The heap pages that are then left as unused, are subsequently released and returned to the OS. Due to a fortunate property of Blink heap collection types, providing such compaction is within relatively easy reach. Experiments show that the arenas which hold such collection objects ("backing stores") are the ones that develop fragmentation the most & persistently. While not a complete heap compactor of all Blink GC arenas, it addresses the fragmentation problem where it is most pressing. More can be done, later. Explainer / design document: https://docs.google.com/document/d/1k-vivOinomDXnScw8Ew5zpsYCXiYqj76OCOYZSvHkaU R= BUG=672030 ========== to ========== Simple BlinkGC heap compaction. This implements heap compaction for the Blink GC infrastructure (Oilpan), compacting the arenas of the BlinkGC heap which are most susceptible to becoming fragmented during actual use. Fragmentation is a real problem and a growing one while browsing anything but static pages: the amount of unused, but allocated, memory is fluctuating higher over time. To avoid leaving increasing amounts of unused holes in our heaps, heap compaction will periodically squeeze out the unused portions, packing together the live objects. The heap pages that are then left as unused, are subsequently released and returned to the OS. Due to a fortunate property of Blink heap collection types, providing such compaction is within relatively easy reach. Experiments show that the arenas which hold such collection objects ("backing stores") are the ones that develop fragmentation the most & persistently. While not a complete heap compactor of all Blink GC arenas, it addresses the fragmentation problem where it is most pressing. More can be done, later. Explainer / design document: https://docs.google.com/document/d/1k-vivOinomDXnScw8Ew5zpsYCXiYqj76OCOYZSvHkaU R=haraken BUG=672030 ==========
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_android_rel_ng on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/linux_androi...)
The CQ bit was checked by sigbjornf@opera.com
The patchset sent to the CQ was uploaded after l-g-t-m from haraken@chromium.org Link to the patchset: https://codereview.chromium.org/2531973002/#ps420001 (title: "adjust copyright attributions")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_android_rel_ng on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/linux_androi...)
Still LGTM

https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou...
File third_party/WebKit/Source/platform/heap/HeapTest.cpp (right):

https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou...
third_party/WebKit/Source/platform/heap/HeapTest.cpp:3774: ThreadState::GCForbiddenScope gcScope(ThreadState::current());
On 2016/12/09 21:44:04, sof wrote:
> On 2016/12/09 07:25:56, haraken wrote:
> >
> > Would you help me understand why these GCForbiddenScopes are needed?
>
> As long as there's a DCHECK() in the Visitor constructor that GC is forbidden
> during its lifetime, they by definition are needed here.

Makes sense. I'm wondering if we can/should unify GCForbiddenScope and
SweepForbiddenScope somehow. It's getting less clear when we should use which
one, or both.

https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou...
File third_party/WebKit/Source/platform/heap/ThreadState.cpp (right):

https://codereview.chromium.org/2531973002/diff/280001/third_party/WebKit/Sou...
third_party/WebKit/Source/platform/heap/ThreadState.cpp:1722: visitorType = heap().compaction()->initialize(this);
On 2016/12/09 21:44:05, sof wrote:
> On 2016/12/09 07:25:56, haraken wrote:
> >
> > I'd prefer updating |gcType| instead of using |visitorType|, since the rest of
> > this function uses |gcType|.
>
> Notice that we do lose the with/without sweeping distinction for |gcType| when
> creating the visitor via |visitorType|. We need to preserve that for later
> |gcType| uses, which is why it is done this way.

Ah, got it. Would it be tidier to use a boolean flag (or enum) to specify if the
visitor should do the compaction or not?

Visitor::create(this, gcType, ShouldCompactHeap);
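A rough, self-contained sketch of that suggestion follows. All type and function names here are invented for illustration (they are not the actual Blink Visitor API); the point is only that |gcType| keeps its with/without-sweeping meaning while a separate argument selects the compacting visitor. GlobalMarkingWithCompaction mirrors the visitor type renamed earlier in the review, and ShouldCompactHeap mirrors the flag name suggested above.

enum class GCType { GCWithSweep, GCWithoutSweep };
enum class CompactionMode { DontCompact, ShouldCompactHeap };
enum class MarkingMode { GlobalMarking, GlobalMarkingWithCompaction };

// The sweeping distinction stays encoded in |gcType| for later use; only the
// visitor's marking mode reflects whether this GC will also compact arenas.
inline MarkingMode markingModeFor(GCType /*gcType*/, CompactionMode compaction) {
  return compaction == CompactionMode::ShouldCompactHeap
             ? MarkingMode::GlobalMarkingWithCompaction
             : MarkingMode::GlobalMarking;
}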
The CQ bit was checked by sigbjornf@opera.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 420001, "attempt_start_ts": 1481523663007500, "parent_rev": "2bb90a2972f4aa22d0bee4bbfe47393ebf913a5c", "commit_rev": "b2c7e3d95c423ace3900f114beae9567cc530b6f"}
Message was sent while issue was closed.
Description was changed: added "Review-Url: https://codereview.chromium.org/2531973002" (the rest of the CL description is unchanged).
Message was sent while issue was closed.
Committed patchset #22 (id:420001)
Message was sent while issue was closed.
A revert of this CL (patchset #22 id:420001) has been created in https://codereview.chromium.org/2570483002/ by yhirano@chromium.org. The reason for reverting is: Speculative revert for a layout test breakage. https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.11/build....
Message was sent while issue was closed.
On 2016/12/12 08:29:26, yhirano wrote: > A revert of this CL (patchset #22 id:420001) has been created in > https://codereview.chromium.org/2570483002/ by mailto:yhirano@chromium.org. > > The reason for reverting is: Speculative revert for a layout test breakage. > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.11/build.... I suspect this for causing the benchmarks.benchmark_smoke_unittest.BenchmarkSmokeTest.blob_storage.blob_storage failures on Mac builders as well.
Message was sent while issue was closed.
Description was changed (no difference is visible between the recorded before/after text).
On 2016/12/12 08:35:56, grt (UTC plus 1) wrote: > On 2016/12/12 08:29:26, yhirano wrote: > > A revert of this CL (patchset #22 id:420001) has been created in > > https://codereview.chromium.org/2570483002/ by mailto:yhirano@chromium.org. > > > > The reason for reverting is: Speculative revert for a layout test breakage. > > > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.11/build.... > > I suspect this for causing the > benchmarks.benchmark_smoke_unittest.BenchmarkSmokeTest.blob_storage.blob_storage > failures on Mac builders as well. https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.9/builds... confirms that it was behind the layout test failures. There's some difference in setup between mac_chromium_rel_ng and these mac release bots, it seems.
On 2016/12/12 09:18:25, sof wrote: > On 2016/12/12 08:35:56, grt (UTC plus 1) wrote: > > On 2016/12/12 08:29:26, yhirano wrote: > > > A revert of this CL (patchset #22 id:420001) has been created in > > > https://codereview.chromium.org/2570483002/ by mailto:yhirano@chromium.org. > > > > > > The reason for reverting is: Speculative revert for a layout test breakage. > > > > > > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.11/build.... > > > > I suspect this for causing the > > > benchmarks.benchmark_smoke_unittest.BenchmarkSmokeTest.blob_storage.blob_storage > > failures on Mac builders as well. > > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.9/builds... > confirms that it was behind the layout test failures. > > There's some difference in setup between mac_chromium_rel_ng and these mac > release bots, it seems. Trying to figure out a way to make progress with this one; Mac release & only those bots, which are unreachable for tryjobs. non-informative crash stacks.
Message was sent while issue was closed.
Description was changed: the Review-Url line was replaced with "Committed: https://crrev.com/c55a2886f98771ade1d73b99d21464c517b5e11e" and "Cr-Commit-Position: refs/heads/master@{#437829}" (the rest of the CL description is unchanged).
Message was sent while issue was closed.
Patchset 22 (id:??) landed as https://crrev.com/c55a2886f98771ade1d73b99d21464c517b5e11e Cr-Commit-Position: refs/heads/master@{#437829}
Message was sent while issue was closed.
On 2016/12/12 10:51:04, sof wrote: > On 2016/12/12 09:18:25, sof wrote: > > On 2016/12/12 08:35:56, grt (UTC plus 1) wrote: > > > On 2016/12/12 08:29:26, yhirano wrote: > > > > A revert of this CL (patchset #22 id:420001) has been created in > > > > https://codereview.chromium.org/2570483002/ by > mailto:yhirano@chromium.org. > > > > > > > > The reason for reverting is: Speculative revert for a layout test > breakage. > > > > > > > > > > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.11/build.... > > > > > > I suspect this for causing the > > > > > > benchmarks.benchmark_smoke_unittest.BenchmarkSmokeTest.blob_storage.blob_storage > > > failures on Mac builders as well. > > > > > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.9/builds... > > confirms that it was behind the layout test failures. > > > > There's some difference in setup between mac_chromium_rel_ng and these mac > > release bots, it seems. > > Trying to figure out a way to make progress with this one; Mac release & only > those bots, which are unreachable for tryjobs. non-informative crash stacks. https://codereview.chromium.org/2566973002/patch/80001/90002 fixes the crashes; I don't understand why yet.
Message was sent while issue was closed.
Description was changed (no difference is visible between the recorded before/after text).
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
On 2016/12/12 15:31:29, sof wrote: > On 2016/12/12 10:51:04, sof wrote: > > On 2016/12/12 09:18:25, sof wrote: > > > On 2016/12/12 08:35:56, grt (UTC plus 1) wrote: > > > > On 2016/12/12 08:29:26, yhirano wrote: > > > > > A revert of this CL (patchset #22 id:420001) has been created in > > > > > https://codereview.chromium.org/2570483002/ by > > mailto:yhirano@chromium.org. > > > > > > > > > > The reason for reverting is: Speculative revert for a layout test > > breakage. > > > > > > > > > > > > > > > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.11/build.... > > > > > > > > I suspect this for causing the > > > > > > > > > > benchmarks.benchmark_smoke_unittest.BenchmarkSmokeTest.blob_storage.blob_storage > > > > failures on Mac builders as well. > > > > > > > > > https://build.chromium.org/p/chromium.webkit/builders/WebKit%20Mac10.9/builds... > > > confirms that it was behind the layout test failures. > > > > > > There's some difference in setup between mac_chromium_rel_ng and these mac > > > release bots, it seems. > > > > Trying to figure out a way to make progress with this one; Mac release & only > > those bots, which are unreachable for tryjobs. non-informative crash stacks. > > https://codereview.chromium.org/2566973002/patch/80001/90002 fixes the crashes; > I don't understand why yet.

Now I do. NormalPageArena::sweepAndCompact() must clear out the unused pages before it puts them onto the free page pool (and decommits them). Earlier patchsets did this on a per-page basis, but that's a little more eager than it needs to be, should the pages end up being reused & compacted into rather than released. OS X is lazier about releasing pages "decommitted" via madvise(..., MADV_FREE), which is why it showed up there. I.e., Linux will release the pages, and when the pages are re-committed, zeroed pages are mapped in again. Not so with OS X, where dirty free-page memory was recommitted, which broke code assuming that new allocations were zeroed. We make no such assumptions about decommits zeroing for us; our invariant is that freed pages must be zeroed out before being released, which the CL failed to do - the latest patchset addresses this.
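A minimal sketch of the invariant described above. The Page struct, releaseUnusedPage() and addToFreePagePoolAndDecommit() are made-up stand-ins (this is not the actual NormalPageArena::sweepAndCompact() code); what it illustrates is only the ordering: zero the unused page before it is handed to the free page pool and decommitted.

#include <cstddef>
#include <cstring>

struct Page {
  char* payload;
  std::size_t payloadSize;
};

// Hypothetical stand-in for handing the page to the arena's free page pool
// and decommitting it (e.g. via an madvise(MADV_FREE)/MADV_DONTNEED style call).
void addToFreePagePoolAndDecommit(Page*);

void releaseUnusedPage(Page* unusedPage) {
  // Zero before release: an MADV_FREE-style decommit may hand the same dirty
  // memory back on re-commit, so the "freed pages are zeroed" invariant has
  // to be enforced here rather than assumed from the OS.
  std::memset(unusedPage->payload, 0, unusedPage->payloadSize);
  addToFreePagePoolAndDecommit(unusedPage);
}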
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
Nice detective work! LGTM. https://codereview.chromium.org/2531973002/diff/440001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/440001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:572: memset(unusedPage->payload(), 0, unusedPage->payloadSize()); This should happen only on Release builds, right? It's fine (and expected) to zap the pages on Debug builds.
The CQ bit was checked by sigbjornf@opera.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
https://codereview.chromium.org/2531973002/diff/440001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): https://codereview.chromium.org/2531973002/diff/440001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/heap/HeapPage.cpp:572: memset(unusedPage->payload(), 0, unusedPage->payloadSize()); On 2016/12/13 02:29:41, haraken wrote: > > This should happen only on Release builds, right? It's fine (and expected) to > zap the pages on Debug builds. done, they're already in that state, so leave out the memset().
On 2016/12/13 06:18:31, sof wrote: > https://codereview.chromium.org/2531973002/diff/440001/third_party/WebKit/Sou... > File third_party/WebKit/Source/platform/heap/HeapPage.cpp (right): > > https://codereview.chromium.org/2531973002/diff/440001/third_party/WebKit/Sou... > third_party/WebKit/Source/platform/heap/HeapPage.cpp:572: > memset(unusedPage->payload(), 0, unusedPage->payloadSize()); > On 2016/12/13 02:29:41, haraken wrote: > > > > This should happen only on Release builds, right? It's fine (and expected) to > > zap the pages on Debug builds. > > done, they're already in that state, so leave out the memset(). LGTM
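Following that resolution, the body of the releaseUnusedPage() sketch above would roughly become the following. DCHECK_IS_ON() is assumed here as the debug/release switch; the actual HeapPage.cpp code may differ.

void releaseUnusedPage(Page* unusedPage) {
#if DCHECK_IS_ON()
  // Debug builds: the sweeper has already left the page contents zapped,
  // which helps catch use-after-free, so no explicit zeroing is done here.
#else
  std::memset(unusedPage->payload, 0, unusedPage->payloadSize);
#endif
  addToFreePagePoolAndDecommit(unusedPage);
}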
Description was changed: the "Committed: https://crrev.com/c55a2886f98771ade1d73b99d21464c517b5e11e" and "Cr-Commit-Position: refs/heads/master@{#437829}" lines were removed (the rest of the CL description is unchanged).
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
Relanding. Trybot results are fine, and Mac release (no dchecks) status was verified via https://codereview.chromium.org/2566973002/
The CQ bit was checked by sigbjornf@opera.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Failed to apply patch for third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in:
While running git apply --index -p1;
  error: patch failed: third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in:115
  error: third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in: patch does not apply

Patch: third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in
Index: third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in
diff --git a/third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in b/third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in
index d38e2f5d5ad7bb2039ca61c8ec706f841117c085..3268ffc7cb74af5a217e534a75c229cc136f9c0e 100644
--- a/third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in
+++ b/third_party/WebKit/Source/platform/RuntimeEnabledFeatures.in
@@ -115,6 +115,7 @@ GamepadExtensions origin_trial_feature_name=WebVR
 GeometryInterfaces status=experimental, implied_by=CompositorWorker
 GetUserMedia status=stable
 GlobalCacheStorage status=stable
+HeapCompaction status=experimental
 IDBObserver status=experimental
 ImageCapture status=experimental, origin_trial_feature_name=ImageCapture
 ImageOrientation status=test
The CQ bit was checked by sigbjornf@opera.com
The patchset sent to the CQ was uploaded after l-g-t-m from haraken@chromium.org Link to the patchset: https://codereview.chromium.org/2531973002/#ps480001 (title: "rebased upto r438110")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 480001, "attempt_start_ts": 1481621834445470, "parent_rev": "13465a4f38f81d8c176cc17414750bebb1411609", "commit_rev": "983ec3b41520ffbf32bd7666ce99f6cfeb2d13a2"}
Message was sent while issue was closed.
Description was changed: added "Review-Url: https://codereview.chromium.org/2531973002" (the rest of the CL description is unchanged).
Message was sent while issue was closed.
Committed patchset #25 (id:480001)
Message was sent while issue was closed.
Description was changed: the Review-Url line was replaced with "Committed: https://crrev.com/2c7d13f7694fa208d996947731eacd152902ebdd" and "Cr-Commit-Position: refs/heads/master@{#438125}" (the rest of the CL description is unchanged).
Message was sent while issue was closed.
Patchset 25 (id:??) landed as https://crrev.com/2c7d13f7694fa208d996947731eacd152902ebdd Cr-Commit-Position: refs/heads/master@{#438125}
Message was sent while issue was closed.
On 2016/12/13 11:36:21, commit-bot: I haz the power wrote: > Patchset 25 (id:??) landed as > https://crrev.com/2c7d13f7694fa208d996947731eacd152902ebdd > Cr-Commit-Position: refs/heads/master@{#438125} Note: This increased libchrome.so on Android by 180kb (not saying any action is required, but wanted to have this information attached to this review) Graph link: https://chromeperf.appspot.com/report?sid=473d6bf7347eaebc0a34c09b2f564f4d205...
Message was sent while issue was closed.
jbroman@ pointed out that the heap compaction will make code that caches an iterator unsafe:

http://pastebin.com/MDNXtW76

Would it be possible to add a runtime verification to detect a pointer that is directly pointing to backing stores other than from HeapVector/HeapHashTable etc?
Message was sent while issue was closed.
On 2016/12/14 04:29:05, haraken wrote: > jbroman@ pointed out that the heap compaction will make code that caches an > inteator unsafe: > > http://pastebin.com/MDNXtW76 > > Would it be possible to add a runtime verification to detect a pointer that is > directly pointing to backing stores other than from HeapVector/HeapHashTable > etc? Two ways spring to mind: - require that such on-heap iterators are traced and doesn't implicitly depend on something else keeping the backing store live; that would be consistent with how we handle heap references elsewhere. If they were, we could detect the condition and perform the fixups as needed. I wonder how real this code snippet is though. - in case of hash tables, have the hash table backing store compaction register a modification on the hash table. That would catch out potential trouble.. but only if compaction hits, which isn't ideal.
Message was sent while issue was closed.
On 2016/12/14 07:58:44, sof wrote: > On 2016/12/14 04:29:05, haraken wrote: > > jbroman@ pointed out that the heap compaction will make code that caches an > > inteator unsafe: > > > > http://pastebin.com/MDNXtW76 > > > > Would it be possible to add a runtime verification to detect a pointer that is > > directly pointing to backing stores other than from HeapVector/HeapHashTable > > etc? > > Two ways spring to mind: > > - require that such on-heap iterators are traced and doesn't implicitly depend > on something else keeping the backing store live; that would be consistent with > how we handle heap references elsewhere. If they were, we could detect the > condition and perform the fixups as needed. I wonder how real this code snippet > is though. This sounds reasonable. Also I don't think we need to support those cases -- we can just assert. Specifically: - Ask developers to trace the iterator. - If the backing store is traced in a way other than from HeapVector/HeapHashTable, assert it. (I'd prefer not increasing the code complexity for hypothetical use cases :-) > - in case of hash tables, have the hash table backing store compaction register > a modification on the hash table. That would catch out potential trouble.. but > only if compaction hits, which isn't ideal.
Message was sent while issue was closed.
FWIW, a holiday season would be a good timing to experiment with enabling this kind of risky change on ToT.
Message was sent while issue was closed.
On 2016/12/14 08:11:59, haraken wrote: > On 2016/12/14 07:58:44, sof wrote: > > On 2016/12/14 04:29:05, haraken wrote: > > > jbroman@ pointed out that the heap compaction will make code that caches an > > > inteator unsafe: > > > > > > http://pastebin.com/MDNXtW76 > > > > > > Would it be possible to add a runtime verification to detect a pointer that > is > > > directly pointing to backing stores other than from HeapVector/HeapHashTable > > > etc? > > > > Two ways spring to mind: > > > > - require that such on-heap iterators are traced and doesn't implicitly > depend > > on something else keeping the backing store live; that would be consistent > with > > how we handle heap references elsewhere. If they were, we could detect the > > condition and perform the fixups as needed. I wonder how real this code > snippet > > is though. > > This sounds reasonable. > > Also I don't think we need to support those cases -- we can just assert. > Specifically: > > - Ask developers to trace the iterator. > - If the backing store is traced in a way other than from > HeapVector/HeapHashTable, assert it. > > (I'd prefer not increasing the code complexity for hypothetical use cases :-) > > > - in case of hash tables, have the hash table backing store compaction > register > > a modification on the hash table. That would catch out potential trouble.. but > > only if compaction hits, which isn't ideal. For now, addressed this by disallowing most on-heap iterators at compile-time (https://codereview.chromium.org/2588943002/ ) Blink doesn't currently have any such problematic iterators, and it is unlikely some will be added before the next clang & GC plugin is rolled. Should any/some of these turn out to be must-haves, we can implement (heap compaction) support for them without too much trouble. (..but let's be lazy about seeking out trouble :) )
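To make the hazard concrete, here is a hypothetical sketch of the kind of on-heap iterator caching being discussed. It is not the pastebin snippet referenced above; the class names are invented and the include paths are only approximate for 2016-era Blink. This is the pattern that https://codereview.chromium.org/2588943002/ turns into a compile-time error.

#include "platform/heap/Handle.h"
#include "platform/heap/HeapAllocator.h"

class SomeObject : public GarbageCollected<SomeObject> {
 public:
  DEFINE_INLINE_TRACE() {}
};

class IteratorCacher : public GarbageCollected<IteratorCacher> {
 public:
  void cacheFirst() { m_cached = m_map.begin(); }
  DEFINE_INLINE_TRACE() { visitor->trace(m_map); }

 private:
  // |m_map| is traced, so its backing store is kept alive - but if heap
  // compaction moves that backing store, |m_cached| (a thin wrapper around a
  // pointer into it) is left dangling across the GC.
  HeapHashMap<int, Member<SomeObject>> m_map;
  HeapHashMap<int, Member<SomeObject>>::iterator m_cached;
};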
Message was sent while issue was closed.
On 2016/12/19 04:48:37, haraken wrote: > FWIW, a holiday season would be a good timing to experiment with enabling this > kind of risky change on ToT. That seems like the natural next step. afaik, bots and fuzzers haven't shown up stability issues that can be attributed to the feature. perf bots don't run with experimental features enabled (right..?), so that's an unknown.
Message was sent while issue was closed.
On 2016/12/20 13:15:23, sof wrote: > On 2016/12/19 04:48:37, haraken wrote: > > FWIW, a holiday season would be a good timing to experiment with enabling this > > kind of risky change on ToT. > > That seems like the natural next step. > > afaik, bots and fuzzers haven't shown up stability issues that can be attributed > to the feature. perf bots don't run with experimental features enabled > (right..?), so that's an unknown. Yeah. I think you just ping on the intent-to-implement thread and try to enable it.
Message was sent while issue was closed.
On 2016/12/21 05:17:46, haraken wrote: > On 2016/12/20 13:15:23, sof wrote: > > On 2016/12/19 04:48:37, haraken wrote: > > > FWIW, a holiday season would be a good timing to experiment with enabling > this > > > kind of risky change on ToT. > > > > That seems like the natural next step. > > > > afaik, bots and fuzzers haven't shown up stability issues that can be > attributed > > to the feature. perf bots don't run with experimental features enabled > > (right..?), so that's an unknown. > > Yeah. I think you just ping on the intent-to-implement thread and try to enable > it. Hmm, on balance, I feel it wouldn't be responsible to do this without being attentively around, should there be issues. So, will wait until the holiday period is over and try to enable the feature towards the end of next week.
Message was sent while issue was closed.
On 2016/12/23 10:02:37, sof wrote: > On 2016/12/21 05:17:46, haraken wrote: > > On 2016/12/20 13:15:23, sof wrote: > > > On 2016/12/19 04:48:37, haraken wrote: > > > > FWIW, a holiday season would be a good timing to experiment with enabling > > this > > > > kind of risky change on ToT. > > > > > > That seems like the natural next step. > > > > > > afaik, bots and fuzzers haven't shown up stability issues that can be > > attributed > > > to the feature. perf bots don't run with experimental features enabled > > > (right..?), so that's an unknown. > > > > Yeah. I think you just ping on the intent-to-implement thread and try to > enable > > it. > > Hmm, on balance, I feel it wouldn't be responsible to do this without being > attentively around, should there be issues. > > So, will wait until the holiday period is over and try to enable the feature > towards the end of next week.

Let's do this near or during the upcoming weekend, moving the feature to stable for a couple of Canary releases and then reverting & assessing status. I.e., I hear there's a dev release being rolled this week and I don't want it to include heap compaction, as it's late-ish in the release cycle, so let's wait for it to be readied first. I'll ping the intent thread to announce the plan, to give people a chance to comment & give feedback on the Grand Plan.
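For reference, the "move to stable" step would amount to flipping the RuntimeEnabledFeatures.in entry added by this CL (visible in the failed-patch output earlier), roughly:

-HeapCompaction status=experimental
+HeapCompaction status=stable

This is only a sketch of the intended flip, not the exact rollout change.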
Message was sent while issue was closed.
On 2016/12/13 18:46:39, agrieve wrote: > On 2016/12/13 11:36:21, commit-bot: I haz the power wrote: > > Patchset 25 (id:??) landed as > > https://crrev.com/2c7d13f7694fa208d996947731eacd152902ebdd > > Cr-Commit-Position: refs/heads/master@{#438125} > > Note: This increased libchrome.so on Android by 180kb (not saying any action is > required, but wanted to have this information attached to this review) > > Graph link: > https://chromeperf.appspot.com/report?sid=473d6bf7347eaebc0a34c09b2f564f4d205... FYI - going to investigate size bloat in https://bugs.chromium.org/p/chromium/issues/detail?id=681991 |