Chromium Code Reviews

Unified Diff: runtime/vm/scavenger.h

Issue 320463003: Make unused semispace available to other isolates. (Closed) Base URL: http://dart.googlecode.com/svn/branches/bleeding_edge/dart/
Patch Set: Created 6 years, 6 months ago
Index: runtime/vm/scavenger.h
===================================================================
--- runtime/vm/scavenger.h (revision 37032)
+++ runtime/vm/scavenger.h (working copy)
@@ -25,6 +25,35 @@
DECLARE_FLAG(bool, gc_at_alloc);
+
+// Wrapper around VirtualMemory that adds caching and handles the empty case.
+class SemiSpace {
+ public:
Ivan Posva 2014/06/05 22:27:29 Please use vertical space for easier reading.
koda 2014/06/05 23:04:23 Done.
+ static void InitOnce();
+ // Get a space of the given size. Returns NULL on out of memory. If size is 0,
+ // returns an empty space: pointer(), start() and end() all return NULL/0.
+ static SemiSpace* New(intptr_t size);
+ // Hand back an unused space. Safe to call on NULL.
Ivan Posva 2014/06/05 22:27:29 Why would we ever want to call Delete on NULL?
koda 2014/06/05 23:04:23 Allowing NULL keeps it consistent with the 'delete
koda 2014/06/05 23:08:29 Actually, the commit failure check in this very CL
+ void Delete();
+ void* pointer() const { return region_.pointer(); }
+ uword start() const { return region_.start(); }
+ uword end() const { return region_.end(); }
+ intptr_t size() const { return static_cast<intptr_t>(region_.size()); }
+ bool Contains(uword address) const { return region_.Contains(address); }
+ // Set write protection mode for this space. The space must not be protected
+ // when Delete is called.
+ // TODO(koda): Remember protection mode in VirtualMemory and assert this.
+ void WriteProtect(bool read_only);
Ivan Posva 2014/06/05 22:27:29 Do we ever write protect a new gen?
koda 2014/06/05 23:04:23 Heap::WriteProtect includes new gen, but it's curr
+ private:
+ explicit SemiSpace(VirtualMemory* reserved);
+ ~SemiSpace();
+ VirtualMemory* reserved_; // NULL for an empty space.
+ MemoryRegion region_;
+ static SemiSpace* cache_;
+ static Mutex* mutex_;
+};
+
+
class Scavenger {
public:
Scavenger(Heap* heap, intptr_t max_capacity_in_words, uword object_alignment);
@@ -38,7 +67,7 @@
// No reasonable algorithm should be checking for objects in from space. At
// least unless it is debugging code. This might need to be relaxed later,
// but currently it helps prevent dumb bugs.
- ASSERT(!from_->Contains(addr));
+ ASSERT(from_ == NULL || !from_->Contains(addr));
return to_->Contains(addr);
}
@@ -183,9 +212,8 @@
void ProcessWeakTables();
- VirtualMemory* space_;
- MemoryRegion* to_;
- MemoryRegion* from_;
+ SemiSpace* from_;
+ SemiSpace* to_;
Heap* heap_;
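
The core of this CL is that a `SemiSpace` released by one isolate can be handed straight to the next `New()` of a matching size, instead of being returned to the OS. The following is a minimal, hypothetical sketch of that caching pattern, not the VM's actual code: `malloc`/`free` stand in for `VirtualMemory`, the mutex guarding `cache_` is omitted for brevity, and `Delete` is written as a static helper taking a pointer so that passing NULL is well-defined (the CL's `Delete` is an instance method with the same NULL-tolerant contract).

```cpp
#include <cstdint>
#include <cstdlib>

class SemiSpace {
 public:
  // Returns NULL on out-of-memory. If size is 0, returns an empty space.
  static SemiSpace* New(intptr_t size) {
    // Reuse the cached space when the requested size matches.
    if (cache_ != NULL && cache_->size_ == size) {
      SemiSpace* result = cache_;
      cache_ = NULL;
      return result;
    }
    void* memory = NULL;
    if (size > 0) {
      memory = malloc(static_cast<size_t>(size));  // stand-in for VirtualMemory
      if (memory == NULL) return NULL;             // out of memory
    }
    return new SemiSpace(memory, size);
  }

  // Hand back an unused space; safe to call with NULL, mirroring 'delete'.
  static void Delete(SemiSpace* space) {
    if (space == NULL) return;
    SemiSpace* evicted = cache_;
    cache_ = space;  // keep the most recently released space for reuse
    if (evicted != NULL) {
      free(evicted->pointer_);
      delete evicted;
    }
  }

  void* pointer() const { return pointer_; }
  intptr_t size() const { return size_; }

 private:
  SemiSpace(void* pointer, intptr_t size) : pointer_(pointer), size_(size) {}
  void* pointer_;
  intptr_t size_;
  static SemiSpace* cache_;
};

SemiSpace* SemiSpace::cache_ = NULL;
```

In use, a scavenger that frees its from-space via `Delete` leaves it in the cache, so another isolate's `New` of the same size gets the identical reservation back without touching the OS; this is why the real `Delete` must not be called on a write-protected space, since the cached memory must be immediately reusable.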
