Chromium Code Reviews

Diff: src/mark-compact.cc

Issue 509035: Compact map space when doing mark-sweep if after collection size of map space... (Closed) Base URL: http://v8.googlecode.com/svn/branches/bleeding_edge/
Patch Set: '' Created 10 years, 11 months ago
// Copyright 2006-2008 the V8 project authors. All rights reserved.
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
//       notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
//       copyright notice, this list of conditions and the following
//       disclaimer in the documentation and/or other materials provided
(...skipping 773 matching lines...)
void MarkCompactCollector::ClearNonLiveTransitions() {
  HeapObjectIterator map_iterator(Heap::map_space(), &CountMarkedCallback);
  // Iterate over the map space, setting map transitions that go from
  // a marked map to an unmarked map to null transitions. At the same time,
  // set all the prototype fields of maps back to their original value,
  // dropping the back pointers temporarily stored in the prototype field.
  // Setting the prototype field requires following the linked list of
  // back pointers, reversing them all at once. This allows us to find
  // those maps with map transitions that need to be nulled, and only
  // scan the descriptor arrays of those maps, not all maps.
  // All of these actions are carried out only on maps of JSObjects
  // and related subtypes.
  while (map_iterator.has_next()) {
    Map* map = reinterpret_cast<Map*>(map_iterator.next());
    if (!map->IsMarked() && map->IsByteArray()) continue;

    ASSERT(SafeIsMap(map));
    // Only JSObject and subtypes have map transitions and back pointers.
    if (map->instance_type() < FIRST_JS_OBJECT_TYPE) continue;
    if (map->instance_type() > JS_FUNCTION_TYPE) continue;
    // Follow the chain of back pointers to find the prototype.
(...skipping 356 matching lines...)
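The loop above hinges on one invariant: a transition edge from a live map to
a dead map must be nulled, because the dead map's cell is about to be
reclaimed and reused. A toy model of just that invariant (hypothetical types,
not V8's, with a single transition edge standing in for a descriptor array):

#include <cstddef>

struct ToyMap {
  bool marked;          // survived marking?
  ToyMap* transition;   // one transition edge, or NULL
};

void ClearDeadTransitions(ToyMap* maps, int count) {
  for (int i = 0; i < count; ++i) {
    if (!maps[i].marked) continue;  // dead maps get swept anyway
    if (maps[i].transition != NULL && !maps[i].transition->marked)
      maps[i].transition = NULL;    // the "null transition"
  }
}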


void MarkCompactCollector::DeallocateCodeBlock(Address start,
                                               int size_in_bytes) {
  Heap::code_space()->Free(start, size_in_bytes);
}


void MarkCompactCollector::DeallocateMapBlock(Address start,
                                              int size_in_bytes) {
  // Objects in map space are assumed to have size Map::kSize and a
  // valid map in their first word. Thus, we break the free block up into
  // chunks and free them separately.
  ASSERT(size_in_bytes % Map::kSize == 0);
  Heap::ClearRSetRange(start, size_in_bytes);
  Address end = start + size_in_bytes;
  for (Address a = start; a < end; a += Map::kSize) {
    Heap::map_space()->Free(a);
  }
}

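DeallocateMapBlock encodes the map-space invariant the rest of this patch
relies on: free regions are handed back as Map::kSize-sized cells, so every
address in the space keeps a valid object header and iteration can use a
fixed stride. A minimal stand-alone sketch of the chunking idea (hypothetical
names, not V8's allocator):

#include <cassert>
#include <cstddef>
#include <vector>

static std::vector<char*> free_cells;  // stand-in for the space's free list

void FreeFixedSizeRange(char* start, size_t size_in_bytes, size_t cell_size) {
  assert(size_in_bytes % cell_size == 0);  // mirrors the ASSERT above
  for (char* a = start; a < start + size_in_bytes; a += cell_size)
    free_cells.push_back(a);  // each cell remains individually reusable
}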
(...skipping 53 matching lines...)
  // done after promoting objects from the new space so we get the correct
  // allocation top.
  Heap::old_pointer_space()->MCWriteRelocationInfoToPage();
  Heap::old_data_space()->MCWriteRelocationInfoToPage();
  Heap::code_space()->MCWriteRelocationInfoToPage();
  Heap::map_space()->MCWriteRelocationInfoToPage();
  Heap::cell_space()->MCWriteRelocationInfoToPage();
}


class MapIterator : public HeapObjectIterator {
 public:
  MapIterator() : HeapObjectIterator(Heap::map_space(), &SizeCallback) { }

  explicit MapIterator(Address start)
      : HeapObjectIterator(Heap::map_space(), start, &SizeCallback) { }

 private:
  static int SizeCallback(HeapObject* unused) {
    USE(unused);
    return Map::kSize;
  }
};


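MapIterator specializes HeapObjectIterator with a size callback that always
answers Map::kSize, so the iterator advances one map-sized cell at a time
whether the cell holds a live Map or a free-list node. A simplified model of
that callback-driven advance (hypothetical code, not the real iterator):

typedef int (*SizeFn)(char* object);

static int FixedSize(char*) { return 64; }  // stands in for Map::kSize

void IterateCells(char* bottom, char* top, SizeFn size_of) {
  // e.g. IterateCells(space_bottom, space_top, &FixedSize);
  for (char* cur = bottom; cur < top; cur += size_of(cur)) {
    // visit the object at 'cur'
  }
}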
class MapCompact {
 public:
  explicit MapCompact(int live_maps)
      : live_maps_(live_maps),
        to_evacuate_start_(Heap::map_space()->TopAfterCompaction(live_maps)),
        map_to_evacuate_it_(to_evacuate_start_),
        first_map_to_evacuate_(
            reinterpret_cast<Map*>(HeapObject::FromAddress(to_evacuate_start_))) {
  }

  void CompactMaps() {
    // As we know the number of maps to evacuate beforehand,
    // we stop when there are no more vacant maps.
    for (Map* next_vacant_map = NextVacantMap();
         next_vacant_map;
         next_vacant_map = NextVacantMap()) {
      EvacuateMap(next_vacant_map, NextMapToEvacuate());
    }

#ifdef DEBUG
    CheckNoMapsToEvacuate();
#endif
  }

  void UpdateMapPointersInRoots() {
    Heap::IterateRoots(&map_updating_visitor_, VISIT_ONLY_STRONG);
    GlobalHandles::IterateWeakRoots(&map_updating_visitor_);
  }

  void FinishMapSpace() {
    // Iterate through the "to" part of the map space (the maps that
    // survive below the compaction boundary) and finish the move.
    MapIterator it;
    HeapObject* o = it.next();
    for (; o != first_map_to_evacuate_; o = it.next()) {
      Map* map = reinterpret_cast<Map*>(o);
      ASSERT(!map->IsMarked());
      ASSERT(!map->IsOverflowed());
      ASSERT(map->IsMap());
      Heap::UpdateRSet(map);
    }
  }

  void UpdateMapPointersInPagedSpace(PagedSpace* space) {
    ASSERT(space != Heap::map_space());

    PageIterator it(space, PageIterator::PAGES_IN_USE);
    while (it.has_next()) {
      Page* p = it.next();
      UpdateMapPointersInRange(p->ObjectAreaStart(), p->AllocationTop());
    }
  }

  void UpdateMapPointersInNewSpace() {
    NewSpace* space = Heap::new_space();
    UpdateMapPointersInRange(space->bottom(), space->top());
  }

  void UpdateMapPointersInLargeObjectSpace() {
    LargeObjectIterator it(Heap::lo_space());
    while (it.has_next()) {
      UpdateMapPointersInObject(it.next());
    }
  }

  void Finish() {
    Heap::map_space()->FinishCompaction(to_evacuate_start_, live_maps_);
  }

 private:
  int live_maps_;
  Address to_evacuate_start_;
  MapIterator vacant_map_it_;
  MapIterator map_to_evacuate_it_;
  Map* first_map_to_evacuate_;

  // Helper class for updating map pointers in HeapObjects.
  class MapUpdatingVisitor : public ObjectVisitor {
   public:
    void VisitPointer(Object** p) {
      UpdateMapPointer(p);
    }

    void VisitPointers(Object** start, Object** end) {
      for (Object** p = start; p < end; p++) UpdateMapPointer(p);
    }

   private:
    void UpdateMapPointer(Object** p) {
      if (!(*p)->IsHeapObject()) return;
      HeapObject* old_map = reinterpret_cast<HeapObject*>(*p);

      // Moved maps are tagged with an overflowed map word. They are the
      // only objects whose map word is overflowed, since marking is
      // already complete.
      MapWord map_word = old_map->map_word();
      if (!map_word.IsOverflowed()) return;

      *p = GetForwardedMap(map_word);
    }
  };

  static MapUpdatingVisitor map_updating_visitor_;

  static Map* NextMap(MapIterator* it, HeapObject* last, bool live) {
    while (true) {
      ASSERT(it->has_next());
      HeapObject* next = it->next();
      if (next == last)
        return NULL;
      ASSERT(!next->IsOverflowed());
      ASSERT(!next->IsMarked());
      ASSERT(next->IsMap() || FreeListNode::IsFreeListNode(next));
      if (next->IsMap() == live)
        return reinterpret_cast<Map*>(next);
    }
  }

  Map* NextVacantMap() {
    Map* map = NextMap(&vacant_map_it_, first_map_to_evacuate_, false);
    ASSERT(map == NULL || FreeListNode::IsFreeListNode(map));
    return map;
  }

  Map* NextMapToEvacuate() {
    Map* map = NextMap(&map_to_evacuate_it_, NULL, true);
    ASSERT(map != NULL);
    ASSERT(map->IsMap());
    return map;
  }

  static void EvacuateMap(Map* vacant_map, Map* map_to_evacuate) {
    ASSERT(FreeListNode::IsFreeListNode(vacant_map));
    ASSERT(map_to_evacuate->IsMap());

    memcpy(reinterpret_cast<void*>(vacant_map->address()),
           reinterpret_cast<void*>(map_to_evacuate->address()),
           Map::kSize);
    ASSERT(vacant_map->IsMap());  // Due to memcpy above.

    MapWord forwarding_map_word = MapWord::FromMap(vacant_map);
    forwarding_map_word.SetOverflow();
    map_to_evacuate->set_map_word(forwarding_map_word);

    ASSERT(map_to_evacuate->map_word().IsOverflowed());
    ASSERT(GetForwardedMap(map_to_evacuate->map_word()) == vacant_map);
  }

  static Map* GetForwardedMap(MapWord map_word) {
    ASSERT(map_word.IsOverflowed());
    map_word.ClearOverflow();
    Map* new_map = map_word.ToMap();
    ASSERT_MAP_ALIGNED(new_map->address());
    return new_map;
  }

  static int UpdateMapPointersInObject(HeapObject* obj) {
    ASSERT(!obj->IsMarked());
    Map* map = obj->map();
    ASSERT(Heap::map_space()->Contains(map));
    MapWord map_word = map->map_word();
    ASSERT(!map_word.IsMarked());
    if (map_word.IsOverflowed()) {
      Map* new_map = GetForwardedMap(map_word);
      ASSERT(Heap::map_space()->Contains(new_map));
      obj->set_map(new_map);

#ifdef DEBUG
      if (FLAG_gc_verbose) {
        PrintF("update %p : %p -> %p\n", obj->address(), map, new_map);
      }
#endif
    }

    int size = obj->SizeFromMap(map);
    obj->IterateBody(map->instance_type(), size, &map_updating_visitor_);
    return size;
  }

  static void UpdateMapPointersInRange(Address start, Address end) {
    HeapObject* object;
    int size;
    for (Address current = start; current < end; current += size) {
      object = HeapObject::FromAddress(current);
      size = UpdateMapPointersInObject(object);
      ASSERT(size > 0);
    }
  }

#ifdef DEBUG
  void CheckNoMapsToEvacuate() {
    if (!FLAG_enable_slow_asserts)
      return;

    while (map_to_evacuate_it_.has_next())
      ASSERT(FreeListNode::IsFreeListNode(map_to_evacuate_it_.next()));
  }
#endif
};

MapCompact::MapUpdatingVisitor MapCompact::map_updating_visitor_;


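Taken together, MapIterator and MapCompact implement a two-finger compaction
over fixed-size cells: the post-compaction boundary is known up front from
the live-map count, one cursor scans for vacant cells below the boundary,
another scans for live maps at or above it, each live map is memcpy'd down,
and the evacuated cell is left holding its new address in an overflow-tagged
map word (unambiguous because marking is already complete, so no other
object's map word is overflowed). A self-contained toy model of the scheme,
using plain ints instead of V8 objects:

#include <cstdio>
#include <vector>

int main() {
  // 1 = live map, 0 = vacant (free-list) cell, in address order.
  std::vector<int> cells = {1, 0, 1, 1, 0, 1, 1, 0};
  std::vector<int> forwarding(cells.size(), -1);

  size_t live = 0;
  for (size_t i = 0; i < cells.size(); ++i) live += cells[i];
  // After compaction, all live cells fit below index 'live' (the boundary).

  size_t vacant = 0;      // finger 1: holes below the boundary
  size_t evacuee = live;  // finger 2: live cells at or above the boundary
  while (true) {
    while (vacant < live && cells[vacant] == 1) ++vacant;
    while (evacuee < cells.size() && cells[evacuee] == 0) ++evacuee;
    if (vacant >= live || evacuee >= cells.size()) break;
    cells[vacant] = 1;                               // memcpy of the map body
    cells[evacuee] = 0;
    forwarding[evacuee] = static_cast<int>(vacant);  // forwarding "map word"
    ++vacant;
    ++evacuee;
  }

  // Pointer-update pass: rewrite references through the forwarding table.
  for (size_t i = 0; i < forwarding.size(); ++i) {
    if (forwarding[i] >= 0)
      std::printf("map in cell %zu moved to cell %d\n", i, forwarding[i]);
  }
  return 0;
}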
void MarkCompactCollector::SweepSpaces() {
  ASSERT(state_ == SWEEP_SPACES);
  ASSERT(!IsCompacting());
  // Noncompacting collections simply sweep the spaces to clear the mark
  // bits and free the nonlive blocks (for old and map spaces). We sweep
  // the map space last because freeing non-live maps overwrites them and
  // the other spaces rely on possibly non-live maps to get the sizes for
  // non-live objects.
  SweepSpace(Heap::old_pointer_space(), &DeallocateOldPointerBlock);
  SweepSpace(Heap::old_data_space(), &DeallocateOldDataBlock);
  SweepSpace(Heap::code_space(), &DeallocateCodeBlock);
  SweepSpace(Heap::cell_space(), &DeallocateCellBlock);
  SweepSpace(Heap::new_space());
  SweepSpace(Heap::map_space(), &DeallocateMapBlock);
  int live_maps = Heap::map_space()->Size() / Map::kSize;
  ASSERT(live_map_objects_ == live_maps);

  if (Heap::map_space()->NeedsCompaction(live_maps)) {
    MapCompact map_compact(live_maps);

    map_compact.CompactMaps();
    map_compact.UpdateMapPointersInRoots();

    map_compact.FinishMapSpace();
    PagedSpaces spaces;
    while (PagedSpace* space = spaces.next()) {
      if (space == Heap::map_space()) continue;
      map_compact.UpdateMapPointersInPagedSpace(space);
    }
    map_compact.UpdateMapPointersInNewSpace();
    map_compact.UpdateMapPointersInLargeObjectSpace();

    map_compact.Finish();
  }
}
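SweepSpaces now derives the live-map count directly from the size of the
swept map space and compacts only when Heap::map_space()->NeedsCompaction(live_maps)
agrees; the exact threshold is elided in the issue title. As a hypothetical
illustration only (not the predicate this patch adds), such a trigger
usually asks whether the live maps would fit in fewer pages than the space
currently occupies:

// Assumed heuristic, for illustration; names are invented.
bool NeedsCompactionSketch(int live_maps, int maps_per_page, int pages_in_use) {
  int pages_needed = (live_maps + maps_per_page - 1) / maps_per_page;
  return pages_needed < pages_in_use;
}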


// Iterate the live objects in a range of addresses (e.g., a page or a
// semispace). The live regions of the range have been linked into a list.
// The first live region is [first_live_start, first_live_end), and the last
// address in the range is top. The callback function is used to get the
// size of each live object.
int MarkCompactCollector::IterateLiveObjectsInRange(
    Address start,
(...skipping 487 matching lines...)

void MarkCompactCollector::RebuildRSets() {
#ifdef DEBUG
  ASSERT(state_ == RELOCATE_OBJECTS);
  state_ = REBUILD_RSETS;
#endif
  Heap::RebuildRSets();
}

} }  // namespace v8::internal