Chromium Code Reviews

Side by Side Diff: src/heap.cc

Issue 6309012: * Complete new store buffer on ia32. The store buffer now covers... (Closed) Base URL: http://v8.googlecode.com/svn/branches/experimental/gc/
Patch Set: '' Created 9 years, 11 months ago
1 // Copyright 2010 the V8 project authors. All rights reserved. 1 // Copyright 2010 the V8 project authors. All rights reserved.
2 // Redistribution and use in source and binary forms, with or without 2 // Redistribution and use in source and binary forms, with or without
3 // modification, are permitted provided that the following conditions are 3 // modification, are permitted provided that the following conditions are
4 // met: 4 // met:
5 // 5 //
6 // * Redistributions of source code must retain the above copyright 6 // * Redistributions of source code must retain the above copyright
7 // notice, this list of conditions and the following disclaimer. 7 // notice, this list of conditions and the following disclaimer.
8 // * Redistributions in binary form must reproduce the above 8 // * Redistributions in binary form must reproduce the above
9 // copyright notice, this list of conditions and the following 9 // copyright notice, this list of conditions and the following
10 // disclaimer in the documentation and/or other materials provided 10 // disclaimer in the documentation and/or other materials provided
(...skipping 953 matching lines...)
964 LOG(ResourceEvent("scavenge", "begin")); 964 LOG(ResourceEvent("scavenge", "begin"));
965 965
966 // Clear descriptor cache. 966 // Clear descriptor cache.
967 DescriptorLookupCache::Clear(); 967 DescriptorLookupCache::Clear();
968 968
969 // Used for updating survived_since_last_expansion_ at function end. 969 // Used for updating survived_since_last_expansion_ at function end.
970 intptr_t survived_watermark = PromotedSpaceSize(); 970 intptr_t survived_watermark = PromotedSpaceSize();
971 971
972 CheckNewSpaceExpansionCriteria(); 972 CheckNewSpaceExpansionCriteria();
973 973
974 StoreBuffer::Verify();
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 we should move verification of storebuffer into Heap::Verify.
Erik Corry 2011/01/24 13:56:00 Done.
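A minimal sketch of the shape this refactoring presumably takes, assuming a central Heap::Verify() debug routine (only StoreBuffer::Verify appears in this diff, so the destination is an assumption):

    #ifdef DEBUG
    // Sketch only: store-buffer consistency is checked together with the
    // rest of the heap instead of ad hoc at the top of Scavenge().
    void Heap::Verify() {
      // ... existing heap verification ...
      StoreBuffer::Verify();  // assumed destination of the moved call
    }
    #endif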
975
974 // Flip the semispaces. After flipping, to space is empty, from space has 976 // Flip the semispaces. After flipping, to space is empty, from space has
975 // live objects. 977 // live objects.
976 new_space_.Flip(); 978 new_space_.Flip();
977 new_space_.ResetAllocationInfo(); 979 new_space_.ResetAllocationInfo();
978 980
979 // We need to sweep newly copied objects which can be either in the 981 // We need to sweep newly copied objects which can be either in the
980 // to space or promoted to the old generation. For to-space 982 // to space or promoted to the old generation. For to-space
981 // objects, we treat the bottom of the to space as a queue. Newly 983 // objects, we treat the bottom of the to space as a queue. Newly
982 // copied and unswept objects lie between a 'front' mark and the 984 // copied and unswept objects lie between a 'front' mark and the
983 // allocation pointer. 985 // allocation pointer.
984 // 986 //
985 // Promoted objects can go into various old-generation spaces, and 987 // Promoted objects can go into various old-generation spaces, and
986 // can be allocated internally in the spaces (from the free list). 988 // can be allocated internally in the spaces (from the free list).
987 // We treat the top of the to space as a queue of addresses of 989 // We treat the top of the to space as a queue of addresses of
988 // promoted objects. The addresses of newly promoted and unswept 990 // promoted objects. The addresses of newly promoted and unswept
989 // objects lie between a 'front' mark and a 'rear' mark that is 991 // objects lie between a 'front' mark and a 'rear' mark that is
990 // updated as a side effect of promoting an object. 992 // updated as a side effect of promoting an object.
991 // 993 //
992 // There is guaranteed to be enough room at the top of the to space 994 // There is guaranteed to be enough room at the top of the to space
993 // for the addresses of promoted objects: every object promoted 995 // for the addresses of promoted objects: every object promoted
994 // frees up its size in bytes from the top of the new space, and 996 // frees up its size in bytes from the top of the new space, and
995 // objects are at least one pointer in size. 997 // objects are at least one pointer in size.
996 Address new_space_front = new_space_.ToSpaceLow(); 998 Address new_space_front = new_space_.ToSpaceLow();
997 promotion_queue.Initialize(new_space_.ToSpaceHigh()); 999 promotion_queue.Initialize(new_space_.ToSpaceHigh());
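The comment above describes two queues: newly copied objects between the to-space bottom and the allocation pointer, and promoted-object addresses growing down from the to-space top. A self-contained sketch of the downward-growing queue, with hypothetical field names rather than V8's actual PromotionQueue layout:

    // Illustrative sketch: addresses of promoted objects are pushed at
    // 'rear' and popped at 'front'; both grow downward from to-space top.
    class PromotionQueueSketch {
     public:
      void Initialize(HeapObject** to_space_high) {
        front_ = rear_ = to_space_high;
      }
      bool is_empty() { return front_ == rear_; }
      void insert(HeapObject* object) {
        *(--rear_) = object;  // each promotion frees >= one pointer of room
      }
      HeapObject* remove() { return *(--front_); }  // FIFO: oldest entry
     private:
      HeapObject** front_;  // oldest unswept entry (higher address)
      HeapObject** rear_;   // next free slot (lower address)
    };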
998 1000
999 ScavengeVisitor scavenge_visitor; 1001 ScavengeVisitor scavenge_visitor;
1000 // Copy roots. 1002 // Copy roots.
1001 IterateRoots(&scavenge_visitor, VISIT_ALL_IN_SCAVENGE); 1003 IterateRoots(&scavenge_visitor, VISIT_ALL_IN_SCAVENGE);
1002 1004
1005 #ifdef DEBUG
1006 StoreBuffer::Clean();
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 Why do we do this only in debug? Comment?
Erik Corry 2011/01/24 13:56:00 When we actually use the store buffer info it will
1007 #endif
1008
1003 // Copy objects reachable from the old generation. By definition, 1009 // Copy objects reachable from the old generation. By definition,
1004 // there are no intergenerational pointers in code or data spaces. 1010 // there are no intergenerational pointers in code or data spaces.
1005 IterateDirtyRegions(old_pointer_space_, 1011 IterateDirtyRegions(old_pointer_space_,
1006 &IteratePointersInDirtyRegion, 1012 &IteratePointersInDirtyRegion,
1007 &ScavengePointer, 1013 &ScavengePointer,
1008 WATERMARK_CAN_BE_INVALID); 1014 WATERMARK_CAN_BE_INVALID);
1009 1015
1010 IterateDirtyRegions(map_space_, 1016 IterateDirtyRegions(map_space_,
1011 &IteratePointersInDirtyMapsRegion, 1017 &IteratePointersInDirtyMapsRegion,
1012 &ScavengePointer, 1018 &ScavengePointer,
(...skipping 3038 matching lines...)
4051 a += kPointerSize) { 4057 a += kPointerSize) {
4052 Memory::Address_at(a) = kFromSpaceZapValue; 4058 Memory::Address_at(a) = kFromSpaceZapValue;
4053 } 4059 }
4054 } 4060 }
4055 #endif // DEBUG 4061 #endif // DEBUG
4056 4062
4057 4063
4058 bool Heap::IteratePointersInDirtyRegion(Address start, 4064 bool Heap::IteratePointersInDirtyRegion(Address start,
4059 Address end, 4065 Address end,
4060 ObjectSlotCallback copy_object_func) { 4066 ObjectSlotCallback copy_object_func) {
4061 Address slot_address = start;
4062 bool pointers_to_new_space_found = false; 4067 bool pointers_to_new_space_found = false;
4063 4068
4064 while (slot_address < end) { 4069 for (Address slot_address = start;
4070 slot_address < end;
4071 slot_address += kPointerSize) {
4065 Object** slot = reinterpret_cast<Object**>(slot_address); 4072 Object** slot = reinterpret_cast<Object**>(slot_address);
4066 if (Heap::InNewSpace(*slot)) { 4073 if (Heap::InNewSpace(*slot)) {
4067 ASSERT((*slot)->IsHeapObject()); 4074 ASSERT((*slot)->IsHeapObject());
4068 copy_object_func(reinterpret_cast<HeapObject**>(slot)); 4075 copy_object_func(reinterpret_cast<HeapObject**>(slot));
4069 if (Heap::InNewSpace(*slot)) { 4076 if (Heap::InNewSpace(*slot)) {
4070 ASSERT((*slot)->IsHeapObject()); 4077 ASSERT((*slot)->IsHeapObject());
4078 StoreBuffer::Mark(reinterpret_cast<Address>(slot));
4071 pointers_to_new_space_found = true; 4079 pointers_to_new_space_found = true;
4072 } 4080 }
4073 } 4081 }
4074 slot_address += kPointerSize;
4075 } 4082 }
4076 return pointers_to_new_space_found; 4083 return pointers_to_new_space_found;
4077 } 4084 }
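StoreBuffer::Mark is used above but not defined in this file. A self-contained sketch of the assumed semantics, recording the address of a slot that holds a new-space pointer in a bump-pointer buffer (names, capacity, and overflow policy are all assumptions):

    typedef unsigned char* Address;
    static const int kSketchBufferSize = 1024;   // hypothetical capacity
    static Address buffer_[kSketchBufferSize];
    static Address* top_ = buffer_;              // next free entry
    static Address* limit_ = buffer_ + kSketchBufferSize;

    static void StoreBufferMarkSketch(Address slot) {
      if (top_ < limit_) {
        *top_++ = slot;  // remember a slot pointing into new space
      } else {
        // The real store buffer must handle overflow (e.g. compact the
        // buffer or trigger a GC); that policy is not visible in this diff.
      }
    }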
4078 4085
4079 4086
4080 // Compute start address of the first map following given addr. 4087 // Compute start address of the first map following given addr.
4081 static inline Address MapStartAlign(Address addr) { 4088 static inline Address MapStartAlign(Address addr) {
4082 Address page = Page::FromAddress(addr)->ObjectAreaStart(); 4089 Address page = Page::FromAddress(addr)->ObjectAreaStart();
4083 return page + (((addr - page) + (Map::kSize - 1)) / Map::kSize * Map::kSize); 4090 return page + (((addr - page) + (Map::kSize - 1)) / Map::kSize * Map::kSize);
4084 } 4091 }
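The round-up arithmetic in MapStartAlign is easy to sanity-check with concrete numbers (the Map::kSize value below is hypothetical, chosen only for illustration):

    #include <cassert>
    int main() {
      const int kMapSize = 88;   // assumed value, for illustration only
      int offset = 100;          // plays the role of (addr - page)
      int aligned = (offset + (kMapSize - 1)) / kMapSize * kMapSize;
      assert(aligned == 176);    // first multiple of kMapSize >= offset
      assert(aligned % kMapSize == 0 && aligned >= offset);
      return 0;
    }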
(...skipping 84 matching lines...)
4169 } 4176 }
4170 4177
4171 return contains_pointers_to_new_space; 4178 return contains_pointers_to_new_space;
4172 } 4179 }
4173 4180
4174 4181
4175 void Heap::IterateAndMarkPointersToFromSpace(Address start, 4182 void Heap::IterateAndMarkPointersToFromSpace(Address start,
4176 Address end, 4183 Address end,
4177 ObjectSlotCallback callback) { 4184 ObjectSlotCallback callback) {
4178 Address slot_address = start; 4185 Address slot_address = start;
4179 Page* page = Page::FromAddress(start);
4180
4181 uint32_t marks = page->GetRegionMarks();
4182
4183 while (slot_address < end) { 4186 while (slot_address < end) {
4184 Object** slot = reinterpret_cast<Object**>(slot_address); 4187 Object** slot = reinterpret_cast<Object**>(slot_address);
4185 if (Heap::InFromSpace(*slot)) { 4188 if (Heap::InFromSpace(*slot)) {
4186 ASSERT((*slot)->IsHeapObject()); 4189 ASSERT((*slot)->IsHeapObject());
4187 callback(reinterpret_cast<HeapObject**>(slot)); 4190 callback(reinterpret_cast<HeapObject**>(slot));
4188 if (Heap::InNewSpace(*slot)) { 4191 if (Heap::InNewSpace(*slot)) {
4189 ASSERT((*slot)->IsHeapObject()); 4192 ASSERT((*slot)->IsHeapObject());
4190 marks |= page->GetRegionMaskForAddress(slot_address); 4193 StoreBuffer::Mark(reinterpret_cast<Address>(slot));
4191 } 4194 }
4192 } 4195 }
4193 slot_address += kPointerSize; 4196 slot_address += kPointerSize;
4194 } 4197 }
4195
4196 page->SetRegionMarks(marks);
4197 } 4198 }
4198 4199
4199 4200
4200 uint32_t Heap::IterateDirtyRegions( 4201 uint32_t Heap::IterateDirtyRegions(
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 If you decided to kill the cardmarking write barrier, maybe you will remove the marks parameter from this function as well?
Erik Corry 2011/01/24 13:56:00 That is certainly the plan.
4201 uint32_t marks, 4202 uint32_t marks,
4202 Address area_start, 4203 Address area_start,
4203 Address area_end, 4204 Address area_end,
4204 DirtyRegionCallback visit_dirty_region, 4205 DirtyRegionCallback visit_dirty_region,
4205 ObjectSlotCallback copy_object_func) { 4206 ObjectSlotCallback copy_object_func) {
4206 #ifndef ENABLE_CARDMARKING_WRITE_BARRIER
4207 ASSERT(marks == Page::kAllRegionsDirtyMarks); 4207 ASSERT(marks == Page::kAllRegionsDirtyMarks);
4208 visit_dirty_region(area_start, area_end, copy_object_func); 4208 visit_dirty_region(area_start, area_end, copy_object_func);
4209 return Page::kAllRegionsDirtyMarks; 4209 return Page::kAllRegionsDirtyMarks;
4210 #else 4210 }
4211 uint32_t newmarks = 0;
4212 uint32_t mask = 1;
4213 4211
4214 if (area_start >= area_end) {
4215 return newmarks;
4216 }
4217 4212
4218 Address region_start = area_start; 4213 #ifdef DEBUG
4214 // Check that the store buffer contains all intergenerational pointers by
4215 // scanning a page and ensuring that all pointers to young space are in the
4216 // store buffer.
4217 void Heap::OldPointerSpaceCheckStoreBuffer(
4218 ExpectedPageWatermarkState watermark_state) {
4219 OldSpace* space = old_pointer_space();
4220 PageIterator pages(space, PageIterator::PAGES_IN_USE);
4219 4221
4220 // area_start does not necessarily coincide with start of the first region. 4222 space->free_list()->Zap();
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 Why do we need to Zap free_list? Can we zap it with a special value?
Erik Corry 2011/01/24 13:56:00 We zap the free list so that we can walk the whole page as raw words without stale free-list entries looking like pointers into new space.
4221 // Thus to calculate the beginning of the next region we have to align
4222 // area_start by Page::kRegionSize.
4223 Address second_region =
4224 reinterpret_cast<Address>(
4225 reinterpret_cast<intptr_t>(area_start + Page::kRegionSize) &
4226 ~Page::kRegionAlignmentMask);
4227 4223
4228 // Next region might be beyond area_end. 4224 StoreBuffer::SortUniq();
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 Verifying StoreBuffer has an unfortunate side-effect: it sorts and dedupes the buffer.
Erik Corry 2011/01/24 13:56:00 I think we will be sorting and uniqifying on each scavenge anyway.
4229 Address region_end = Min(second_region, area_end);
4230 4225
4231 if (marks & mask) { 4226 while (pages.has_next()) {
4232 if (visit_dirty_region(region_start, region_end, copy_object_func)) { 4227 Page* page = pages.next();
4233 newmarks |= mask; 4228 Object** current = reinterpret_cast<Object**>(page->ObjectAreaStart());
4234 }
4235 }
4236 mask <<= 1;
4237 4229
4238 // Iterate subsequent regions which fully lay inside [area_start, area_end[. 4230 // Do not try to visit pointers beyond page allocation watermark.
4239 region_start = region_end; 4231 // Page can contain garbage pointers there.
4240 region_end = region_start + Page::kRegionSize; 4232 Address end;
4241 4233
4242 while (region_end <= area_end) { 4234 if (watermark_state == WATERMARK_SHOULD_BE_VALID ||
4243 if (marks & mask) { 4235 page->IsWatermarkValid()) {
4244 if (visit_dirty_region(region_start, region_end, copy_object_func)) { 4236 end = page->AllocationWatermark();
4245 newmarks |= mask; 4237 } else {
4246 } 4238 end = page->CachedAllocationWatermark();
4247 } 4239 }
4248 4240
4249 region_start = region_end; 4241 Object*** store_buffer_position = StoreBuffer::Start();
4250 region_end = region_start + Page::kRegionSize; 4242 Object*** store_buffer_top = StoreBuffer::Top();
4251 4243
4252 mask <<= 1; 4244 Object** limit = reinterpret_cast<Object**>(end);
4253 } 4245 for ( ; current < limit; current++) {
4254 4246 Object* o = *current;
4255 if (region_start != area_end) { 4247 if (o->IsSmi()) continue;
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 InNewSpace(Object* o) can't return true for a smi.
Erik Corry 2011/01/24 13:56:00 Done.
4256 // A small piece of area left uniterated because area_end does not coincide 4248 // We have to check that the pointer does not point into new space
4257 // with region end. Check whether region covering last part of area is 4249 // without trying to cast it to a heap object since the hash field of
4258 // dirty. 4250 // a string can contain values like 1 and 3 which are tagged null
4259 if (marks & mask) { 4251 // pointers.
4260 if (visit_dirty_region(region_start, area_end, copy_object_func)) { 4252 if (!InNewSpace(o)) continue;
4261 newmarks |= mask; 4253 while (*store_buffer_position < current) {
4254 store_buffer_position++;
4255 ASSERT(store_buffer_position < store_buffer_top);
4256 }
4257 if (*store_buffer_position != current) {
4258 Object** obj_start = current;
4259 while (!(*obj_start)->IsMap()) obj_start--;
4260 UNREACHABLE();
4262 } 4261 }
4263 } 4262 }
4264 } 4263 }
4265
4266 return newmarks;
4267 #endif
4268 } 4264 }
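The check above relies on two properties: after StoreBuffer::SortUniq() the buffer is sorted and duplicate-free, and the page is scanned in increasing address order, so each lookup only advances a cursor. A self-contained sketch of the same merge-style membership test (all names illustrative):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Returns true iff every slot in new_space_slots (which must be in
    // increasing address order) is recorded in the store buffer.
    bool AllSlotsRecorded(std::vector<void**>& store_buffer,
                          const std::vector<void**>& new_space_slots) {
      // SortUniq: sort the buffer and drop duplicate entries.
      std::sort(store_buffer.begin(), store_buffer.end());
      store_buffer.erase(
          std::unique(store_buffer.begin(), store_buffer.end()),
          store_buffer.end());
      size_t pos = 0;  // plays the role of store_buffer_position
      for (size_t i = 0; i < new_space_slots.size(); i++) {
        void** slot = new_space_slots[i];
        while (pos < store_buffer.size() && store_buffer[pos] < slot) pos++;
        if (pos == store_buffer.size() || store_buffer[pos] != slot) {
          return false;  // an intergenerational pointer was not recorded
        }
      }
      return true;
    }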
4269 4265
4270 4266
4267 void Heap::MapSpaceCheckStoreBuffer(
4268 ExpectedPageWatermarkState watermark_state) {
4269 MapSpace* space = map_space();
4270 PageIterator pages(space, PageIterator::PAGES_IN_USE);
4271
4272 space->free_list()->Zap();
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 Can we zap it with a special value to detect cases where zapped entries are accidentally used?
Erik Corry 2011/01/24 13:56:00 Done.
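A minimal sketch of zapping with a recognizable sentinel, in the spirit of kFromSpaceZapValue earlier in this file (the constant and names below are assumptions, not the values chosen in the follow-up patch):

    #include <stdint.h>

    static const uintptr_t kFreeListZapValue = 0xdeadbeef;  // hypothetical

    static void ZapRange(uintptr_t* start, uintptr_t* end) {
      for (uintptr_t* p = start; p < end; p++) {
        *p = kFreeListZapValue;  // a later read of this value flags a bug
      }
    }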
4273
4274 StoreBuffer::SortUniq();
4275
4276 while (pages.has_next()) {
4277 Page* page = pages.next();
4278
4279 // Do not try to visit pointers beyond page allocation watermark.
4280 // Page can contain garbage pointers there.
4281 Address end;
4282
4283 if (watermark_state == WATERMARK_SHOULD_BE_VALID ||
4284 page->IsWatermarkValid()) {
4285 end = page->AllocationWatermark();
4286 } else {
4287 end = page->CachedAllocationWatermark();
4288 }
4289
4290 Address map_aligned_current = MapStartAlign(page->ObjectAreaStart());
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 both ObjectAreaStart and watermark should be aligned to Map::kSize already.
Erik Corry 2011/01/24 13:56:00 Done.
4291 Address map_aligned_end = MapEndAlign(end);
4292
4293 Object*** store_buffer_position = StoreBuffer::Start();
4294 Object*** store_buffer_top = StoreBuffer::Top();
4295
4296 for ( ;
4297 map_aligned_current < map_aligned_end;
4298 map_aligned_current += Map::kSize) {
4299 ASSERT(!Heap::InNewSpace(Memory::Object_at(map_aligned_current)));
4300 ASSERT(Memory::Object_at(map_aligned_current)->IsMap());
4301
4302 Object** current = reinterpret_cast<Object**>(
4303 map_aligned_current + Map::kPointerFieldsBeginOffset);
4304 Object** limit = reinterpret_cast<Object**>(
4305 map_aligned_current + Map::kPointerFieldsEndOffset);
4306
4307 for ( ; current < limit; current++) {
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 This loop looks familiar. I saw a similar one above.
Erik Corry 2011/01/24 13:56:00 Done.
4308 Object* o = *current;
4309 if (o->IsSmi()) continue;
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 InNewSpace(Object* o) can't return true for a smi.
Erik Corry 2011/01/24 13:56:00 Done.
4310 HeapObject* heap_object = HeapObject::cast(o);
4311 if (!InNewSpace(heap_object)) continue;
4312 while (*store_buffer_position < current) {
4313 store_buffer_position++;
4314 ASSERT(store_buffer_position < store_buffer_top);
4315 }
4316 ASSERT(*store_buffer_position == current);
4317 }
4318 }
4319 }
4320 }
4321
4322
4323 void Heap::LargeObjectSpaceCheckStoreBuffer() {
4324 LargeObjectIterator it(lo_space());
4325 for (HeapObject* object = it.next(); object != NULL; object = it.next()) {
4326 // We only have code, sequential strings, or fixed arrays in large
4327 // object space, and only fixed arrays can possibly contain pointers to
4328 // the young generation.
4329 if (object->IsFixedArray()) {
4330 Object*** store_buffer_position = StoreBuffer::Start();
4331 Object*** store_buffer_top = StoreBuffer::Top();
4332 Object** current = reinterpret_cast<Object**>(object->address());
4333 Object** limit =
4334 reinterpret_cast<Object**>(object->address() + object->Size());
4335 for ( ; current < limit; current++) {
Vyacheslav Egorov (Chromium) 2011/01/21 18:18:18 This loop looks familiar. Consider moving it to a separate function.
Erik Corry 2011/01/24 13:56:00 Done.
4336 Object* o = *current;
4337 if (o->IsSmi()) continue;
4338 HeapObject* heap_object = HeapObject::cast(o);
4339 if (!InNewSpace(heap_object)) continue;
4340 while (*store_buffer_position < current) {
4341 store_buffer_position++;
4342 ASSERT(store_buffer_position < store_buffer_top);
4343 }
4344 ASSERT(*store_buffer_position == current);
4345 }
4346 }
4347 }
4348 }
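The same scan appears three times above (old pointer space, map space, large object space), and both comment threads asking for a shared helper are marked Done. A sketch of what the factored-out function could look like, with a hypothetical name and the simplification from the smi comments applied (InNewSpace already rejects smis):

    // Sketch only. Returns the advanced cursor so callers that scan several
    // ranges in address order can resume where the previous range left off.
    static Object*** CheckStoreBufferForRange(Object** current,
                                              Object** limit,
                                              Object*** position,
                                              Object*** top) {
      for ( ; current < limit; current++) {
        Object* o = *current;
        // InNewSpace(Object*) is safe on smis and on tagged-null string
        // hash fields, so no HeapObject cast is needed before the check.
        if (!Heap::InNewSpace(o)) continue;
        while (*position < current) {
          position++;
          ASSERT(position < top);
        }
        ASSERT(*position == current);
      }
      return position;
    }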
4349
4350
4351 #endif
4352
4271 4353
4272 void Heap::IterateDirtyRegions( 4354 void Heap::IterateDirtyRegions(
4273 PagedSpace* space, 4355 PagedSpace* space,
4274 DirtyRegionCallback visit_dirty_region, 4356 DirtyRegionCallback visit_dirty_region,
4275 ObjectSlotCallback copy_object_func, 4357 ObjectSlotCallback copy_object_func,
4276 ExpectedPageWatermarkState expected_page_watermark_state) { 4358 ExpectedPageWatermarkState expected_page_watermark_state) {
4277 4359
4278 PageIterator it(space, PageIterator::PAGES_IN_USE); 4360 PageIterator pages(space, PageIterator::PAGES_IN_USE);
4279 4361
4280 while (it.has_next()) { 4362 while (pages.has_next()) {
4281 Page* page = it.next(); 4363 Page* page = pages.next();
4282 uint32_t marks = page->GetRegionMarks(); 4364 Address start = page->ObjectAreaStart();
4283 4365
4284 if (marks != Page::kAllRegionsCleanMarks) { 4366 // Do not try to visit pointers beyond page allocation watermark.
4285 Address start = page->ObjectAreaStart(); 4367 // Page can contain garbage pointers there.
4368 Address end;
4286 4369
4287 // Do not try to visit pointers beyond page allocation watermark. 4370 if ((expected_page_watermark_state == WATERMARK_SHOULD_BE_VALID) ||
4288 // Page can contain garbage pointers there. 4371 page->IsWatermarkValid()) {
4289 Address end; 4372 end = page->AllocationWatermark();
4373 } else {
4374 end = page->CachedAllocationWatermark();
4375 }
4290 4376
4291 if ((expected_page_watermark_state == WATERMARK_SHOULD_BE_VALID) || 4377 ASSERT(space == old_pointer_space_ ||
4292 page->IsWatermarkValid()) { 4378 (space == map_space_ &&
4293 end = page->AllocationWatermark(); 4379 ((page->ObjectAreaStart() - end) % Map::kSize == 0)));
4294 } else {
4295 end = page->CachedAllocationWatermark();
4296 }
4297 4380
4298 ASSERT(space == old_pointer_space_ || 4381 IterateDirtyRegions(Page::kAllRegionsDirtyMarks,
4299 (space == map_space_ && 4382 start,
4300 ((page->ObjectAreaStart() - end) % Map::kSize == 0))); 4383 end,
4301 4384 visit_dirty_region,
4302 page->SetRegionMarks(IterateDirtyRegions(marks, 4385 copy_object_func);
4303 start,
4304 end,
4305 visit_dirty_region,
4306 copy_object_func));
4307 }
4308 4386
4309 // Mark page watermark as invalid to maintain watermark validity invariant. 4387 // Mark page watermark as invalid to maintain watermark validity invariant.
4310 // See Page::FlipMeaningOfInvalidatedWatermarkFlag() for details. 4388 // See Page::FlipMeaningOfInvalidatedWatermarkFlag() for details.
4311 page->InvalidateWatermark(true); 4389 page->InvalidateWatermark(true);
4312 } 4390 }
4313 } 4391 }
4314 4392
4315 4393
4316 void Heap::IterateRoots(ObjectVisitor* v, VisitMode mode) { 4394 void Heap::IterateRoots(ObjectVisitor* v, VisitMode mode) {
4317 IterateStrongRoots(v, mode); 4395 IterateStrongRoots(v, mode);
(...skipping 1182 matching lines...)
5500 void ExternalStringTable::TearDown() { 5578 void ExternalStringTable::TearDown() {
5501 new_space_strings_.Free(); 5579 new_space_strings_.Free();
5502 old_space_strings_.Free(); 5580 old_space_strings_.Free();
5503 } 5581 }
5504 5582
5505 5583
5506 List<Object*> ExternalStringTable::new_space_strings_; 5584 List<Object*> ExternalStringTable::new_space_strings_;
5507 List<Object*> ExternalStringTable::old_space_strings_; 5585 List<Object*> ExternalStringTable::old_space_strings_;
5508 5586
5509 } } // namespace v8::internal 5587 } } // namespace v8::internal
