Created: 7 years, 2 months ago by Philippe
Modified: 7 years ago
CC: chromium-reviews, erikwright+watch_chromium.org, gavinp+memory_chromium.org, Feng Qian, klobag.chromium, Ian Vollick, digit1, ppi, jzern, reveman
Base URL: svn://svn.chromium.org/chrome/trunk/src
Visibility: Public.
Description

Add DiscardableMemoryAllocator to minimize use of FDs with discardable memory.
Discardable memory currently has the limitation on Android (at least) that each
allocation is backed by a file (descriptor), making it a limited resource.
DiscardableMemoryAllocator provides an efficient way to create instances of
DiscardableMemory. Note that, just like regular DiscardableMemory, the
DiscardableMemoryAllocator should not be used for small allocations since it
works with page granularity (4 KBytes).
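A rough sketch of what client code could look like with this allocator (the
header name and constructor argument are assumptions; Allocate() returning a
scoped_ptr<DiscardableMemory> is the interface discussed later in this review):

  // Hypothetical usage sketch, not the exact API of the patch.
  #include <string.h>

  #include "base/memory/discardable_memory.h"
  #include "base/memory/discardable_memory_allocator.h"  // Assumed header.
  #include "base/memory/scoped_ptr.h"

  void UseDiscardableAllocator() {
    // One allocator hands out many DiscardableMemory chunks while consuming
    // only a handful of ashmem file descriptors internally.
    base::DiscardableMemoryAllocator allocator("sketch-allocator");

    // Sizes are rounded up to page granularity internally, so small
    // allocations are as wasteful here as with plain DiscardableMemory.
    scoped_ptr<base::DiscardableMemory> memory = allocator.Allocate(64 * 1024);
    if (!memory)
      return;  // Out of address space or ashmem resources.

    // The chunk starts out locked (pinned). Use it, then unlock it so the
    // kernel may reclaim its pages under memory pressure; a later Lock()
    // reports whether the content survived.
    memset(memory->Memory(), 0, 64 * 1024);
    memory->Unlock();
  }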
On Android, DiscardableMemory creation can now happen in two different ways:
- Each client-requested allocation is backed by an individual ashmem region.
This makes it possible to delete ashmem regions individually by closing their
ashmem file descriptor. This is the default path, taken when file descriptor
usage allows it.
- Allocations are performed by the global allocator when file descriptor usage
gets too high. This still allows unpinning but does not allow deleting
individual regions (i.e. immediately releasing the physical pages backing
them): the physical memory backing an ashmem region is only freed when its
file descriptor is closed, and madvise(MADV_REMOVE)/
fallocate(FALLOC_FL_PUNCH_HOLE) unfortunately don't work with ashmem.
A sketch of this two-path decision follows the list.
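In other words, the Android CreateLockedMemory() path picks between the two
strategies based on how many ashmem FDs are already in use. A small standalone
sketch of that policy (the helper names and the exact threshold are
illustrative; the review below mentions a soft limit of 128 ashmem
descriptors):

  #include <cstddef>

  namespace {

  // Soft limit mentioned later in this review (discardable_memory_android.cc);
  // the real code may derive it from the process FD limit instead.
  const std::size_t kAshmemFdSoftLimit = 128;

  enum AllocationPath {
    // One ashmem region (one FD) per allocation; individually deletable by
    // closing the descriptor.
    ALLOCATE_DEDICATED_ASHMEM_REGION,
    // Chunk carved out of a large shared region owned by the global
    // allocator; unpinnable, but pages are only freed when the whole region
    // is closed.
    ALLOCATE_FROM_SHARED_REGION,
  };

  AllocationPath ChooseAllocationPath(std::size_t ashmem_fds_in_use) {
    return ashmem_fds_in_use < kAshmemFdSoftLimit
               ? ALLOCATE_DEDICATED_ASHMEM_REGION
               : ALLOCATE_FROM_SHARED_REGION;
  }

  }  // namespace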
When file descriptor usage gets too high (which can be observed on
images.google.com, for instance), the allocator takes over and creates as few
ashmem regions (i.e. consumes as few FDs) as possible:
- Large (32 MBytes) ashmem regions get lazily created. This size might need
some tuning/runtime adjustment as a follow-up.
- Allocation first tries to reuse previously freed chunks (if any) by doing a
closest-size match (implemented with multiset<>::lower_bound() in O(log n); a
sketch follows this description). If no free chunk can be reused, a new chunk
in the ashmem region is returned to the user. If the ashmem region is not big
enough, a new region is created.
- DiscardableMemory instance deletion works by marking the chunk used by the
instance as discardable (i.e. ashmem_region_unpin()) and passing the allocator
the metadata (e.g. chunk size) that lets it reuse the chunk efficiently later.
Note that, as mentioned above, free chunks are marked as discardable and can
therefore be reclaimed by the kernel under memory pressure. Since they are
unfortunately not released immediately, the allocator still tries to minimize
fragmentation (to avoid causing memory pressure) by merging contiguous free
chunks when possible and by splitting reused free chunks that are too large
(which would otherwise cause fragmentation).
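Below is a compilable sketch of the closest-size-match reuse and chunk
splitting described above, assuming a simplified FreeChunk and leaving out the
pinning and the address-based merging bookkeeping (names and the split
threshold are illustrative):

  #include <cstddef>
  #include <set>

  namespace {

  const std::size_t kPageSize = 4096;
  // Reused chunks larger than the request by more than this get split; the
  // actual threshold in the patch may differ.
  const std::size_t kMaxWasteBytes = 4 * kPageSize;

  struct FreeChunk {
    FreeChunk(void* start, std::size_t size) : start(start), size(size) {}
    void* start;
    std::size_t size;
    // Ordered by size so that lower_bound() finds the closest match in
    // O(log n).
    bool operator<(const FreeChunk& other) const { return size < other.size; }
  };

  typedef std::multiset<FreeChunk> FreeChunkSet;

  // Returns the start of a reused chunk of at least |requested_size| bytes
  // (page-aligned by the caller), or NULL if no free chunk is large enough.
  // Oversized chunks are split and their tail is put back in the set.
  void* ReuseFreeChunk(FreeChunkSet* free_chunks, std::size_t requested_size) {
    FreeChunkSet::iterator it =
        free_chunks->lower_bound(FreeChunk(NULL, requested_size));
    if (it == free_chunks->end())
      return NULL;
    FreeChunk chunk = *it;
    free_chunks->erase(it);
    const std::size_t waste = chunk.size - requested_size;
    if (waste > kMaxWasteBytes) {
      // Split: hand the caller the front of the chunk, keep the tail free.
      void* const tail = static_cast<char*>(chunk.start) + requested_size;
      free_chunks->insert(FreeChunk(tail, waste));
    }
    return chunk.start;
  }

  }  // namespace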
BUG=299828
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=239763
Patch Set 1
Patch Set 2
Patch Set 3
Patch Set 4
Patch Set 5: Fix discardable_memory_allocator.cc
Total comments: 4
Patch Set 6: Fix bug when unlocked free chunks are recycled + add thread-safety
Patch Set 7: Address Przemek's comments
Patch Set 8: Fix typo
Patch Set 9: s/Chunk/FreeChunk and make PageAlign more readable
Patch Set 10: Shorten critical section + add unit test
Patch Set 11: Rework threading to allow destruction on a different thread as needed by clients
Patch Set 12: Add new unit test
Patch Set 13: Close ashmem region when it's completely free
Patch Set 14: Rebase
Patch Set 15: Cleanup include list
Patch Set 16: Split free chunks that are too large
Patch Set 17: Factor out PageAlign() + make size() return the aligned size
Patch Set 18: Factor out contiguous chunks merging
Patch Set 19: Reset |last_allocated_chunk_| in CloseAshmemRegion()
Total comments: 23
Patch Set 20: Address reviewers' comments
Total comments: 16
Patch Set 21: Make allocator private + address Egor's comments
Patch Set 22: s/actual_size/recycled_chunk_size
Total comments: 28
Patch Set 23: Address Egor's comments + fix Clang warning (note that the change doesn't fix a bug)
Total comments: 3
Patch Set 24: Address Egor's comments
Patch Set 25: s/make_scoped_ptr<DiscardableMemory>(NULL)/scoped_ptr<DiscardableMemory>() :)
Patch Set 26: Address Egor's offline comment (free chunks merging is not needed on the chunk splitting path)
Total comments: 6
Patch Set 27: Address Egor's comments
Patch Set 28: s/unlink/remove
Patch Set 29: Add merging comments
Total comments: 13
Patch Set 30: Address William's comments
Total comments: 22
Patch Set 31: Address William's comments
Patch Set 32: Move free chunk splitting to its own function
Patch Set 33: Fix stale comment
Patch Set 34: Fix comment
Total comments: 8
Patch Set 35: Add William's unit test
Patch Set 36: Remove FreeChunk::previous_chunk
Patch Set 37: Revert patch set 36
Patch Set 38: Address the rest of William's comments (made in patch set 34)
Total comments: 40
Patch Set 39: Address William's comments
Total comments: 36
Patch Set 40: s/this/|this|
Patch Set 41: Add back comment that got lost in patch set 39
Patch Set 42: Cleanup includes + comment
Patch Set 43: Address William's comments
Patch Set 44: Rebase
Total comments: 14
Patch Set 45: Address William's comments
Patch Set 46: Fix comment
Total comments: 1
Patch Set 47: Rebase
Patch Set 48: Fix Clang build
Total comments: 1
Messages
Total messages: 61 (0 generated)
I'm sending this for review although it currently depends on https://chromiumcodereview.appspot.com/24988003/. Grace/Feng, this doesn't necessarily mean that we will use this allocator for the resource cache (vs. the approach that you suggested), but I had started this allocator on Friday and thought that it might be nice to use it at least to replace the existing use of DiscardableMemory where we can reach the file descriptor limit (e.g. discardable bitmaps). I'm currently working on a patch to do that and will evaluate the performance/memory cost/improvements. Ian, this shouldn't conflict with your current work.
Nice! Two small remarks below. https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... File base/memory/discardable_memory_allocator_android.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... base/memory/discardable_memory_allocator_android.cc:114: scoped_ptr<DiscardableMemory> Allocate(size_t size) { nit: comment or DCHECK that |size| needs to be page aligned here. https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... File base/memory/discardable_memory_allocator_unittest.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... base/memory/discardable_memory_allocator_unittest.cc:63: EXPECT_EQ(0U, reinterpret_cast<uint64_t>(memory->Memory()) % 4096); Shouldn't we check memory->size() here?
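For reference, the DCHECK suggested here would be a one-liner along these lines (illustrative; the reply below notes that the patch ended up adding a comment instead because |page_size_| is not in scope at that point):

  // At the top of Allocate(); kPageSize is a stand-in constant.
  const size_t kPageSize = 4096;
  DCHECK_EQ(0u, size % kPageSize);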
Thanks Przemek! FYI, I also fixed a bug when unlocked free chunks are being recycled and also added support for thread-safety. This is needed to deploy the allocator to Blink (by using it in webkitplatformsupport_child_impl.cc). I also did some quick memory measurements with and without the allocator for discardable bitmaps and did see a few MBytes saved on images.google.com! (I will run them more seriously shortly). Blink requests not less than 300 pieces of discardable memory on images.google.com (while scrolling a bit) so the allocator helps a lot given the current soft limit of 128 file descriptors in discardable_memory_android.cc (making us fall back to heap allocations). https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... File base/memory/discardable_memory_allocator_android.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... base/memory/discardable_memory_allocator_android.cc:114: scoped_ptr<DiscardableMemory> Allocate(size_t size) { On 2013/10/02 10:31:07, ppi wrote: > nit: comment or DCHECK that |size| needs to be page aligned here. I added a comment since I don't have |page_size_| around here (although I would have preferred the DCHECK). https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... File base/memory/discardable_memory_allocator_unittest.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... base/memory/discardable_memory_allocator_unittest.cc:63: EXPECT_EQ(0U, reinterpret_cast<uint64_t>(memory->Memory()) % 4096); On 2013/10/02 10:31:07, ppi wrote: > Shouldn't we check memory->size() here? Hmm, we actually want to keep size alignment as an implementation detail. memory->size() should return the size that was initially requested by the client as opposed to the aligned size. Good catch!
On 2013/10/02 13:51:29, Philippe wrote: > Thanks Przemek! > > FYI, I also fixed a bug when unlocked free chunks are being recycled and also > added support for thread-safety. This is needed to deploy the allocator to Blink > (by using it in webkitplatformsupport_child_impl.cc). > > I also did some quick memory measurements with and without the allocator for > discardable bitmaps and did see a few MBytes saved on images.google.com! (I will > run them more seriously shortly). Blink requests not less than 300 pieces of > discardable memory on http://images.google.com (while scrolling a bit) so the allocator > helps a lot given the current soft limit of 128 file descriptors in > discardable_memory_android.cc (making us fall back to heap allocations). > > https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... > File base/memory/discardable_memory_allocator_android.cc (right): > > https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... > base/memory/discardable_memory_allocator_android.cc:114: > scoped_ptr<DiscardableMemory> Allocate(size_t size) { > On 2013/10/02 10:31:07, ppi wrote: > > nit: comment or DCHECK that |size| needs to be page aligned here. > > I added a comment since I don't have |page_size_| around here (although I would > have preferred the DCHECK). > > https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... > File base/memory/discardable_memory_allocator_unittest.cc (right): > > https://chromiumcodereview.appspot.com/25293002/diff/14001/base/memory/discar... > base/memory/discardable_memory_allocator_unittest.cc:63: EXPECT_EQ(0U, > reinterpret_cast<uint64_t>(memory->Memory()) % 4096); > On 2013/10/02 10:31:07, ppi wrote: > > Shouldn't we check memory->size() here? > > Hmm, we actually want to keep size alignment as an implementation detail. > memory->size() should return the size that was initially requested by the client > as opposed to the aligned size. Good catch! Will and Avi, this should now be in a reviewable state. Thanks
This is well out of my expertise. It looks plausible but I don't see myself as a good reviewer. It's very unfortunate that this kind of machinery can't be rolled into the Android version of DiscardableMemory. Having a simple API for callers would be great rather than DiscardableMemoryAllocator being a layer atop DiscardableMemory. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator.cc:1: // Copyright (c) 2013 The Chromium Authors. All rights reserved. No (c) in new files.
On 2013/10/21 15:17:52, Avi wrote: > This is well out of my expertise. It looks plausible but I don't see myself as a > good reviewer. > > It's very unfortunate that this kind of machinery can't be rolled into the > Android version of DiscardableMemory. Having a simple API for callers would be > great rather than DiscardableMemoryAllocator being a layer atop > DiscardableMemory. > > https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... > File base/memory/discardable_memory_allocator.cc (right): > > https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... > base/memory/discardable_memory_allocator.cc:1: // Copyright (c) 2013 The > Chromium Authors. All rights reserved. > No (c) in new files. Thanks Avi. I removed you from the reviewers list (feel free to add yourself back if you change your mind) and added digit@. The allocator layer on top of DiscardableMemory could disappear by making the allocator global and private to the Android's DiscardableMemory implementation. I would not be fully against it since the allocator already has to be thread-safe due to client's use cases although I would still prefer avoiding global state. In practice I don't expect this layer to be a pain for clients given the very small amount of clients using DiscardableMemory. What do you think?
Who is using DiscardableMemory? Would all of them be using DiscardableMemoryAllocator due to their needs? If it's common, then I'd argue that it makes sense to have a lazy singleton deep in the implementation, and keep the interface simple. Re global state... that is the only thing that doesn't make me happy about it. (BTW, as I'm not a reviewer, I don't expect you to necessarily follow my thoughts; just sharing.)
On 2013/10/21 15:36:25, Avi wrote: > Who is using DiscardableMemory? Would all of them be using > DiscardableMemoryAllocator due to their needs? > > If it's common, then I'd argue that it makes sense to have a lazy singleton deep > in the implementation, and keep the interface simple. Re global state... that is > the only thing that doesn't make me happy about it. > > (BTW, as I'm not a reviewer, I don't expect you to necessarily follow my > thoughts; just sharing.) Yeah thanks for sharing :) Currently the only clients are the skia and webkit platform implementation layers as it can be seen in: https://chromiumcodereview.appspot.com/24988003/ I agree that interfaces should be kept simple. I don't expect the DiscardableMemoryAllocator interface to be super hard to use though :) But yeah it is still an extra interface even though it's small and simple IMO. Let's see what the other reviewers (e.g. Will) think about this trade-off (small extra interface vs. hidden global allocator).
https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:52: // they were constructed on even though they can be deleted on any thread. Why allow them to be deleted on any thread? That's generally confusing, is this necessary for some performance reason? https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:64: size_t size() const { return size_; } The rest of this interface is abstract. Why have this be concrete? I'd prefer it being abstract for consistency. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator.h:33: class BASE_EXPORT DiscardableMemoryAllocator { Why isn't this BASE_EXPORT_PRIVATE and in base::internal?
Thanks William for the initial pass! I will address your comments tomorrow but I wanted to do at least one iteration today on my side for the thread-safety question. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:52: // they were constructed on even though they can be deleted on any thread. On 2013/10/21 18:30:40, willchan wrote: > Why allow them to be deleted on any thread? That's generally confusing, is this > necessary for some performance reason? I agree that this is less than ideal. My initial design was much stricter with regards to threading but it turns out that the image decoding cache can be purged on any thread by design (which made my thread checkers fail). The DiscardableMemory instances get created in the compositor worker pool. The worker pool's threads are now backed by base::SimpleThread for performance reasons (the pool has a single message loop shared across all threads to automatically balance the load across them). This means that they don't have a dedicated message loop unfortunately which doesn't allow us to do message passing as I would have wanted and forces us to use locks and provide support for destruction on any thread. I initially wanted to make the cc worker pool implement its own SingleThreadTaskRunner to enable message passing but David Reveman and I finally agreed that requiring a message loop for the allocator was probably overkill (not to mention that the task runner implementation might not be doable/desirable in practice). Does that make sense? What do you think?
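Since those worker threads have no message loop, the patch protects the allocator state with a lock rather than message passing. A minimal, self-contained sketch of that pattern (class and member names are made up for illustration):

  #include <cstddef>

  #include "base/synchronization/lock.h"

  // Every public entry point takes the allocator lock so that allocation and
  // chunk destruction can happen concurrently from threads without a message
  // loop. The critical sections are kept short (see patch set 10).
  class ThreadSafeAllocatorSketch {
   public:
    ThreadSafeAllocatorSketch() : bytes_allocated_(0) {}

    void* Allocate(std::size_t size) {
      base::AutoLock auto_lock(lock_);
      bytes_allocated_ += size;
      return NULL;  // Real code would hand out an ashmem-backed chunk here.
    }

    void Deallocate(std::size_t size) {
      base::AutoLock auto_lock(lock_);  // Deletion can happen on any thread.
      bytes_allocated_ -= size;
    }

   private:
    base::Lock lock_;
    std::size_t bytes_allocated_;
  };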
BTW, https://codereview.chromium.org/17106004/, which is also hacking on DiscardableMemory, relies on a singleton. Can you talk to David? Perhaps combine forces?
https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:52: // they were constructed on even though they can be deleted on any thread. On 2013/10/21 19:55:40, pliard wrote: > On 2013/10/21 18:30:40, willchan wrote: > > Why allow them to be deleted on any thread? That's generally confusing, is > this > > necessary for some performance reason? > > I agree that this is less than ideal. My initial design was much stricter with > regards to threading but it turns out that the image decoding cache can be > purged on any thread by design (which made my thread checkers fail). The > DiscardableMemory instances get created in the compositor worker pool. The > worker pool's threads are now backed by base::SimpleThread for performance > reasons (the pool has a single message loop shared across all threads to > automatically balance the load across them). This means that they don't have a > dedicated message loop unfortunately which doesn't allow us to do message > passing as I would have wanted and forces us to use locks and provide support > for destruction on any thread. > I initially wanted to make the cc worker pool implement its own > SingleThreadTaskRunner to enable message passing but David Reveman and I finally > agreed that requiring a message loop for the allocator was probably overkill > (not to mention that the task runner implementation might not be > doable/desirable in practice). > > Does that make sense? What do you think? The multi-threaded raster system needs to be able to access discardable memory objects from multiple threads right now. The raster system will make sure there are no races, though. But having to use DM instances on the same thread they were created on complicates things quite a bit and might have a significant performance impact. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:93: DiscardableMemory(size_t size); nit: explicit
https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:52: // they were constructed on even though they can be deleted on any thread. On 2013/10/21 19:55:40, pliard wrote: > On 2013/10/21 18:30:40, willchan wrote: > > Why allow them to be deleted on any thread? That's generally confusing, is > this > > necessary for some performance reason? > > I agree that this is less than ideal. My initial design was much stricter with > regards to threading but it turns out that the image decoding cache can be > purged on any thread by design (which made my thread checkers fail). The > DiscardableMemory instances get created in the compositor worker pool. The > worker pool's threads are now backed by base::SimpleThread for performance > reasons (the pool has a single message loop shared across all threads to > automatically balance the load across them). This means that they don't have a > dedicated message loop unfortunately which doesn't allow us to do message > passing as I would have wanted and forces us to use locks and provide support > for destruction on any thread. > I initially wanted to make the cc worker pool implement its own > SingleThreadTaskRunner to enable message passing but David Reveman and I finally > agreed that requiring a message loop for the allocator was probably overkill > (not to mention that the task runner implementation might not be > doable/desirable in practice). > > Does that make sense? What do you think? I think saying that the class isn't thread safe is exactly what you want to say. That implies that concurrent access is disallowed. Thread unsafe doesn't imply the stronger guarantee that everything always happens on the same thread. Is there any need to assert that the other operations always operate on the same thread? What I'm thinking is what if they used a SequencedTaskRunner so no operation was ever concurrent, but could run on a different thread. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator_unittest.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator_unittest.cc:5: #include "build/build_config.h" Wrong, ordering. discardable_memory_allocator.h should be first here. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_android.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_android.cc:98: size_t PageAlign(size_t size) { Why isn't this in the anonymous namespace? https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_android.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_android.h:17: size_t PageAlign(size_t size); Why is this here? Shouldn't it be in the .cc file in an anonymous namespace? https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_android.h:19: bool CreateAshmemRegion(const char* name, size_t size, int* fd, void** address); Actually, where are these guys used too? Which of these actually needs to be in base::internal? I don't see anything used in the unittest. 
Is this shared by another implementation? * If they're used by a test, then BASE_EXPORT_PRIVATE them, or the component build will break. * If they're shared by impl files, then base::internal is fine. * If they're only used in one .cc file, put them in an anonymous namespace there.
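For the "shared by impl files" case described above, the shared header would look roughly like this (a sketch; the function names come from the patch, but include guards and the final namespace placement may differ):

  // base/memory/discardable_memory_android.h (sketch, guards omitted).
  // Shared only by discardable_memory_android.cc and
  // discardable_memory_allocator_android.cc, hence base::internal and no
  // BASE_EXPORT_PRIVATE (which would only be needed if a test in another
  // component called these helpers directly).
  #include <stddef.h>

  namespace base {
  namespace internal {

  size_t PageAlign(size_t size);

  bool CreateAshmemRegion(const char* name, size_t size, int* fd,
                          void** address);

  }  // namespace internal
  }  // namespace base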
Thanks guys! Ideally I would like us to answer Avi's initial question which I elaborated in my reply to Will's comment in discardable_memory_allocator.h before we move forward. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:52: // they were constructed on even though they can be deleted on any thread. On 2013/10/21 21:13:02, willchan wrote: > On 2013/10/21 19:55:40, pliard wrote: > > On 2013/10/21 18:30:40, willchan wrote: > > > Why allow them to be deleted on any thread? That's generally confusing, is > > this > > > necessary for some performance reason? > > > > I agree that this is less than ideal. My initial design was much stricter with > > regards to threading but it turns out that the image decoding cache can be > > purged on any thread by design (which made my thread checkers fail). The > > DiscardableMemory instances get created in the compositor worker pool. The > > worker pool's threads are now backed by base::SimpleThread for performance > > reasons (the pool has a single message loop shared across all threads to > > automatically balance the load across them). This means that they don't have a > > dedicated message loop unfortunately which doesn't allow us to do message > > passing as I would have wanted and forces us to use locks and provide support > > for destruction on any thread. > > I initially wanted to make the cc worker pool implement its own > > SingleThreadTaskRunner to enable message passing but David Reveman and I > finally > > agreed that requiring a message loop for the allocator was probably overkill > > (not to mention that the task runner implementation might not be > > doable/desirable in practice). > > > > Does that make sense? What do you think? > > I think saying that the class isn't thread safe is exactly what you want to say. > That implies that concurrent access is disallowed. Thread unsafe doesn't imply > the stronger guarantee that everything always happens on the same thread. Is > there any need to assert that the other operations always operate on the same > thread? What I'm thinking is what if they used a SequencedTaskRunner so no > operation was ever concurrent, but could run on a different thread. Right. Not thread-safe is what we want here. The client operates on various threads but guarantees that the operations on DiscardableMemory instances are properly sequenced. I updated the DiscardableMemoryChunk implementation accordingly. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:64: size_t size() const { return size_; } On 2013/10/21 18:30:40, willchan wrote: > The rest of this interface is abstract. Why have this be concrete? I'd prefer it > being abstract for consistency. Done. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory.h:93: DiscardableMemory(size_t size); On 2013/10/21 21:11:06, David Reveman wrote: > nit: explicit Yeah, I'm removing this anyway since size() is now abstract. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator.cc:1: // Copyright (c) 2013 The Chromium Authors. All rights reserved. 
On 2013/10/21 15:17:52, Avi wrote: > No (c) in new files. Didn't know about it, thanks. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator.h:33: class BASE_EXPORT DiscardableMemoryAllocator { On 2013/10/21 18:30:40, willchan wrote: > Why isn't this BASE_EXPORT_PRIVATE and in base::internal? I actually intend(ed) to make this public. I want clients to either directly instantiate DiscardableMemory through the static factory method if they only want a single "long lived" DiscardableMemoryInstance or better instantiate this allocator and create DiscardableMemory instances with it. But this brings up the following question Avi asked which I would like us to discuss here: could we avoid this extra interface and make the allocator private to the Android's discardable memory implementation? And more importantly, could we combine our efforts/be consistent with the discardable memory emulator? The drawbacks of the current design (vs. a global allocator implementation private to the Android DiscardableMemory implementation) are: - This extra interface (although it is quite trivial) makes discardable memory slightly harder to create for clients. In particular the client must ensure that the allocator outlives the returned DiscardableMemory instances. On the other hand there are only two clients at the moment in the Blink and Skia platform implementation layers. - This interface allows multiple allocator instances to exist. This could potentially increase fragmentation (don't panic we are talking about fragmentation in discardable memory). On the other hand clients can have very different allocation patterns and having a single allocator taking care of those allocations could also not be optimal (see the benefits below). The benefits are: - No global state. - Explicit construction and destruction of the allocator. - Flexibility in the sense that the upper layers can choose how many allocators they want to create/share (they could also choose to have a single global allocator). This would allow us in particular to minimize lock contention if a single global allocator proves to suffer from that. David, I hadn't envisioned until now what you actually meant by looking into consolidating the provider code and the allocator code but I realize now that you may have meant to modify/implement the allocator interface (that would allow you to get rid of the global state in particular I suppose). Would there be similar benefits and drawbacks for the provider? Does the provider absolutely need exactly (and not more than) one instance? If the provider implements the allocator interface then discardable memory emulation would move to the allocation layer which means that we should not allow discardable memory to be created in a way other than through the allocator (I would be fine with it). What do you guys think? https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator_unittest.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator_unittest.cc:5: #include "build/build_config.h" On 2013/10/21 21:13:02, willchan wrote: > Wrong, ordering. discardable_memory_allocator.h should be first here. 
Oops :) https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_android.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_android.cc:98: size_t PageAlign(size_t size) { On 2013/10/21 21:13:02, willchan wrote: > Why isn't this in the anonymous namespace? This is also used by the allocator now. https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_android.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_android.h:17: size_t PageAlign(size_t size); On 2013/10/21 21:13:02, willchan wrote: > Why is this here? Shouldn't it be in the .cc file in an anonymous namespace? I updated this top-level comment to make it clear that this is used both by discardable_memory_android.cc and discardable_memory_allocator_android.cc. This might change though depending on whether the emulator chooses to implement the allocator interface (see my reply to your comment in discardable_memory_allocator.h). https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_android.h:19: bool CreateAshmemRegion(const char* name, size_t size, int* fd, void** address); On 2013/10/21 21:13:02, willchan wrote: > Actually, where are these guys used too? Which of these actually needs to be in > base::internal? I don't see anything used in the unittest. Is this shared by > another implementation? > > * If they're used by a test, then BASE_EXPORT_PRIVATE them, or the component > build will break. > * If they're shared by impl files, then base::internal is fine. > * If they're only used in one .cc file, put them in an anonymous namespace > there. They are indeed shared by impl files. I updated the top-level comment.
https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator.h:33: class BASE_EXPORT DiscardableMemoryAllocator { On 2013/10/22 09:32:52, Philippe wrote: > On 2013/10/21 18:30:40, willchan wrote: > > Why isn't this BASE_EXPORT_PRIVATE and in base::internal? > > I actually intend(ed) to make this public. I want clients to either directly > instantiate DiscardableMemory through the static factory method if they only > want a single "long lived" DiscardableMemoryInstance or better instantiate this > allocator and create DiscardableMemory instances with it. > > But this brings up the following question Avi asked which I would like us to > discuss here: could we avoid this extra interface and make the allocator private > to the Android's discardable memory implementation? And more importantly, could > we combine our efforts/be consistent with the discardable memory emulator? > > The drawbacks of the current design (vs. a global allocator implementation > private to the Android DiscardableMemory implementation) are: > - This extra interface (although it is quite trivial) makes discardable memory > slightly harder to create for clients. In particular the client must ensure that > the allocator outlives the returned DiscardableMemory instances. On the other > hand there are only two clients at the moment in the Blink and Skia platform > implementation layers. > - This interface allows multiple allocator instances to exist. This could > potentially increase fragmentation (don't panic we are talking about > fragmentation in discardable memory). On the other hand clients can have very > different allocation patterns and having a single allocator taking care of those > allocations could also not be optimal (see the benefits below). > > The benefits are: > - No global state. > - Explicit construction and destruction of the allocator. > - Flexibility in the sense that the upper layers can choose how many allocators > they want to create/share (they could also choose to have a single global > allocator). This would allow us in particular to minimize lock contention if a > single global allocator proves to suffer from that. > > David, I hadn't envisioned until now what you actually meant by looking into > consolidating the provider code and the allocator code but I realize now that > you may have meant to modify/implement the allocator interface (that would allow > you to get rid of the global state in particular I suppose). Would there be > similar benefits and drawbacks for the provider? Does the provider absolutely > need exactly (and not more than) one instance? If the provider implements the > allocator interface then discardable memory emulation would move to the > allocation layer which means that we should not allow discardable memory to be > created in a way other than through the allocator (I would be fine with it). > > What do you guys think? My only suggestion is that we keep things as simple as possible for now. A global allocator that is private to the DM implementation seems less complicated to me. A new allocator interface that clients are responsible for accessing in a thread safe manner might turn out useful but it's not clear to me that this is needed. I might implement the provider for emulated DM using this allocator interface if it turns out to be a good fit.
On 2013/10/22 18:09:05, David Reveman wrote: > https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... > File base/memory/discardable_memory_allocator.h (right): > > https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... > base/memory/discardable_memory_allocator.h:33: class BASE_EXPORT > DiscardableMemoryAllocator { > On 2013/10/22 09:32:52, Philippe wrote: > > On 2013/10/21 18:30:40, willchan wrote: > > > Why isn't this BASE_EXPORT_PRIVATE and in base::internal? > > > > I actually intend(ed) to make this public. I want clients to either directly > > instantiate DiscardableMemory through the static factory method if they only > > want a single "long lived" DiscardableMemoryInstance or better instantiate > this > > allocator and create DiscardableMemory instances with it. > > > > But this brings up the following question Avi asked which I would like us to > > discuss here: could we avoid this extra interface and make the allocator > private > > to the Android's discardable memory implementation? And more importantly, > could > > we combine our efforts/be consistent with the discardable memory emulator? > > > > The drawbacks of the current design (vs. a global allocator implementation > > private to the Android DiscardableMemory implementation) are: > > - This extra interface (although it is quite trivial) makes discardable memory > > slightly harder to create for clients. In particular the client must ensure > that > > the allocator outlives the returned DiscardableMemory instances. On the other > > hand there are only two clients at the moment in the Blink and Skia platform > > implementation layers. > > - This interface allows multiple allocator instances to exist. This could > > potentially increase fragmentation (don't panic we are talking about > > fragmentation in discardable memory). On the other hand clients can have very > > different allocation patterns and having a single allocator taking care of > those > > allocations could also not be optimal (see the benefits below). > > > > The benefits are: > > - No global state. > > - Explicit construction and destruction of the allocator. > > - Flexibility in the sense that the upper layers can choose how many > allocators > > they want to create/share (they could also choose to have a single global > > allocator). This would allow us in particular to minimize lock contention if a > > single global allocator proves to suffer from that. > > > > David, I hadn't envisioned until now what you actually meant by looking into > > consolidating the provider code and the allocator code but I realize now that > > you may have meant to modify/implement the allocator interface (that would > allow > > you to get rid of the global state in particular I suppose). Would there be > > similar benefits and drawbacks for the provider? Does the provider absolutely > > need exactly (and not more than) one instance? If the provider implements the > > allocator interface then discardable memory emulation would move to the > > allocation layer which means that we should not allow discardable memory to be > > created in a way other than through the allocator (I would be fine with it). > > > > What do you guys think? > > My only suggestion is that we keep things as simple as possible for now. A > global allocator that is private to the DM implementation seems less complicated > to me. 
> A new allocator interface that clients are responsible for accessing in a > thread safe manner might turn out useful but it's not clear to me that this is > needed. > > I might implement the provider for emulated DM using this allocator interface if > it turns out to be a good fit. +1 for simplicity. The fact that there are very few users of DiscardableMemory suggests that there is no need for extra flexibility. Generally, adding code is easier than removing it. So let's not add anything if it is not absolutely necessary. For DiscardableMemory users, allocator lifetime is probably the last thing they would like to think about.
Philippe, I also looked at the code some. As usual, it is a great piece of engineering! I did only skim through it, but while doing so I recorded some thoughts on how to make it easier to understand or maybe even less advanced. Also could not resist a few nits :) See below. https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:28: class DiscardableMemoryChunk : public DiscardableMemory { A top-level comment would be good since it is not quite possible to understand what the code does without reading through every line of this file. An overview could say how the data structures are organized, what the limitations of ashmem are (FDs, having to provide size in advance, deleting by FD, not by sub-region), the basic algorithm of the allocator. Then later each class should be preceded with a short description on what it does. As for DiscardableMemoryChunk, I don't like the name, since its specifics are not in being a chunk, but rather in being ashmem-based. I also do not like that it inherits from DiscardableMemory copying all its fields, overriding non-virtual methods, smells like disaster. What we want is android-specific implementation of DiscardableMemory, if would be more natural to make different implementations of DiscardableMemory in platform-specific files. https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:116: if (!OpenAshmemRegion()) did you mean: if (AshmemRegionClosed() && !OpenAshmemRegion()) return ... ? https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:164: // Tries to recycle a previously freed chunk by doing a closest size match. nit: A synonym to "recycle" would help in the comment, for clarity. Reuse? https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:175: if (fragmentation_kbytes >= 16) { the code has quite a few heuristics, how about making them constants, moving up and explaining why the numbers are like these? https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:199: enum ContiguousChunksMergingFlags { OK, splitting chunks is probably a good idea, but it is not as clear about merging. Merging certainly complicates code, but is there evidence that without it the memory gets too fragmented? If there is indeed evidence, then I am scared, it would suggest we should think twice on the allocation algorithm. https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:349: // Use the actual allocator only once we past a certain number of open I find this comment a bit confusing. It requires some extra mental work to understand that "Not using the allocator" is a synonym of "Using discardable memory". Frankly, I understood it only after looking at the code. How about declaring that there are two options for allocation: 1. Allocating each region to a separate FD, this allows to delete regions individually 2. 
Allocation via the global allocator, this allows to limit global usage of the file descriptor resource, allows un-pinning, but does not allow deleting individual regions https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:371: std::max(kDefaultAshmemRegionSize, aligned_size), name_.c_str())); The "Default" part in the name is confusing, should be replaced with "Minimum"? https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... File base/memory/discardable_memory_android.cc (right): https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_android.cc:103: size_t PageAlign(size_t size) { nit: naming: this is not just aligning to page boundary, but rather AlignToNextPage
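The rename suggested in the last comment, with the usual power-of-two rounding, would be something like this (a sketch; the patch may use the runtime page size rather than a hard-coded 4096):

  #include <stddef.h>

  namespace {

  const size_t kPageSize = 4096;  // Assumed constant, must be a power of two.

  // Rounds |size| up to the next multiple of the page size.
  size_t AlignToNextPage(size_t size) {
    return (size + kPageSize - 1) & ~(kPageSize - 1);
  }

  }  // namespace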
https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... File base/memory/discardable_memory_allocator.h (right): https://chromiumcodereview.appspot.com/25293002/diff/381001/base/memory/disca... base/memory/discardable_memory_allocator.h:33: class BASE_EXPORT DiscardableMemoryAllocator { On 2013/10/22 18:09:05, David Reveman wrote: > On 2013/10/22 09:32:52, Philippe wrote: > > On 2013/10/21 18:30:40, willchan wrote: > > > Why isn't this BASE_EXPORT_PRIVATE and in base::internal? > > > > I actually intend(ed) to make this public. I want clients to either directly > > instantiate DiscardableMemory through the static factory method if they only > > want a single "long lived" DiscardableMemoryInstance or better instantiate > this > > allocator and create DiscardableMemory instances with it. > > > > But this brings up the following question Avi asked which I would like us to > > discuss here: could we avoid this extra interface and make the allocator > private > > to the Android's discardable memory implementation? And more importantly, > could > > we combine our efforts/be consistent with the discardable memory emulator? > > > > The drawbacks of the current design (vs. a global allocator implementation > > private to the Android DiscardableMemory implementation) are: > > - This extra interface (although it is quite trivial) makes discardable memory > > slightly harder to create for clients. In particular the client must ensure > that > > the allocator outlives the returned DiscardableMemory instances. On the other > > hand there are only two clients at the moment in the Blink and Skia platform > > implementation layers. > > - This interface allows multiple allocator instances to exist. This could > > potentially increase fragmentation (don't panic we are talking about > > fragmentation in discardable memory). On the other hand clients can have very > > different allocation patterns and having a single allocator taking care of > those > > allocations could also not be optimal (see the benefits below). > > > > The benefits are: > > - No global state. > > - Explicit construction and destruction of the allocator. > > - Flexibility in the sense that the upper layers can choose how many > allocators > > they want to create/share (they could also choose to have a single global > > allocator). This would allow us in particular to minimize lock contention if a > > single global allocator proves to suffer from that. > > > > David, I hadn't envisioned until now what you actually meant by looking into > > consolidating the provider code and the allocator code but I realize now that > > you may have meant to modify/implement the allocator interface (that would > allow > > you to get rid of the global state in particular I suppose). Would there be > > similar benefits and drawbacks for the provider? Does the provider absolutely > > need exactly (and not more than) one instance? If the provider implements the > > allocator interface then discardable memory emulation would move to the > > allocation layer which means that we should not allow discardable memory to be > > created in a way other than through the allocator (I would be fine with it). > > > > What do you guys think? > > My only suggestion is that we keep things as simple as possible for now. A > global allocator that is private to the DM implementation seems less complicated > to me. 
A new allocator interface that clients are responsible for accessing in a > thread safe manner might turn out useful but it's not clear to me that this is > needed. > > I might implement the provider for emulated DM using this allocator interface if > it turns out to be a good fit. I am also for having a private global allocator.
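A private global allocator inside discardable_memory_android.cc could be wired up with a leaky LazyInstance, roughly like this (a sketch; the wrapper exists only because the allocator's constructor is assumed to take a name, and the header/namespace are assumptions as well):

  #include "base/lazy_instance.h"
  #include "base/memory/discardable_memory_allocator_android.h"  // Assumed.

  namespace {

  // Wrapper so LazyInstance can default-construct an allocator whose
  // constructor (we assume) takes a name.
  struct GlobalAllocator {
    GlobalAllocator() : allocator("chromium-discardable-memory") {}
    base::internal::DiscardableMemoryAllocator allocator;
  };

  // Leaky on purpose: the allocator and its ashmem regions live for the whole
  // process, so no destructor runs at exit.
  base::LazyInstance<GlobalAllocator>::Leaky g_allocator =
      LAZY_INSTANCE_INITIALIZER;

  }  // namespace

  // DiscardableMemory::CreateLockedMemory() would then simply do:
  //   return g_allocator.Get().allocator.Allocate(size);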
Thanks Egor! PTAL :) https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:28: class DiscardableMemoryChunk : public DiscardableMemory { On 2013/10/22 20:14:29, pasko wrote: > A top-level comment would be good since it is not quite possible to understand > what the code does without reading through every line of this file. > > An overview could say how the data structures are organized, what the > limitations of ashmem are (FDs, having to provide size in advance, deleting by > FD, not by sub-region), the basic algorithm of the allocator. > > Then later each class should be preceded with a short description on what it > does. > > > As for DiscardableMemoryChunk, I don't like the name, since its specifics are > not in being a chunk, but rather in being ashmem-based. I also do not like that > it inherits from DiscardableMemory copying all its fields, overriding > non-virtual methods, smells like disaster. What we want is android-specific > implementation of DiscardableMemory, if would be more natural to make different > implementations of DiscardableMemory in platform-specific files. I added a small top-level comment introducing very briefly each class. I'm not sure how to provide more information without repeating myself. If you have specific ideas I would be glad to hear them :) https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:116: if (!OpenAshmemRegion()) On 2013/10/22 20:14:29, pasko wrote: > did you mean: > if (AshmemRegionClosed() && !OpenAshmemRegion()) > return ... > > ? Yeah, it's the same, right? :) https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:164: // Tries to recycle a previously freed chunk by doing a closest size match. On 2013/10/22 20:14:29, pasko wrote: > nit: A synonym to "recycle" would help in the comment, for clarity. Reuse? Done. https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:175: if (fragmentation_kbytes >= 16) { On 2013/10/22 20:14:29, pasko wrote: > the code has quite a few heuristics, how about making them constants, moving up > and explaining why the numbers are like these? I'm not a huge fan generally of externalizing constants used in a single place due to the extra indirection it requires (happening for the reader not at runtime) that usually results in an expensive page fault :) But I did it anyway. https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:199: enum ContiguousChunksMergingFlags { On 2013/10/22 20:14:29, pasko wrote: > OK, splitting chunks is probably a good idea, but it is not as clear about > merging. Merging certainly complicates code, but is there evidence that without > it the memory gets too fragmented? If there is indeed evidence, then I am > scared, it would suggest we should think twice on the allocation algorithm. I did see that merging was happening in production on images.google.com. I started working on adding support for statistics to the allocator (that I would like to show on e.g. memory-internal). 
Since this allocator is more focused on saving memory than speed I wanted to do everything it takes (please let me know if we can do more :)) to avoid fragmentation. I would not like to consume more memory with the allocator than without :) I will try to get some data soon. But I agree that merging complicates things a lot (there might be a simpler way to implement that btw). It requires chaining previous blocks for instance and having this not particularly elegant |free_chunk_for_address_| map. I'm not storing any metadata in the ashmem region itself since I want it to be evictable (without evicting my metadata :)). In a regular heap allocator, the metadata would be located right before the user chunk and things like chaining previous blocks would be trivial (although a lot of other things would not). https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:349: // Use the actual allocator only once we past a certain number of open On 2013/10/22 20:14:29, pasko wrote: > I find this comment a bit confusing. It requires some extra mental work to > understand that "Not using the allocator" is a synonym of "Using discardable > memory". Frankly, I understood it only after looking at the code. > > How about declaring that there are two options for allocation: > 1. Allocating each region to a separate FD, this allows to delete regions > individually > 2. Allocation via the global allocator, this allows to limit global usage of the > file descriptor resource, allows un-pinning, but does not allow deleting > individual regions Thanks. This part is now in discardable_memory_android.cc. https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:371: std::max(kDefaultAshmemRegionSize, aligned_size), name_.c_str())); On 2013/10/22 20:14:29, pasko wrote: > The "Default" part in the name is confusing, should be replaced with "Minimum"? Done.
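A compilable sketch of the |free_chunk_for_address_|-style bookkeeping mentioned above: freeing a chunk and coalescing it with a free right-hand neighbour (the real patch also merges with the previous chunk and keeps the size-ordered multiset in sync):

  #include <cstddef>
  #include <map>

  namespace {

  // Maps a free chunk's start address to its size. A separate size-ordered
  // multiset (not shown) is used for closest-size reuse; both containers must
  // be kept in sync.
  typedef std::map<void*, std::size_t> FreeChunkByAddress;

  // Adds |start|/|size| to |free_chunks|, merging it with the free chunk that
  // starts right after it (if any). Merging with the previous chunk is
  // symmetric and omitted here.
  void AddAndMergeFreeChunk(FreeChunkByAddress* free_chunks,
                            void* start,
                            std::size_t size) {
    void* const end = static_cast<char*>(start) + size;
    FreeChunkByAddress::iterator next_it = free_chunks->find(end);
    if (next_it != free_chunks->end()) {
      // The chunk immediately after the freed one is also free: coalesce.
      size += next_it->second;
      free_chunks->erase(next_it);
    }
    (*free_chunks)[start] = size;
  }

  }  // namespace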
https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:199: enum ContiguousChunksMergingFlags { On 2013/10/23 11:46:56, Philippe wrote: > On 2013/10/22 20:14:29, pasko wrote: > > OK, splitting chunks is probably a good idea, but it is not as clear about > > merging. Merging certainly complicates code, but is there evidence that > without > > it the memory gets too fragmented? If there is indeed evidence, then I am > > scared, it would suggest we should think twice on the allocation algorithm. > > I did see that merging was happening in production on http://images.google.com. I > started working on adding support for statistics to the allocator (that I would > like to show on e.g. memory-internal). Since this allocator is more focused on > saving memory than speed I wanted to do everything it takes (please let me know > if we can do more :)) to avoid fragmentation. I would not like to consume more > memory with the allocator than without :) > > I will try to get some data soon. But I agree that merging complicates things a > lot (there might be a simpler way to implement that btw). It requires chaining > previous blocks for instance and having this not particularly elegant > |free_chunk_for_address_| map. I'm not storing any metadata in the ashmem region > itself since I want it to be evictable (without evicting my metadata :)). In a > regular heap allocator, the metadata would be located right before the user > chunk and things like chaining previous blocks would be trivial (although a lot > of other things would not). For the record Egor and I observed offline that we were committing twice more pages when disabling free chunks merging after a few navigations on the google.com domain (for a total amount of ~4000 discardable memory allocations) so we are both convinced that free chunks merging is not completely useless :)
On 2013/10/24 08:35:52, Philippe wrote: > https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... > File base/memory/discardable_memory_allocator_android.cc (right): > > https://codereview.chromium.org/25293002/diff/571001/base/memory/discardable_... > base/memory/discardable_memory_allocator_android.cc:199: enum > ContiguousChunksMergingFlags { > On 2013/10/23 11:46:56, Philippe wrote: > > On 2013/10/22 20:14:29, pasko wrote: > > > OK, splitting chunks is probably a good idea, but it is not as clear about > > > merging. Merging certainly complicates code, but is there evidence that > > without > > > it the memory gets too fragmented? If there is indeed evidence, then I am > > > scared, it would suggest we should think twice on the allocation algorithm. > > > > I did see that merging was happening in production on > http://images.google.com. I > > started working on adding support for statistics to the allocator (that I > would > > like to show on e.g. memory-internal). Since this allocator is more focused on > > saving memory than speed I wanted to do everything it takes (please let me > know > > if we can do more :)) to avoid fragmentation. I would not like to consume more > > memory with the allocator than without :) > > > > I will try to get some data soon. But I agree that merging complicates things > a > > lot (there might be a simpler way to implement that btw). It requires chaining > > previous blocks for instance and having this not particularly elegant > > |free_chunk_for_address_| map. I'm not storing any metadata in the ashmem > region > > itself since I want it to be evictable (without evicting my metadata :)). In a > > regular heap allocator, the metadata would be located right before the user > > chunk and things like chaining previous blocks would be trivial (although a > lot > > of other things would not). > > For the record Egor and I observed offline that we were committing twice more > pages when disabling free chunks merging after a few navigations on the > http://google.com domain (for a total amount of ~4000 discardable memory allocations) > so we are both convinced that free chunks merging is not completely useless :) Ping? :)
comments beyond chunk splitting and merging logic, just did not look at them yet https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory.h:48: // Thread-safety: DiscardableMemory instances are not thread-safe. nit: It should probably also say somewhere that it returns NULL if out of resources, and clients should be ready to fallback to heap allocations. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:7: #include <cmath> is it still necessary? https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.h:25: // problem by operating on large pieces of memory and returning to the client better wording: '... by allocating large ashmem regions internally and returning smaller chunks to the client. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:45: static_cast<int*>(memory->Memory())[memory->Size() / sizeof(int) - 1] = 1; if you wrote 'char', you would not have had to divide .. Also, the style asks to prefer sizeof(varname) to sizeof(type), elimination solves the preference choice problem. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:87: // The previous ashmem region should have been closed thus it should not be Does this rely on the region having all chunks unlocked? If so, then this precondition may unintentionally get broken later. The allocator is reused between tests, so this depends on the order in which tests are run, since previous tests could have leftover chunks. Frankly, I don't understand how it works: the test AllocateFreeAllocate seems to leave a locked chunk behind. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:162: // allocation. does ashmem guarantee the ranges of virtual addresses after each create? A cleaner test would check that the address does not belong to the previous region. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:171: int GetAllocatorDirtyMemoryInKBytes() { there is a similar function in disk_cache_memory_test.cc, did you consider unifying these two and placing somewhere in base/memory/dirty_memory_posix.cc? https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:194: it += 6; creating a vector of lines just to be able to skip 7 lines? I don't like it also because the kernel might add more fields to report some day, why not just skip until EOF or next range description? 
https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:204: EXPECT_EQ(4, GetAllocatorDirtyMemoryInKBytes()); isn't this subject to race conditions with /proc reporting? https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_android.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_android.h:7: // discardable_memory_allocator_android.cc. Since people will just look into this header as the main starting point to gather knowledge about the DiscardableMemory implementation on Android, I'd like to ask you to put an overview of how DiscardableMemory::CreateLockedMemory() works, basically: 1. Creates a real ashmem region for each instance of returned DiscardableMemory. 2. When approaching the file descriptor limit for ashmem (90% used), starts allocating large regions (like 32MiB) and returns them to the user in chunks. See DiscardableMemory::CreateLockedMemory() in discardable_memory_android.cc and discardable_memory_allocator_android.h.
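For readers following the outline requested above, a hedged sketch of the dispatch is given below. Every helper name (CreatePerInstanceAshmemRegion, AllocateFromGlobalAllocator, GetAshmemFdCount, GetAshmemFdLimit) is a hypothetical stand-in; only the control flow mirrors the two-step description.
== begin code fragment ==
// Hedged sketch of the dispatch described above; the helpers are hypothetical.
#include <cstddef>

class DiscardableMemory;
DiscardableMemory* CreatePerInstanceAshmemRegion(size_t size);  // One FD each.
DiscardableMemory* AllocateFromGlobalAllocator(size_t size);    // Shared 32MiB regions.
size_t GetAshmemFdCount();
size_t GetAshmemFdLimit();

DiscardableMemory* CreateLockedMemorySketch(size_t size) {
  // Below ~90% of the ashmem file descriptor limit, keep the 1:1 mapping so
  // each region can be released immediately by closing its descriptor.
  if (GetAshmemFdCount() * 100 < GetAshmemFdLimit() * 90)
    return CreatePerInstanceAshmemRegion(size);
  // Otherwise carve page-aligned chunks out of large shared regions to stop
  // consuming one descriptor per allocation.
  return AllocateFromGlobalAllocator(size);
}
== end code fragment ==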
Thanks Egor! https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory.h:48: // Thread-safety: DiscardableMemory instances are not thread-safe. On 2013/10/25 15:21:13, pasko wrote: > nit: It should probably also say somewhere that it returns NULL if out of > resources, and clients should be ready to fallback to heap allocations. Good point although David wants to move to a point of reliability where discardable memory allocations can be considered as malloc failures (i.e. fatal): https://code.google.com/p/chromium/issues/detail?id=310231 If you don't mind I will let him address or not your comment :) https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:7: #include <cmath> On 2013/10/25 15:21:13, pasko wrote: > is it still necessary? This is needed for std::max(). https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.h:25: // problem by operating on large pieces of memory and returning to the client On 2013/10/25 15:21:13, pasko wrote: > better wording: '... by allocating large ashmem regions internally and returning > smaller chunks to the client. Yeah definitely :) https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:45: static_cast<int*>(memory->Memory())[memory->Size() / sizeof(int) - 1] = 1; On 2013/10/25 15:21:13, pasko wrote: > if you wrote 'char', you would not have had to divide .. > > Also, the style asks to prefer sizeof(varname) to sizeof(type), elimination > solves the preference choice problem. Yeah good point. I'm also a big supporter of sizeof(varname) (and even sizeof with more complex expressions like function calls to do some advanced compile-time dispatching) but here there was no variable to use :) https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:87: // The previous ashmem region should have been closed thus it should not be On 2013/10/25 15:21:13, pasko wrote: > Does this rely on the region having all chunks unlocked? If so, then this > precondition may unintentionally get broken later. The allocator is reused > between tests, so this depends on the order in which tests are run, since > previous tests could have leftover chunks. Frankly, I don't understand how it > works: the test AllocateFreeAllocate seems to leave a locked chunk behind. As we discussed offline GTest is full of (nice) surprises :) Each test case has its own fixture class instance to provide some isolation. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:162: // allocation. 
On 2013/10/25 15:21:13, pasko wrote: > does ashmem guarantee the ranges of virtual addresses after each create? A > cleaner test would check that the address does not belong to the previous > region. I'm not sure I see what you mean here? :) The first ashmem region exists/is mapped during the whole test. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:171: int GetAllocatorDirtyMemoryInKBytes() { On 2013/10/25 15:21:13, pasko wrote: > there is a similar function in disk_cache_memory_test.cc, did you consider > unifying these two and placing somewhere in base/memory/dirty_memory_posix.cc? Yeah, I think I will just remove this function and the test below TBH. Sorry for the extra time you spent on it :/ I used to have this function to (incorrectly) test that free chunks were uncommitted (by using madvise()). As I explained in a comment in the .cc file, madvise(MADV_REMOVE) doesn't work unfortunately with ashmem/on non-root devices and madvise(MADV_FREE) just makes us hide our actual memory consumption as we used to do by unmapping the unpinned regions (please see https://code.google.com/p/chromium/issues/detail?id=311633). https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:204: EXPECT_EQ(4, GetAllocatorDirtyMemoryInKBytes()); On 2013/10/25 15:21:13, pasko wrote: > isn't this a subject to race conditions with /proc reporting? It's possible indeed if /proc reporting works by reading some metadata updated asynchronously (which I don't know about). I will just remove this test. If pages are not committed lazily then we have a serious problem :) https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_android.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_android.h:7: // discardable_memory_allocator_android.cc. On 2013/10/25 15:21:13, pasko wrote: > Since people will just look into this header as the main starting point to > gather knowledge about DiscardableMemory implementation on Android, I'd like to > ask to put some overview of how DiscardableMemory::CreateLockedMemory() works, > basically: > 1. Creates real ashmem region per each instance of returned DiscardableMemory > 2. when approaching the file descriptor limit for ashmem (90% used), start > allocating large regions (like 32MiB), return them to the user in chunks. > See DiscardableMemory::CreateLockedMemory() in discardable_memory_android.cc and > discardable_memory_allocator_android.h. I'm a little concerned about duplicating this information in several places. This is also an internal header so I would prefer making it painless to maintain (or not subject to containing stale information) rather than an entertaining read for public clients :) They can easily open discardable_memory.h and discardable_memory_android.cc to see what is going on if they really care (which they should not have to do). What do you think?
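As a side note to the madvise() point above, the probe-and-fall-back idea could look roughly like the sketch below. It assumes the third_party/ashmem wrapper's ashmem_unpin_region(fd, offset, len) signature and should not be read as the CL's actual code path.
== begin code fragment ==
// Hedged sketch of probe-and-fall-back; not the CL's code. The
// ashmem_unpin_region() signature is assumed from the third_party/ashmem
// wrapper.
#include <cstddef>
#include <sys/mman.h>

#include "third_party/ashmem/ashmem.h"

bool ReleaseOrUnpinChunk(int fd, void* address, size_t offset, size_t length) {
  // Try to actually discard the pages. Per the discussion above this fails
  // for ashmem mappings (and on non-rooted devices), so it mostly documents
  // why the allocator cannot release free chunks eagerly.
  if (madvise(address, length, MADV_REMOVE) == 0)
    return true;
  // Fall back to unpinning: the kernel may then reclaim the range under
  // memory pressure, but the pages stay committed until the FD is closed.
  return ashmem_unpin_region(fd, offset, length) != -1;
}
== end code fragment ==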
https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:316: bool OpenAshmemRegion() { ashmem regions can already be created, deleted, additionally they can be pinned, unpinned, while discardable memory can be locked and unlocked, with regions preferring to get opened and closed ... I am surprised not to find Acquire/Release :) I'd suggest naming: CreateLockedRegion DeleteRegion LockFreeChunk IsUninitialized https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:329: base::Lock lock_; // Protects the state below. this lock complicates interactions with the allocator while it does not seem to provide any benefits for performance, how about eliminating it? https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:162: // allocation. On 2013/10/28 09:44:43, Philippe wrote: > On 2013/10/25 15:21:13, pasko wrote: > > does ashmem guarantee the ranges of virtual addresses after each create? A > > cleaner test would check that the address does not belong to the previous > > region. > > I'm not sure I see what you mean here? :) The first ashmem region exists/is > mapped during the whole test. sorry, please disregard, I misread the test https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android_unittest.cc:171: int GetAllocatorDirtyMemoryInKBytes() { On 2013/10/28 09:44:43, Philippe wrote: > On 2013/10/25 15:21:13, pasko wrote: > > there is a similar function in disk_cache_memory_test.cc, did you consider > > unifying these two and placing somewhere in base/memory/dirty_memory_posix.cc? > > Yeah, I think I will just remove this function and the test below TBH. Sorry for > the extra time you spent on it :/ np! I am glad this ended up with less code :) https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_android.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_android.h:7: // discardable_memory_allocator_android.cc. On 2013/10/28 09:44:43, Philippe wrote: > On 2013/10/25 15:21:13, pasko wrote: > > Since people will just look into this header as the main starting point to > > gather knowledge about DiscardableMemory implementation on Android, I'd like > to > > ask to put some overview of how DiscardableMemory::CreateLockedMemory() > works, > > basically: > > 1. Creates real ashmem region per each instance of returned DiscardableMemory > > 2. when approaching the file descriptor limit for ashmem (90% used), start > > allocating large regions (like 32MiB), return them to the user in chunks. > > See DiscardableMemory::CreateLockedMemory() in discardable_memory_android.cc > and > > discardable_memory_allocator_android.h. > > I'm a little concerned about duplicating this information in several places. 
> This is also an internal header so I would prefer making it painless to maintain > (or not subject to containing stale information) rather than an entertaining > read for public clients :) They can easily open discardable_memory.h and > discardable_memory_android.cc to see what is going on if they really care (which > they should not have to do). What do you think? I understand your concern about duplication and about the header being internal. You are right, my proposal is probably not providing any value by being in an internal header. I was just thinking an outline of the algorithm in 3 short sentences can greatly simplify understanding of what is happening while the implementation is scattered across 2 files (600 lines total): discardable_memory_android.cc discardable_memory_allocator_android.cc One other way I could think of is to put the description of Android specifics into discardable_memory.h, but I won't insist on that, up to you.
https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory.h:48: // Thread-safety: DiscardableMemory instances are not thread-safe. On 2013/10/28 09:44:43, Philippe wrote: > On 2013/10/25 15:21:13, pasko wrote: > > nit: It should probably also say somewhere that it returns NULL if out of > > resources, and clients should be ready to fallback to heap allocations. > > Good point although David wants to move to a point of reliability where > discardable memory allocations can be considered as malloc failures (i.e. > fatal): https://code.google.com/p/chromium/issues/detail?id=310231 > > If you don't mind I will let him address or not your comment :) If the platform specific DM implementation requires fallback to heap allocations, then I'd like to see that handled automatically by the DM system using emulated DM buffers instead of pushing the responsibility to the client. However, emulated DM need to land before we can do this of course.
https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory.h (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory.h:48: // Thread-safety: DiscardableMemory instances are not thread-safe. On 2013/10/28 15:45:23, David Reveman wrote: > On 2013/10/28 09:44:43, Philippe wrote: > > On 2013/10/25 15:21:13, pasko wrote: > > > nit: It should probably also say somewhere that it returns NULL if out of > > > resources, and clients should be ready to fallback to heap allocations. > > > > Good point although David wants to move to a point of reliability where > > discardable memory allocations can be considered as malloc failures (i.e. > > fatal): https://code.google.com/p/chromium/issues/detail?id=310231 > > > > If you don't mind I will let him address or not your comment :) > > If the platform specific DM implementation requires fallback to heap > allocations, then I'd like to see that handled automatically by the DM system > using emulated DM buffers instead of pushing the responsibility to the client. > However, emulated DM need to land before we can do this of course. +1 to that. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:316: bool OpenAshmemRegion() { On 2013/10/28 14:40:06, pasko wrote: > ashmem regions can already be created, deleted, additionally they can be pinned, > unpinned, while discardable memory can be locked and unlocked, with regions > preferring to get opened and closed ... I am surprised not to find > Acquire/Release :) > > I'd suggest naming: > CreateLockedRegion > DeleteRegion > LockFreeChunk > IsUninitialized Thanks. I renamed internal::DeleteAshmemRegion() to internal::CloseAshmemRegion() since using DeleteRegion() here could let the reader believe that the object is self-deleting which is not the case. I also made IsUninitialized() IsInitialized() instead (it gives me slightly less headaches compared to things like !IsUninitialized() :). Let me know if you felt strongly about those. https://codereview.chromium.org/25293002/diff/801001/base/memory/discardable_... base/memory/discardable_memory_allocator_android.cc:329: base::Lock lock_; // Protects the state below. On 2013/10/28 14:40:06, pasko wrote: > this lock complicates interactions with the allocator while it does not seem to > provide any benefits for performance, how about eliminating it? Yeah. The great thing with eliminating it (in addition to the possibly complex threading interactions) is that I will be able to address my own comment on line 269 which means that we can get rid of AshmemRegionClosed(), CloseAshmemRegion(), OpenAshmemRegion() and make most of the object's state immutable :)
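To illustrate the lock-removal direction discussed above in a hedged way: if every access really were confined to one thread, a base::ThreadChecker could document and DCHECK that assumption instead of serializing with a base::Lock. This is only a sketch (the class and member below are made up) and not necessarily the shape the code ends up with.
== begin code fragment ==
// Hedged sketch, assuming strictly single-threaded access; not the CL's code.
#include <cstddef>

#include "base/logging.h"
#include "base/threading/thread_checker.h"

class SingleThreadedAllocatorState {
 public:
  SingleThreadedAllocatorState() : free_chunk_count_(0) {}

  void OnChunkFreed() {
    // Documents and (in debug builds) enforces the single-thread assumption
    // that would let base::Lock go away.
    DCHECK(thread_checker_.CalledOnValidThread());
    ++free_chunk_count_;
  }

 private:
  base::ThreadChecker thread_checker_;
  size_t free_chunk_count_;
};
== end code fragment ==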
Oops I didn't mean to publish the comments now (since I haven't pushed the new patch set yet), sorry :)
https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:248: void MergeAndAddFreeChunk(int fd, FYI: parameter |fd| is not used.
https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:248: void MergeAndAddFreeChunk(int fd, On 2013/10/28 17:44:45, pasko wrote: > FYI: parameter |fd| is not used. Wow, thanks! It was used at some point I believe but not anymore.
LGTM, thanks, Philippe! https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:264: while (true) { Really a nit: I generally prefer this style of loops: with a minimum number of break statements: == begin code fragment == next_chunk = chunk; free_chunk_size = size; do { next_chunk = next_chunk + free_chunk_size; const FreeChunk free_chunk = RemoveFreeChunk(next_chunk); free_chunk_size = free_chunk.size; } while (!free_chunk.is_null()); == end code fragment == Also minimizes the number of lines in the loop. Do you like it? I would also add some comments like: // Merge with the previous chunks. // Merge with the next chunks. There was no need for that before, there was a self-descriptive constant, but now it would look nicer.
I've skimmed the whole CL now and have general comments before I dive deeper. At the risk of prematurely optimizing for abstraction, what do you think about restructuring the abstraction boundaries? Here's a straw man proposal: * Have DiscardableMemoryAndroid always go through DiscardableMemoryAllocator. * Decouple DiscardableMemoryAllocator from ashmem, make it operate purely on memory regions, with abstract resource requirements. Those resource requirements are currently controlled in DiscardableMemory::CreateLockedMemory(), which modulates between a 1:1 and M:N DiscardableMemory:backingstore relationship. This resource requirement could move into the allocator itself, statically configured per-platform. * If DiscardableMemoryAllocator is decoupled from ashmem and just operates on memory regions (via callbacks), then it'd be easier to test without relying on system globals like file descriptor limits, as long as these limits can be abstractly expressed to the allocator. * This also makes DiscardableMemoryAllocator potentially reusable by other platforms that don't use ashmem. It's not obvious to me that this is necessary, but it's possible. Pros of this proposal: * Testability without relying on system global values * Makes it possible to potentially reuse the allocator for other platforms (which may be advantageous for reasons other than just FD limits, such as fewer kernel<=>user context switches) Cons: * Extra layer of abstraction, especially for the common(?) 1:1 case. Does that incur extra overhead? https://codereview.chromium.org/25293002/diff/921001/base/memory/discardable_... File base/memory/discardable_memory.h (right): https://codereview.chromium.org/25293002/diff/921001/base/memory/discardable_... base/memory/discardable_memory.h:58: // Returns the memory's actual size that is greater or equal than the Can you explain why this is an important API to add? I'd equate this to malloc() implementations where due to page alignment boundaries and other implementation reasons, there may be extra usable memory, but the malloc() implementation doesn't expose that, and it's considered illegal to use that extra memory. I think it's simpler not to support this API. If there is a good reason to support this (performance?), I can be convinced otherwise.
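To make the straw man above concrete, here is one possible shape for the decoupled interface; every name is invented for illustration and nothing like it exists in the CL.
== begin code fragment ==
// Straw-man illustration of the proposed decoupling; all names are made up.
#include <cstddef>

class MemoryRegionProvider {
 public:
  virtual ~MemoryRegionProvider() {}
  // Maps a new backing region of |size| bytes, or returns NULL when the
  // platform resource budget (e.g. file descriptors) is exhausted.
  virtual void* CreateRegion(size_t size) = 0;
  virtual void DeleteRegion(void* address, size_t size) = 0;
  // Marks [offset, offset + length) of a region purgeable / non-purgeable.
  virtual bool Unpin(void* address, size_t offset, size_t length) = 0;
  virtual bool Pin(void* address, size_t offset, size_t length) = 0;
  // Lets the allocator switch between the 1:1 and M:N strategies without
  // reading system globals directly.
  virtual bool IsNearResourceLimit() const = 0;
};
== end code fragment ==
A unit test could then inject a fake provider with an artificially low limit instead of exhausting real file descriptors.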
Thanks a lot Egor and William! I will look at your proposal in more detail today, William. https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:264: while (true) { On 2013/10/28 18:10:33, pasko wrote: > Really a nit: > > I generally prefer this style of loops: with a minimum number of break > statements: > == begin code fragment == > next_chunk = chunk; > free_chunk_size = size; > do { > next_chunk = next_chunk + free_chunk_size; > const FreeChunk free_chunk = RemoveFreeChunk(next_chunk); > free_chunk_size = free_chunk.size; > while (!free_chunk.is_null()); > == end code fragment == > > Also minimizes the number of lines in the loop. Do you like it? > > I would also add some comments like: > // Merge with the previous chunks. > > // Merge with the next chunks. > > There was no need for that before, there was a self-descriptive constant, but > now it would look nicer. Thanks for the snippet Egor :) The while loop looks more like some code that could have been written by a kid :) but I find it slightly more readable. In particular I would be concerned with the asymmetry of the while loop for previous chunks and the do while loop for the next chunks. Making the previous chunks merging use a do while loop would require a small, not particularly readable hack I believe (I'm open to suggestions though :)) or an extra variable, which is fine from the performance perspective but again makes it slightly less readable IMO (I like to involve as little mutable state as possible and also limit the scope/lifetime of the variables as much as possible). What do you think?
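For reference, a self-contained version of the two merge loops being debated might look like the fragment below; FreeChunk mimics the type used in the discussion, the two Remove* helpers are invented stand-ins for the address-keyed bookkeeping, and the point is only to show where the previous/next asymmetry comes from.
== begin code fragment ==
// Hedged sketch, not the CL's code; the Remove* helpers are invented.
#include <cstddef>

struct FreeChunk {
  FreeChunk() : start(NULL), size(0) {}
  bool is_null() const { return start == NULL; }
  char* start;
  size_t size;
};

FreeChunk RemoveFreeChunkEndingAt(char* address);    // Previous neighbour.
FreeChunk RemoveFreeChunkStartingAt(char* address);  // Next neighbour.

void MergeContiguousFreeChunks(char** chunk, size_t* size) {
  // Merge with the previous chunks: the candidate must be looked up before
  // the bounds can be extended, hence the test-at-the-top loop.
  while (true) {
    const FreeChunk previous = RemoveFreeChunkEndingAt(*chunk);
    if (previous.is_null())
      break;
    *chunk = previous.start;
    *size += previous.size;
  }
  // Merge with the next chunks: this direction could be a do/while, but only
  // by hoisting the FreeChunk out of the body, which is the extra mutable
  // state and asymmetry discussed above.
  while (true) {
    const FreeChunk next = RemoveFreeChunkStartingAt(*chunk + *size);
    if (next.is_null())
      break;
    *size += next.size;
  }
}
== end code fragment ==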
https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:264: while (true) { On 2013/10/29 10:11:25, Philippe wrote: > On 2013/10/28 18:10:33, pasko wrote: > > Really a nit: > > > > I generally prefer this style of loops: with a minimum number of break > > statements: > > == begin code fragment == > > next_chunk = chunk; > > free_chunk_size = size; > > do { > > next_chunk = next_chunk + free_chunk_size; > > const FreeChunk free_chunk = RemoveFreeChunk(next_chunk); > > free_chunk_size = free_chunk.size; > > while (!free_chunk.is_null()); > > == end code fragment == > > > > Also minimizes the number of lines in the loop. Do you like it? > > > > I would also add some comments like: > > // Merge with the previous chunks. > > > > // Merge with the next chunks. > > > > There was no need for that before, there was a self-descriptive constant, but > > now it would look nicer. > > Thanks for the snippet Egor :) The while loop looks more like some code that > could have been written by a kid :) but I find it slightly more readable. In > particular I would be concerned with the asymmetry of the while loop for > previous chunks and the do while loop for the next chunks. Making the previous > chunks merging use a do while loop would require a small non-super readable hack > I believe (I'm open to suggestions though :)) or an extra variable which is fine > from the performance perspective but again makes readability slightly less > easier IMO (I like to involve as less mutable state as possible and also limit > the scope/lifetime of the variables as much as possible). > > What do you think? Hm, I did not think of asymmetry being a concern. But well, that's just a nit, consistency sometimes probably maybe in some sense is good to maintain. Thanks for adding comments BTW, I generally like to prepend such block-level comments with an empty line, but at the same time I'm trying to resist being too picky here, so up to you.
https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1151001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:264: while (true) { On 2013/10/29 10:47:57, pasko wrote: > On 2013/10/29 10:11:25, Philippe wrote: > > On 2013/10/28 18:10:33, pasko wrote: > > > Really a nit: > > > > > > I generally prefer this style of loops: with a minimum number of break > > > statements: > > > == begin code fragment == > > > next_chunk = chunk; > > > free_chunk_size = size; > > > do { > > > next_chunk = next_chunk + free_chunk_size; > > > const FreeChunk free_chunk = RemoveFreeChunk(next_chunk); > > > free_chunk_size = free_chunk.size; > > > while (!free_chunk.is_null()); > > > == end code fragment == > > > > > > Also minimizes the number of lines in the loop. Do you like it? > > > > > > I would also add some comments like: > > > // Merge with the previous chunks. > > > > > > // Merge with the next chunks. > > > > > > There was no need for that before, there was a self-descriptive constant, > but > > > now it would look nicer. > > > > Thanks for the snippet Egor :) The while loop looks more like some code that > > could have been written by a kid :) but I find it slightly more readable. In > > particular I would be concerned with the asymmetry of the while loop for > > previous chunks and the do while loop for the next chunks. Making the previous > > chunks merging use a do while loop would require a small non-super readable > hack > > I believe (I'm open to suggestions though :)) or an extra variable which is > fine > > from the performance perspective but again makes readability slightly less > > easier IMO (I like to involve as less mutable state as possible and also limit > > the scope/lifetime of the variables as much as possible). > > > > What do you think? > > Hm, I did not think of asymmetry being a concern. But well, that's just a nit, > consistency sometimes probably maybe in some sense is good to maintain. > > Thanks for adding comments BTW, I generally like to prepend such block-level > comments with an empty line, but at the same time I'm trying to resist being too > picky here, so up to you. As you may have noticed I'm not super generous in terms of adding blank lines :) I used to add a lot of blank lines before joining Google but I got used to the very packed style in Google3 (in my previous team at least). That's kind of an extreme point of view but IMO blank lines create inconsistencies since everyone has its own preferences regarding their position/amount. I find generally comments enough to distinguish blocks in an editor with syntax highlighting. If a function gets particularly long/unreadable I prefer to split it (when it can). Just my 2 cents though :)
Hi William, I think the testability could indeed be improved with your proposal. Additionally, it may even test the allocator on platforms where it is not currently used. But, yes, at the same time there is a concern of adding another layer. And adding a layer is probably easier than removing it :) We can add another layer once it is used/can be used on two platforms maybe? Do you think the current design is so non-optimal in terms of testability? I agree that we are using system resources (e.g. ashmem) during testing but we are doing so in a very limited fashion. You might be more concerned about the current implementation of DiscardableMemory::Create() though, which does the direct instantiation vs. allocator allocation dispatching. Previous tests in the test suite would really have to leak file descriptors though to trigger the non-expected path (allocator path), which is possible but maybe not a testing use case we should optimize for? I would love to use the allocator on other platforms although I’m not convinced it could bring a (significant) performance boost. You made a good point that it would possibly allow minimizing the kernel <-> userland context switches; however, pinning/unpinning still involves a syscall on Android if not on all the platforms (in addition to the few unfortunate heap allocations performed by e.g. std::multiset<>). Note that in a future iteration I would like to spend time doing some profiling to see if the allocator is a potential bottleneck and how we could make it faster if so. Aside from performance the main reason to use it on other platforms could be to solve the same problem as on Android: the fact that ashmem is a limited resource. However the whole allocator relies on a very specific property: the ability to pin/unpin subranges in a region. Platforms would need to have the same properties to allow the allocator to be used. Linux/ChromeOS would be a good candidate though with the upcoming (but controversial) volatile ranges that seem to have the same properties I believe (file-based + ability to pin/unpin ranges). But we won’t have kernel support for that next week :) To be honest I would not be completely against seeing how the allocator performs in production/in the field on Android first before we get more ambitious :) What do you think? Thanks, Philippe.
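Since the argument above hinges on being able to pin/unpin sub-ranges of a region, here is a hedged sketch of that primitive as exposed by the third_party/ashmem wrapper; the exact signatures and return-value conventions are assumed, not taken from the CL.
== begin code fragment ==
// Hedged sketch of the pin/unpin-subrange property; wrapper signatures assumed.
#include <cstddef>

#include "third_party/ashmem/ashmem.h"

bool UnpinChunk(int fd, size_t offset, size_t length) {
  // The kernel may purge these pages under memory pressure from now on.
  return ashmem_unpin_region(fd, offset, length) != -1;
}

bool PinChunk(int fd, size_t offset, size_t length) {
  // Note: the underlying ioctl also reports whether the range was purged
  // while unpinned; a purged chunk must be treated as uninitialized.
  return ashmem_pin_region(fd, offset, length) != -1;
}
== end code fragment ==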
William: Ping? :)
Terribly sorry. If I don't respond in a day, feel free to ping me. I'm unfortunately at the Vancouver IETF 88 meeting right now, but I'll keep this tab open and get around to it when I can.
No worries William :) I indeed saw on your calendar that you were OOO. Thanks. On Wed, Nov 6, 2013 at 5:50 PM, William Chan (陈智昌) <willchan@chromium.org>wrote: > Terribly sorry. If I don't respond in a day, feel free to ping me. I'm > unfortunately at the Vancouver IETF 88 meeting right now, but I'll keep > this tab open and get around to it when I can. > > > On Wed, Nov 6, 2013 at 7:50 AM, Philippe Liard <pliard@google.com> wrote: > >> William: Ping? :) >> >> >> On Thu, Oct 31, 2013 at 3:05 PM, Philippe Liard <pliard@google.com>wrote: >> >>> Hi William, >>> >>> I think the testability could indeed be improved with your proposal. >>> Additionally, it may even test the allocator on platforms where it is not >>> currently used. But, yes, at the same time there is a concern of adding >>> another layer. And adding a layer is probably easier than removing it :) We >>> can add another layer once it is used/can be used on two platforms maybe? >>> >>> Do you think the current design is so non-optimal in terms of >>> testability? I agree that we are using system resources (e.g. ashmem) >>> during testing but we are doing so in a very limited fashion. You might be >>> more concerned about the current implementation of >>> DiscardableMemory::Create() though that does the direct instantiation vs. >>> allocator allocation dispatching. Previous tests in the test suite would >>> really have to leak file descriptors though to trigger the non-expected >>> path (allocator path) which is possible but maybe not a testing use case we >>> should optimize for? >>> >>> I would love to use the allocator on other platforms although I’m not >>> convinced it could bring a (significant) performance boost. You made a good >>> point that it would possibly allow to minimize the kernel <-> userland >>> context switches however pinning/unpinning still involves a syscall on >>> Android if not on all the platforms (in additions to the few unfortunate >>> heap allocations performed by e.g. std::multiset<>). Note that in a next >>> iteration I would like to spend time doing some profiling to see if the >>> allocator is a potential bottleneck and how we could make it faster if so. >>> >>> Aside from performance the main reason to use it on other platforms >>> could be to solve the same problem as on Android: the fact that ashmem is a >>> limited resource. However the whole allocator relies on a very specific >>> property: the ability to pin/unpin subranges in a region. Platforms would >>> need to have the same properties to allow the allocator to be used. >>> Linux/ChromeOS would be a good candidate though with the incoming (but >>> controversial) volatile ranges that seem to have the same properties I >>> believe (file-based + ability to pin/unpin ranges). But we won’t have >>> kernel support for that next week :) >>> >>> To be honest I would not be completely against seeing how the allocator >>> performs in production/in the field on Android first before we get more >>> ambitious :) >>> >>> What do you think? >>> >>> Thanks, >>> Philippe. 
>>> >>> >>> >>> On Tue, Oct 29, 2013 at 11:54 AM, <pliard@chromium.org> wrote: >>> >>>> >>>> https://codereview.chromium.org/25293002/diff/1151001/ >>>> base/memory/discardable_memory_allocator_android.cc >>>> File base/memory/discardable_memory_allocator_android.cc (right): >>>> >>>> https://codereview.chromium.org/25293002/diff/1151001/ >>>> base/memory/discardable_memory_allocator_android.cc#newcode264 >>>> base/memory/discardable_memory_allocator_android.cc:264: while (true) { >>>> On 2013/10/29 10:47:57, pasko wrote: >>>> >>>>> On 2013/10/29 10:11:25, Philippe wrote: >>>>> > On 2013/10/28 18:10:33, pasko wrote: >>>>> > > Really a nit: >>>>> > > >>>>> > > I generally prefer this style of loops: with a minimum number of >>>>> >>>> break >>>> >>>>> > > statements: >>>>> > > == begin code fragment == >>>>> > > next_chunk = chunk; >>>>> > > free_chunk_size = size; >>>>> > > do { >>>>> > > next_chunk = next_chunk + free_chunk_size; >>>>> > > const FreeChunk free_chunk = RemoveFreeChunk(next_chunk); >>>>> > > free_chunk_size = free_chunk.size; >>>>> > > while (!free_chunk.is_null()); >>>>> > > == end code fragment == >>>>> > > >>>>> > > Also minimizes the number of lines in the loop. Do you like it? >>>>> > > >>>>> > > I would also add some comments like: >>>>> > > // Merge with the previous chunks. >>>>> > > >>>>> > > // Merge with the next chunks. >>>>> > > >>>>> > > There was no need for that before, there was a self-descriptive >>>>> >>>> constant, >>>> >>>>> but >>>>> > > now it would look nicer. >>>>> > >>>>> > Thanks for the snippet Egor :) The while loop looks more like some >>>>> >>>> code that >>>> >>>>> > could have been written by a kid :) but I find it slightly more >>>>> >>>> readable. In >>>> >>>>> > particular I would be concerned with the asymmetry of the while loop >>>>> >>>> for >>>> >>>>> > previous chunks and the do while loop for the next chunks. Making >>>>> >>>> the previous >>>> >>>>> > chunks merging use a do while loop would require a small non-super >>>>> >>>> readable >>>> >>>>> hack >>>>> > I believe (I'm open to suggestions though :)) or an extra variable >>>>> >>>> which is >>>> >>>>> fine >>>>> > from the performance perspective but again makes readability >>>>> >>>> slightly less >>>> >>>>> > easier IMO (I like to involve as less mutable state as possible and >>>>> >>>> also limit >>>> >>>>> > the scope/lifetime of the variables as much as possible). >>>>> > >>>>> > What do you think? >>>>> >>>> >>>> Hm, I did not think of asymmetry being a concern. But well, that's >>>>> >>>> just a nit, >>>> >>>>> consistency sometimes probably maybe in some sense is good to >>>>> >>>> maintain. >>>> >>>> Thanks for adding comments BTW, I generally like to prepend such >>>>> >>>> block-level >>>> >>>>> comments with an empty line, but at the same time I'm trying to resist >>>>> >>>> being too >>>> >>>>> picky here, so up to you. >>>>> >>>> >>>> As you may have noticed I'm not super generous in terms of adding blank >>>> lines :) I used to add a lot of blank lines before joining Google but I >>>> got used to the very packed style in Google3 (in my previous team at >>>> least). That's kind of an extreme point of view but IMO blank lines >>>> create inconsistencies since everyone has its own preferences regarding >>>> their position/amount. I find generally comments enough to distinguish >>>> blocks in an editor with syntax highlighting. If a function gets >>>> particularly long/unreadable I prefer to split it (when it can). 
Just my >>>> 2 cents though :) >>>> >>>> https://codereview.chromium.org/25293002/
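For readers trying to compile the fragment quoted above: it is missing the closing brace of the do block, and a variable declared inside the body of a do loop is not visible in its while condition. A version of the same "merge with the next chunks" idea that actually compiles, keeping FreeChunk and RemoveFreeChunk as placeholders taken from the fragment rather than from the real patch, would look more like this:

== begin code fragment ==
// Placeholders from the quoted fragment, not the actual patch:
//   FreeChunk       - holds the size of a free chunk and an is_null() test.
//   RemoveFreeChunk - takes the free chunk starting at |next_chunk| out of
//                     the free list, returning a "null" FreeChunk if no free
//                     chunk starts there.
void* next_chunk = chunk;
size_t free_chunk_size = size;
FreeChunk free_chunk;  // Declared outside so the loop condition can see it.
do {
  // Walk to the chunk that starts right after the current one and try to
  // take it off the free list; a null result terminates the merge.
  next_chunk = static_cast<char*>(next_chunk) + free_chunk_size;
  free_chunk = RemoveFreeChunk(next_chunk);
  free_chunk_size = free_chunk.size;
} while (!free_chunk.is_null());
== end code fragment ==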
Friendly weekly ping :)
Sigh. Such fail. My bad. I'll try to do this tonight before I go to sleep (I'm in Tokyo right now).
Sorry, I'm still jetlagged and didn't finish this iteration. I'm fine with your reply about not adding the abstraction layer yet. I commented on the factory method about another reason I liked the extra abstraction layer, and I'm curious what you think about it. I'll hopefully finish it up tomorrow. https://codereview.chromium.org/25293002/diff/921001/base/memory/discardable_... File base/memory/discardable_memory.h (right): https://codereview.chromium.org/25293002/diff/921001/base/memory/discardable_... base/memory/discardable_memory.h:58: // Returns the memory's actual size that is greater or equal than the On 2013/10/28 19:54:05, willchan wrote: > Can you explain why this is an important API to add? I'd equate this to malloc() > implementations where due to page alignment boundaries and other implementation > reasons, there may be extra usable memory, but the malloc() implementation > doesn't expose that, and it's considered illegal to use that extra memory. I > think it's a simpler not to support this API. If there is a good reason to > support this (performance?), I can be convinced otherwise. Can you answer this question? https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:44: // Note that this is not replaced with base::Callback to save the extra heap My initial inclination is to call this premature optimization. Do you expect this code path to be hot enough that this would actually matter? Or do you think this is a death by a thousand cuts thing and we should prefer this pattern in general? https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:46: struct DeletionObserver { class? https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:55: DiscardableAshmemChunk(DeletionObserver* deletion_observer, The lifetime of the deletion_observer needs to be documented. I assume it must always outlive the DiscardableAshmemChunk. https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.h (right): https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.h:41: DiscardableMemoryAllocator(const std::string& name); explicit https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... File base/memory/discardable_memory_android.cc (right): https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_android.cc:44: void decrement_ahmem_fd_count() { s/ahmem/ashmem/? https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_android.cc:197: scoped_ptr<DiscardableMemory> DiscardableMemory::CreateLockedMemory( One reason I had suggested the decoupling of the allocator from ashmem is that this function is a little confusing. Unlike the other platforms' DiscardableMemory factories, this one creates a DiscardableMemoryAndroid sometime, and sometimes uses the allocator. It's non-obvious to someone skimming the code that this factory method wouldn't always create a DiscardableMemoryAndroid, but might go through this allocator instead. 
I think it'd be more consistent to have this factory method always create a DiscardableMemoryAndroid, and have the DiscardableMemoryAndroid usually be the simple ashmem region, but sometimes be a chunk of a larger ashmem region.

Alternatively, perhaps rename DiscardableMemoryAndroid to something else (I suck at naming, but perhaps DiscardableMemoryAndroidSimple?). This makes it more obvious that the reader should look for another object for other cases. Adding comments in that class definition explaining that it's not always used would also be helpful.

I just don't want someone who skims the code looking for DiscardableMemoryFooPlatform to see DiscardableMemoryAndroid and assume that that is the class that is always instantiated. It'd be easy to miss that the factory method switches between the simple object and the allocator.
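To make the skimming concern concrete, the dispatch being discussed has roughly the following shape. This is a sketch using the naming suggested in the comment above, not the code under review; GetAshmemFdUsage(), kAshmemFdLimit and g_allocator are invented purely for the illustration.

== begin code fragment ==
// Illustration only: the factory either returns the "simple"
// one-region-per-allocation object or falls back to the shared allocator.
// GetAshmemFdUsage(), kAshmemFdLimit and g_allocator are hypothetical names.
scoped_ptr<DiscardableMemory> DiscardableMemory::CreateLockedMemory(
    size_t size) {
  if (GetAshmemFdUsage() < kAshmemFdLimit) {
    // Default path: one ashmem region (one FD) per allocation, so the region
    // can be released individually by closing its descriptor.
    return scoped_ptr<DiscardableMemory>(
        new DiscardableMemoryAndroidSimple(size));
  }
  // FD budget exhausted: hand out a chunk carved from a large shared region.
  return g_allocator.Allocate(size);
}
== end code fragment ==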
Thanks a lot William! https://codereview.chromium.org/25293002/diff/921001/base/memory/discardable_... File base/memory/discardable_memory.h (right): https://codereview.chromium.org/25293002/diff/921001/base/memory/discardable_... base/memory/discardable_memory.h:58: // Returns the memory's actual size that is greater or equal than the On 2013/11/18 14:53:13, willchan wrote: > On 2013/10/28 19:54:05, willchan wrote: > > Can you explain why this is an important API to add? I'd equate this to > malloc() > > implementations where due to page alignment boundaries and other > implementation > > reasons, there may be extra usable memory, but the malloc() implementation > > doesn't expose that, and it's considered illegal to use that extra memory. I > > think it's a simpler not to support this API. If there is a good reason to > > support this (performance?), I can be convinced otherwise. > > Can you answer this question? Yes, sorry. I added this mainly to simplify testing TBH. However I expect this could be useful to let the client minimize fragmentation (and avoid unnecessary resize if this is used to implement a vector for instance). The changes I'm making currently in Blink to use discardable memory (e.g. in WTF::Vector) already make sure that the allocated sizes are page-aligned so I don't strictly need this. I'm happy to remove this now and possibly reconsider it later if I have data that shows this is useful. https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:44: // Note that this is not replaced with base::Callback to save the extra heap On 2013/11/18 14:53:13, willchan wrote: > My initial inclination is to call this premature optimization. Do you expect > this code path to be hot enough that this would actually matter? Or do you think > this is a death by a thousand cuts thing and we should prefer this pattern in > general? I would also call that premature optimization if this code was in an upper layer and not in a "memory allocator". We are already seeing performance issues on Android because we do way too many heap allocations in general and dlmalloc doesn't scale in multi-threaded environments (see http://code.google.com/p/chromium/issues/detail?id=309658 and http://code.google.com/p/chromium/issues/detail?id=303282 for instance). I would like ideally to make discardable memory a first class citizen in Blink/make it appealing in general. If one discardable memory allocation implies e.g. 5 heap allocations + some atomicops then it's harder to sell :) I agree that such choices should be preceded with profiling though (which I intend to do once we submitted this "baseline"). I took the liberty here of very slightly (IMO) optimizing prematurely since the "optimized" version (i.e. using the observer) is not overly complicated (IMO) compared to the "non-optimized" one (using a Callback). In fact people already mistakenly use single-method delegates sometimes in place of callbacks in upper layers where performance doesn't necessarily matter so I thought this would be reasonable here in base/. What do you think? Completely off-topic, but since we are talking about premature optimization, I came across this nice read about mature optimization this weekend. 
I strongly recommend it if you haven't seen it before: http://carlos.bueno.org/optimization/mature-optimization.pdf https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:46: struct DeletionObserver { On 2013/11/18 14:53:13, willchan wrote: > class? Done. https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:55: DiscardableAshmemChunk(DeletionObserver* deletion_observer, On 2013/11/18 14:53:13, willchan wrote: > The lifetime of the deletion_observer needs to be documented. I assume it must > always outlive the DiscardableAshmemChunk. Yeah, done. https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.h (right): https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.h:41: DiscardableMemoryAllocator(const std::string& name); On 2013/11/18 14:53:13, willchan wrote: > explicit Done. https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... File base/memory/discardable_memory_android.cc (right): https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_android.cc:44: void decrement_ahmem_fd_count() { On 2013/11/18 14:53:13, willchan wrote: > s/ahmem/ashmem/? Good catch. https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_android.cc:197: scoped_ptr<DiscardableMemory> DiscardableMemory::CreateLockedMemory( On 2013/11/18 14:53:13, willchan wrote: > One reason I had suggested the decoupling of the allocator from ashmem is that > this function is a little confusing. Unlike the other platforms' > DiscardableMemory factories, this one creates a DiscardableMemoryAndroid > sometime, and sometimes uses the allocator. It's non-obvious to someone skimming > the code that this factory method wouldn't always create a > DiscardableMemoryAndroid, but might go through this allocator instead. I think > it'd be more consistent to have this factory method always create a > DiscardableMemoryAndroid, and have the DiscardableMemoryAndroid usually be the > simple ashmem region, but sometimes be a chunk of a larger ashmem region. > > Alternatively, perhaps renaming DiscardableMemoryAndroid to something else (I > suck at naming, but perhaps DiscardableMemoryAndroidSimple?). This makes it more > obvious that the reader should look for another object for other cases. Adding > comments in that class definition explaining that it's not always used would > also be helpful. > > I just don't want someone who skims the code looking for > DiscardableMemoryFooPlatform to see DiscardableMemoryAndroid and assume that > that is the class that is always instantiated. It'd be easy to miss the factory > method's modulating between using the simple object vs the allocator. Agreed, thanks. I renamed the class to DiscardableMemoryAndroidSimple (the name is fine IMO) and added a comment above the class declaration.
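As context for the observer-versus-callback trade-off above, the observer side boils down to a single-method interface along these lines; the method name and parameters are paraphrased for illustration and are not copied from the patch.

== begin code fragment ==
#include <stddef.h>

// A single-method observer like this costs one virtual call per chunk
// deletion, whereas a base::Callback bound to the owning region would
// heap-allocate its bound state for every chunk handed out.
class DeletionObserver {
 public:
  // Called from the chunk's destructor. The observer (the region/allocator
  // that owns the underlying ashmem range) must therefore outlive every
  // chunk it handed out.
  virtual void OnChunkDeletion(void* address, size_t size) = 0;

 protected:
  virtual ~DeletionObserver() {}
};
== end code fragment ==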
Sorry for taking so long. I've done a fairly deep review now and there's no big issue here. I am not sure if there's a merging bug in the code, but I suggested a test for it just to make sure. Most of my suggestions are just personal recommendations for what I think would make it clearer, but as always, feel free to disagree if you prefer the current way. Cheers. https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1261001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:44: // Note that this is not replaced with base::Callback to save the extra heap On 2013/11/18 16:34:51, Philippe wrote: > On 2013/11/18 14:53:13, willchan wrote: > > My initial inclination is to call this premature optimization. Do you expect > > this code path to be hot enough that this would actually matter? Or do you > think > > this is a death by a thousand cuts thing and we should prefer this pattern in > > general? > > I would also call that premature optimization if this code was in an upper layer > and not in a "memory allocator". We are already seeing performance issues on > Android because we do way too many heap allocations in general and dlmalloc > doesn't scale in multi-threaded environments (see > http://code.google.com/p/chromium/issues/detail?id=309658 and > http://code.google.com/p/chromium/issues/detail?id=303282 for instance). I would > like ideally to make discardable memory a first class citizen in Blink/make it > appealing in general. If one discardable memory allocation implies e.g. 5 heap > allocations + some atomicops then it's harder to sell :) I agree that such > choices should be preceded with profiling though (which I intend to do once we > submitted this "baseline"). I took the liberty here of very slightly (IMO) > optimizing prematurely since the "optimized" version (i.e. using the observer) > is not overly complicated (IMO) compared to the "non-optimized" one (using a > Callback). In fact people already mistakenly use single-method delegates > sometimes in place of callbacks in upper layers where performance doesn't > necessarily matter so I thought this would be reasonable here in base/. > > What do you think? Since this isn't in a public header, I don't mind so much. I defer to you. I guess I don't really think of DiscardableMemory as being cheap anyway, so an extra heap alloc doesn't bother me. It usually (except under memory pressure, when it switches to the allocator) has to do a context switch to acquire an ashmem region anyway. This is why I asked if you expect the code path to be hot. If this were hot, I'd think it'd be nicer to use the DiscardableMemoryAllocator more so as to avoid the kernel context switch. That said, as I look at it again, why don't you just make this a sibling class of AshmemRegion, so it'll also be nested within DiscardableMemoryAllocator. And then pass in a AshmemRegion*, rather than use the extra interface class. Do you think the interface layering buys very much? I think it'll make it more readable (and save the virtual indirection) by avoiding requiring the reader to chase down the indirection. > > Completely off-topic, but since we are talking about premature optimization, I > came across this nice read about mature optimization this weekend. I strongly > recommend it if you haven't seen it before: > http://carlos.bueno.org/optimization/mature-optimization.pdf TL;DR yet, but maybe later. 
Looks interesting though. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:119: Lock* lock, DiscardableMemoryAllocator::AshmemRegion is a nested class, so it has access to DiscardableMemoryAllocator (the enclosing class)'s private members. Why don't you just pass DiscardableMemoryAllocator*, rather than a private member pointer and a callback. This also saves the extra heap allocations. And it makes things a bit clearer in terms of ownership, since |lock|'s lifetime is not clear. DiscardableMemoryAllocator and AshmemRegion are already pretty tightly coupled after all, so the weak layering boundaries don't buy that much, and it's all internal to this .cc file anyway. I think giving up and accepting the tight coupling will improve readability since the reader doesn't have to chase down the indirection to find out what the concrete references are. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:134: scoped_ptr<DiscardableMemory> Allocate(size_t client_requested_size, Can you document this function? https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:147: new DiscardableAshmemChunk(this, fd_, last_allocated_chunk_, address, Is this the highest allocated chunk? What do you mean by "last"? It's unclear if "last" includes a just recycled chunk. It looks like it does not, since you don't update this variable when you can RecycleFreeChunk(). https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:233: // the value of |chunk_merging_flags|. chunk_merging_flags? https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:243: // fragmentation. This comment section wasn't completely clear. Just doublechecking, are you saying that free chunks are unlocked and thus may be purged by the kernel under memory pressure, but without memory pressure, it won't immediately get released? If there's no memory pressure, then what's the problem? And what does minimizing fragmentation do? Increase the likelihood that we'll be able to free an entire region? https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:244: void MergeAndAddFreeChunk(void* previous_chunk, void* chunk, size_t size) { I defer to you, but I kinda like adding a suffix to indicate this is locked. Makes it more obvious when reading code. MergeAndFreeChunk_Locked() for example. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:275: deletion_callback_.Run(make_scoped_ptr(this)); // Deletes |this|. I dislike this, because in my mind, this is saying that |this| owns itself, which makes little sense. The allocator owns the AshmemRegion. This is simply releasing the region back to the allocator (which will probably delete it). https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:278: void AddFreeChunk(const FreeChunk& free_chunk) { This should be locked already, right? 
AssertAcquired()?

https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable...
base/memory/discardable_memory_allocator_android.cc:311: std::multiset<FreeChunk> free_chunks_;
FYI, I'm spending a lot of time evaluating your data structure choices in doing this review right now. What would help me and future readers/reviewers is explaining what operations you expect, and how your choice in data structures facilitates those operations. There's no documentation anywhere explaining how the actual recycling/allocation works.

https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable...
base/memory/discardable_memory_allocator_android.cc:326: scoped_ptr<DiscardableMemory> DiscardableMemoryAllocator::Allocate(
This allocation scheme sure is naive :) That's fine, although I'd definitely leave a TODO that it's worth looking into. Not saying you have to do the TODO, but it's nice to note it.

https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable...
File base/memory/discardable_memory_allocator_android_unittest.cc (right):

https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable...
base/memory/discardable_memory_allocator_android_unittest.cc:164:
I'm still going through this, but is there a test for something like this:
1) Allocate 16KB chunk (triggers an ashmem region of 32KB)
2) Allocate 16KB chunk (uses the rest of the ashmem region of 32KB)
3) Free the first chunk.
4) Allocate an 8KB chunk. This splits the free chunk into two. Now we have 8KB (used), 8KB (free), 16KB (used)
5) Free the latter 16KB chunk

At this point, what's the state like? 8KB (used), 24KB (free)?

I'm still trying to grok the code, but it looks like 8KB (used), 8KB (free), 16KB (free) from what I can tell. Is that right?
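The five steps above translate almost directly into a unit test. The sketch below is not the test that eventually landed; it assumes the API quoted in this review (an explicit constructor taking a name, Allocate() returning scoped_ptr<DiscardableMemory>, DiscardableMemory::Memory() exposing the address), that both 16KB chunks are laid out contiguously in one region, and that a recycled free chunk is reused from its start.

== begin code fragment ==
#include "base/memory/discardable_memory.h"
#include "base/memory/discardable_memory_allocator_android.h"
#include "base/memory/scoped_ptr.h"
#include "testing/gtest/include/gtest/gtest.h"

namespace base {

TEST(DiscardableMemoryAllocatorTest, FreeChunksAreMergedAcrossEarlierSplit) {
  const size_t kPageSize = 4096;
  DiscardableMemoryAllocator allocator("test");

  // 1) + 2) Two 16KB chunks filling one region.
  scoped_ptr<DiscardableMemory> chunk1 = allocator.Allocate(4 * kPageSize);
  scoped_ptr<DiscardableMemory> chunk2 = allocator.Allocate(4 * kPageSize);
  void* const first_address = chunk1->Memory();

  // 3) + 4) Free the first chunk, then allocate 8KB: the 16KB free chunk is
  // recycled and split into 8KB (used) + 8KB (free).
  chunk1.reset();
  scoped_ptr<DiscardableMemory> chunk3 = allocator.Allocate(2 * kPageSize);
  EXPECT_EQ(first_address, chunk3->Memory());

  // 5) Free the second 16KB chunk. If merging is correct, the trailing 8KB
  // free chunk and the 16KB free chunk now form a single 24KB free chunk,
  // so a 24KB request should be served right after the 8KB used chunk.
  chunk2.reset();
  scoped_ptr<DiscardableMemory> chunk4 = allocator.Allocate(6 * kPageSize);
  EXPECT_EQ(static_cast<char*>(first_address) + 2 * kPageSize,
            static_cast<char*>(chunk4->Memory()));
}

}  // namespace base
== end code fragment ==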
Thanks William! I also got rid of the uintptr_t conversions by making the maps use void* directly. I hadn't done that initially since I had though that hash<> wasn't specialized for void* but I was wrong. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:119: Lock* lock, On 2013/11/19 02:19:43, willchan wrote: > DiscardableMemoryAllocator::AshmemRegion is a nested class, so it has access to > DiscardableMemoryAllocator (the enclosing class)'s private members. Why don't > you just pass DiscardableMemoryAllocator*, rather than a private member pointer > and a callback. This also saves the extra heap allocations. And it makes things > a bit clearer in terms of ownership, since |lock|'s lifetime is not clear. > DiscardableMemoryAllocator and AshmemRegion are already pretty tightly coupled > after all, so the weak layering boundaries don't buy that much, and it's all > internal to this .cc file anyway. I think giving up and accepting the tight > coupling will improve readability since the reader doesn't have to chase down > the indirection to find out what the concrete references are. Agreed, although I'm still doing the heap allocation (that happens very rarely since we are not expecting more than a couple of AshmemRegion instances) since the regions (pushed to a vector) are not copyable. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:134: scoped_ptr<DiscardableMemory> Allocate(size_t client_requested_size, On 2013/11/19 02:19:43, willchan wrote: > Can you document this function? Yes, sorry. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:147: new DiscardableAshmemChunk(this, fd_, last_allocated_chunk_, address, On 2013/11/19 02:19:43, willchan wrote: > Is this the highest allocated chunk? What do you mean by "last"? It's unclear if > "last" includes a just recycled chunk. It looks like it does not, since you > don't update this variable when you can RecycleFreeChunk(). Good point. Renamed to |highest_allocated_chunk_|. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:233: // the value of |chunk_merging_flags|. On 2013/11/19 02:19:43, willchan wrote: > chunk_merging_flags? This was stale. I removed it. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:243: // fragmentation. On 2013/11/19 02:19:43, willchan wrote: > This comment section wasn't completely clear. Just doublechecking, are you > saying that free chunks are unlocked and thus may be purged by the kernel under > memory pressure, but without memory pressure, it won't immediately get released? > If there's no memory pressure, then what's the problem? And what does minimizing > fragmentation do? Increase the likelihood that we'll be able to free an entire > region? Yeah, this is correct. We are trying to minimize fragmentation to avoid causing memory pressure ourselves. The free chunks are unpinned but they can still cause memory pressure if we are keeping too many of them (which would happen without merging them). 
We still want to avoid causing "artificial" memory pressure even if the ashmem subranges are evictable since the ashmem shrinker would unnecessarily evict some ashmem subranges (that might belong to other processes) that could be precious. More space for free chunks also means less space for page cache (which would be one of the first candidates for eviction I believe). More importantly the Android framework is quite disconnected from the kernel unfortunately and has its own low-memory notification system (that is not really "ashmem aware"). If we start causing "artificial" memory pressure Android would start sending those notifications to background apps (and ourselves). This would trigger a lot of activity on the system (and could even increase memory usage). Does that make any sense? I clarified a bit the last sentence. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:244: void MergeAndAddFreeChunk(void* previous_chunk, void* chunk, size_t size) { On 2013/11/19 02:19:43, willchan wrote: > I defer to you, but I kinda like adding a suffix to indicate this is locked. > Makes it more obvious when reading code. MergeAndFreeChunk_Locked() for example. Yeah, good idea. I used the assert at the beginning of the function's body to document that but it was not visible at the call side. I also like suffixing functions with e.g. "OnUIThread" when it makes sense. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:275: deletion_callback_.Run(make_scoped_ptr(this)); // Deletes |this|. On 2013/11/19 02:19:43, willchan wrote: > I dislike this, because in my mind, this is saying that |this| owns itself, > which makes little sense. The allocator owns the AshmemRegion. This is simply > releasing the region back to the allocator (which will probably delete it). Yes. It was a form of indirect self-deletion IMO which is why I did this but I'm fine with a raw pointer too. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:278: void AddFreeChunk(const FreeChunk& free_chunk) { On 2013/11/19 02:19:43, willchan wrote: > This should be locked already, right? AssertAcquired()? Yes, the assert was missing. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:311: std::multiset<FreeChunk> free_chunks_; On 2013/11/19 02:19:43, willchan wrote: > FYI, I'm spending a lot of time evaluating your data structure choices in doing > this review right now. What would help me and future readers/reviewers is > explaining what operations you expect, and how your choice in data structures > facilitates those operations. There's no documentation anywhere explaining how > the actual recycling/allocation works. Yeah, sorry. I added a comment here and below. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:326: scoped_ptr<DiscardableMemory> DiscardableMemoryAllocator::Allocate( On 2013/11/19 02:19:43, willchan wrote: > This allocation scheme sure is naive :) That's fine, although I'd definitely > leave a TODO that it's worth looking into. Not saying you have to do the TODO, > but it's nice to note it. Yeah :) I added a TODO. I favored simplicity here since I don't expect us to have more than 2/3 AshmemRegion instances. 
But I will definitely look into that later. https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/1421001/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:164: On 2013/11/19 02:19:43, willchan wrote: > I'm still going through this, but is there a test for something like this. > 1) Allocate 16KB chunk (triggers an ashmem region of 32KB) > 2) Allocate 16KB chunk (uses the rest of the ashmem region of 32KB) > 3) Free the first chunk. > 4) Allocate a 8KB chunk. This splits the free chunk into two. Now we have 8KB > (used), 8KB (free), 16KB (used) > 5) Free the latter 16KB chunk > > At this point, what's the state like? > 8KB (used), 24KB (free)? > > I'm still trying to grok the code, but it looks like 8KB (used), 8KB (free), > 16KB (free) from what I can tell. Is that right? Wow, amazing catch! I added your unit test and made it pass. This implied quite a lot of changes. Let me know if you can come up with a better approach than what I did.
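Since the _Locked suffix, the lock assertion and the free-chunk bookkeeping come up repeatedly in this exchange, here is a compact, self-contained illustration of those conventions. The types below are simplified stand-ins invented for the sketch; base::Lock, base::AutoLock and AssertAcquired() are the actual base/ primitives being referred to, and the best-fit lower_bound() lookup is an assumption about how the multiset is intended to be used.

== begin code fragment ==
#include <stddef.h>
#include <set>
#include "base/synchronization/lock.h"

// Simplified stand-in for the real free-chunk metadata.
struct FreeChunk {
  void* start;
  size_t size;
  // Ordered by size so that recycling is a best-fit lower_bound() lookup.
  bool operator<(const FreeChunk& other) const { return size < other.size; }
};

class FreeChunkIndex {
 public:
  void AddFreeChunk(const FreeChunk& free_chunk) {
    base::AutoLock auto_lock(lock_);
    AddFreeChunk_Locked(free_chunk);
  }

 private:
  // The "_Locked" suffix makes the precondition visible at the call site;
  // the assert documents it inside the body.
  void AddFreeChunk_Locked(const FreeChunk& free_chunk) {
    lock_.AssertAcquired();
    free_chunks_.insert(free_chunk);
  }

  // Returns (and removes) the smallest free chunk that can hold |size|
  // bytes; |start| is NULL when no free chunk is large enough.
  FreeChunk ReuseFreeChunk_Locked(size_t size) {
    lock_.AssertAcquired();
    const FreeChunk key = { NULL, size };
    std::multiset<FreeChunk>::iterator it = free_chunks_.lower_bound(key);
    if (it == free_chunks_.end())
      return key;  // key.start is NULL: nothing suitable was found.
    const FreeChunk result = *it;
    free_chunks_.erase(it);
    return result;
  }

  base::Lock lock_;
  std::multiset<FreeChunk> free_chunks_;
};
== end code fragment ==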
Ping William :) Please diff against patch set 30. I uploaded a couple of patch sets after that as follow ups.
Thanks for the reminder. I'm flying back from Tokyo and will look at this on Tuesday. If you don't hear from me then, please ping again. Sorry for the latency.
Great, thanks! On Mon, Nov 25, 2013 at 9:48 AM, William Chan (陈智昌) <willchan@chromium.org> wrote: > Thanks for the reminder. I'm flying back from Tokyo and will look at this on Tuesday. If you don't hear from me then, please ping again. Sorry for the latency.
I'm sorry, I'm too jetlagged to think straight. I'm going to sleep and look at this some more later. https://codereview.chromium.org/25293002/diff/1761001/base/base.gypi File base/base.gypi (right): https://codereview.chromium.org/25293002/diff/1761001/base/base.gypi#newcode289 base/base.gypi:289: 'memory/discardable_memory_allocator_android.cc', Nit: please sort these alphabetically...c before h. https://codereview.chromium.org/25293002/diff/1761001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1761001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:113: // |client_requested_size|). Can you add a DCHECK to enforce this invariant? https://codereview.chromium.org/25293002/diff/1761001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:180: scoped_ptr<DiscardableMemory> RecycleFreeChunk_Locked( Nit: WDYT about s/Recycle/Reuse/? My mental model for recycling is the act of throwing something away in a recycling bin so it can be reused in the future, whereas here we're actually reusing it in the present. https://codereview.chromium.org/25293002/diff/1761001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:202: std::make_pair(chunk_it->start, chunk_it->previous_chunk)); I'm trying to understand this. It looks like the previous chunk information is duplicated. It's stored as a const in FreeChunk, and in the previous_chunk_for_used_chunk_ map. It looks to me like various operations can modify previous_chunk_for_used_chunk_, such as SplitFreeChunk_Locked(). Doesn't that mean it's wrong to use chunk_it->previous_chunk here? Hm...let me see if I can concoct an example and ask you to write a test for it so we can validate it works. 1) Allocate 16KB chunk (triggers an ashmem region of 32KB, FreeChunk::previous_chunk is NULL) 2) Allocate 16KB chunk (uses the rest of the ashmem region of 32KB, FreeChunk::previous_chunk points to chunk in (1)) 3) Free the first chunk from (1). 4) Allocate a 8KB chunk. This splits the free chunk into two. Now we have 8KB (used), 8KB (free), 16KB (used) [its FreeChunk::previous_chunk points to the front of the split chunk]. 5) Allocate a 8KB chunk. Now we have 8KB (used), 8KB (used), 16KB (used)[but its FreeChunk::previous_chunk member now points to the wrong chunk] I guess to trigger a problem, you'd have to reuse the 16KB chunk to set the dangling FreeChunk::previous_chunk member into |previous_chunk_for_used_chunk_|. Ultimately, I think that we shouldn't duplicate the previous_chunk information in two places, and it should not be const, since it can obviously change over the lifetime of a chunk.
Thanks William! I just realized that I had done a rebase, sorry for that. https://chromiumcodereview.appspot.com/25293002/diff/1761001/base/memory/disc... File base/memory/discardable_memory_allocator_android.cc (right): https://chromiumcodereview.appspot.com/25293002/diff/1761001/base/memory/disc... base/memory/discardable_memory_allocator_android.cc:113: // |client_requested_size|). On 2013/11/27 06:52:07, willchan wrote: > Can you add a DCHECK to enforce this invariant? Done. https://chromiumcodereview.appspot.com/25293002/diff/1761001/base/memory/disc... base/memory/discardable_memory_allocator_android.cc:180: scoped_ptr<DiscardableMemory> RecycleFreeChunk_Locked( On 2013/11/27 06:52:07, willchan wrote: > Nit: WDYT about s/Recycle/Reuse/? My mental model for recycling is the act of > throwing something away in a recycling bin so it can be reused in the future, > whereas here we're actually reusing it in the present. Done. Egor wasn't very comfortable either with the term recycle so I guess I misused it :) https://chromiumcodereview.appspot.com/25293002/diff/1761001/base/memory/disc... base/memory/discardable_memory_allocator_android.cc:202: std::make_pair(chunk_it->start, chunk_it->previous_chunk)); On 2013/11/27 06:52:07, willchan wrote: > I'm trying to understand this. It looks like the previous chunk information is > duplicated. It's stored as a const in FreeChunk, and in the > previous_chunk_for_used_chunk_ map. It looks to me like various operations can > modify previous_chunk_for_used_chunk_, such as SplitFreeChunk_Locked(). Doesn't > that mean it's wrong to use chunk_it->previous_chunk here? > > Hm...let me see if I can concoct an example and ask you to write a test for it > so we can validate it works. > 1) Allocate 16KB chunk (triggers an ashmem region of 32KB, > FreeChunk::previous_chunk is NULL) > 2) Allocate 16KB chunk (uses the rest of the ashmem region of 32KB, > FreeChunk::previous_chunk points to chunk in (1)) > 3) Free the first chunk from (1). > 4) Allocate a 8KB chunk. This splits the free chunk into two. Now we have 8KB > (used), 8KB (free), 16KB (used) [its FreeChunk::previous_chunk points to the > front of the split chunk]. > 5) Allocate a 8KB chunk. Now we have 8KB (used), 8KB (used), 16KB (used)[but its > FreeChunk::previous_chunk member now points to the wrong chunk] > > I guess to trigger a problem, you'd have to reuse the 16KB chunk to set the > dangling FreeChunk::previous_chunk member into |previous_chunk_for_used_chunk_|. > > Ultimately, I think that we shouldn't duplicate the previous_chunk information > in two places, and it should not be const, since it can obviously change over > the lifetime of a chunk. I added your unit test (please make sure that it fully reflects what you wrote since I might have misunderstood) and it seems to pass this time :) The thing you might have missed is the fact that the previous chunk information is not duplicated strictly speaking. |previous_chunk_for_used_chunk_| only covers the chunks being *used* so the previous chunk information is rather moved from |previous_chunk_for_used_chunk_| to |FreeChunk::previous_chunk| when a chunk gets deleted by the client (and also moved back from |FreeChunk::previous_chunk| to |previous_chunk_for_used_chunk_| when a chunk gets recycled). I might be the one being jetlagged though :) I uploaded a separate patch set (#36) that tries to remove |FreeChunk::previous_chunk| but unfortunately fails currently since a DCHECK is failing. 
To be honest, I find the code more complicated without |FreeChunk::previous_chunk|. Let me know which version you prefer or find simpler. If patch set 36 looks better to you, I will make it pass the unit tests, but as you may have gathered I have a slight preference for patch set 38 (which is a revert of patch set 36 plus the rest of your comments addressed). What do you think?
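For readers following the thread, the bookkeeping described above boils down to something like this minimal sketch (simplified, hypothetical names; standard containers in place of Chromium's hash_map; locking and error handling omitted). The point is that a chunk's previous-chunk pointer lives in the used-chunk map while the chunk is in use and inside the FreeChunk record while the chunk is free; it is moved between the two, never duplicated:

  #include <cstddef>
  #include <map>
  #include <unordered_map>

  struct FreeChunk {
    void* previous_chunk;  // Chunk physically preceding this one, or null.
    void* start;
    size_t size;
  };

  std::unordered_map<void*, void*> used_to_previous_chunk;  // Used chunks only.
  std::map<void*, FreeChunk> free_chunk_for_address;        // Free chunks only.

  // Client frees a chunk: move its previous-chunk info into the FreeChunk.
  void OnChunkFreed(void* start, size_t size) {
    void* previous = nullptr;
    std::unordered_map<void*, void*>::iterator it =
        used_to_previous_chunk.find(start);
    if (it != used_to_previous_chunk.end()) {
      previous = it->second;
      used_to_previous_chunk.erase(it);
    }
    const FreeChunk free_chunk = {previous, start, size};
    free_chunk_for_address[start] = free_chunk;
  }

  // A free chunk is reused for a new allocation: move the info back.
  void OnFreeChunkReused(const FreeChunk& free_chunk) {
    free_chunk_for_address.erase(free_chunk.start);
    used_to_previous_chunk[free_chunk.start] = free_chunk.previous_chunk;
  }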
https://codereview.chromium.org/25293002/diff/1761001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1761001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:202: std::make_pair(chunk_it->start, chunk_it->previous_chunk)); On 2013/11/27 14:22:10, Philippe wrote: > On 2013/11/27 06:52:07, willchan wrote: > > I'm trying to understand this. It looks like the previous chunk information is > > duplicated. It's stored as a const in FreeChunk, and in the > > previous_chunk_for_used_chunk_ map. It looks to me like various operations can > > modify previous_chunk_for_used_chunk_, such as SplitFreeChunk_Locked(). > Doesn't > > that mean it's wrong to use chunk_it->previous_chunk here? > > > > Hm...let me see if I can concoct an example and ask you to write a test for it > > so we can validate it works. > > 1) Allocate 16KB chunk (triggers an ashmem region of 32KB, > > FreeChunk::previous_chunk is NULL) > > 2) Allocate 16KB chunk (uses the rest of the ashmem region of 32KB, > > FreeChunk::previous_chunk points to chunk in (1)) > > 3) Free the first chunk from (1). > > 4) Allocate a 8KB chunk. This splits the free chunk into two. Now we have 8KB > > (used), 8KB (free), 16KB (used) [its FreeChunk::previous_chunk points to the > > front of the split chunk]. > > 5) Allocate a 8KB chunk. Now we have 8KB (used), 8KB (used), 16KB (used)[but > its > > FreeChunk::previous_chunk member now points to the wrong chunk] > > > > I guess to trigger a problem, you'd have to reuse the 16KB chunk to set the > > dangling FreeChunk::previous_chunk member into > |previous_chunk_for_used_chunk_|. > > > > Ultimately, I think that we shouldn't duplicate the previous_chunk information > > in two places, and it should not be const, since it can obviously change over > > the lifetime of a chunk. > > I added your unit test (please make sure that it fully reflects what you wrote > since I might have misunderstood) and it seems to pass this time :) The thing > you might have missed is the fact that the previous chunk information is not > duplicated strictly speaking. |previous_chunk_for_used_chunk_| only covers the > chunks being *used* so the previous chunk information is rather moved from > |previous_chunk_for_used_chunk_| to |FreeChunk::previous_chunk| when a chunk > gets deleted by the client (and also moved back from |FreeChunk::previous_chunk| > to |previous_chunk_for_used_chunk_| when a chunk gets recycled). > I might be the one being jetlagged though :) I uploaded a separate patch set > (#36) that tries to remove |FreeChunk::previous_chunk| but unfortunately fails > currently since a DCHECK is failing. I find it more complicated without > |FreeChunk::previous_chunk| to be honest. Let me know which version you > prefer/looks simpler to you. If patch set 36 looks better to you then I will > make it pass the unit tests but as you may have understood I have a slight > preference for patch set 38 (which is a revert of patch set 36 + addresses the > rest of your comments). > > What do you think? I think I was too jetlagged last night and wrong :) Thanks for the explanation and test. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... 
base/memory/discardable_memory_allocator_android.cc:137: previous_chunk_for_used_chunk_.insert( Nit: I don't know if my suggestion will be any better, but it took me a little bit earlier to understand this member variable. I had to go look it up, see it was a hash_map, and then look at the uses to figure out which was the key and which was the value. FWIW, I'd go with |used_to_previous_chunk_map_|. But up to you. Naming is Hard (TM). https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:177: highest_allocated_chunk_(NULL) { Can you DCHECK the parameters and comment that |allocator| must outlive AshmemRegion? https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:190: const size_t fragmentation_bytes = chunk_it->size - client_requested_size; I'm trying to understand why you're subtracting the chunk size by |client_requested_size| here rather than |actual_size|. Here is the case I'm considering: Free chunk of size 10KB. If trying to allocate <=2KB, then it'll be split, but it won't be split when allocating 2KB<x<=4KB bytes. That's even though the actual size allocated in each is 4KB due to page alignment. I'm not objecting to this algorithmic choice...I just don't understand the pros/cons here. Why is it better to split the chunk in the <=2KB case, but not the 2KB<x<=4KB case? Comments would be appreciated. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:192: SplitFreeChunk_Locked(*chunk_it, actual_size); OK, I thought about it. Keep it if you prefer, but I'd like to restructure the code thusly: * Kill off SplitFreeChunk_Locked(), move the code in here. * Actually remove the FreeChunk (update |free_chunk_for_address_| and |free_chunks_|) sooner rather than at the end of the function. Store a copy of it in a local variable. Ah, I see you have a RemoveFreeChunk_Locked() function. That sounds perfect to me. My reasoning is, although I like smaller, more granular functions, I feel like these are fairly tightly coupled. SplitFreeChunk_Locked() isn't actually splitting the chunk. It's adding another (smaller) FreeChunk and updating the next used chunk to point to the new smaller one. But at the end of this function, we have this weird state where the data structures have two FreeChunks, one of which is subsumed by the other larger one, which is in process of being removed. That's what I mean by coupling. I then have to look to make sure that there are no other consumers for this SplitFreeChunk_Locked() function, and have to make sure there is never any new consumer that violates the invariants. That's why I think it's simpler to just move it into the calling function. I think this also enables actually removing the FreeChunk from the various data structures before adding the new one. Up to you though if you feel the current/another way is cleaner. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:218: const size_t new_chunk_size = free_chunk.size - allocation_size; Please add a DCHECK_GT(free_chunk.size, allocation_size). https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:255: while (previous_chunk) { I don't understand this while loop. 
There shouldn't be more than one _contiguous_ previous _free_ chunk, right? Otherwise, it should already have been merged... If previous_chunk is non-NULL on the 2nd iteration, I think it must be a used chunk. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:265: while (true) { Ditto, unsure if the while loop is necessary. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:279: DCHECK_EQ(size_, new_free_chunk_size); Unneeded, this was just checked above. It'd be good to DCHECK_EQ(base_, first_free_chunk) though. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:280: DCHECK(free_chunks_.empty() && free_chunk_for_address_.empty()); previous_chunk_for_used_chunk_ should be empty too, right? https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:312: size_t offset_; Can you move this next to highest_allocated_chunk_? And comment what it is (the end of the highest allocated chunk). https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:344: DCHECK(thread_checker_.CalledOnValidThread()); I think it'd be good to DCHECK(ashmem_regions_.empty()). Might catch a leak in the tests that way. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:352: // couple of AshmemRegion instances in practice though. Can we throw up a DCHECK() for this, so people hit the DCHECK and notice that they should be optimizing this? Pick whatever arbitrary constant you think makes sense. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:383: ashmem_regions_.resize(ashmem_regions_.size() - 1); why not pop_back() instead? https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android.h (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.h:37: kMinAshmemRegionSize = 32 * 1024 * 1024, When the allocation size is larger than this, is there any benefit to using DiscardableMemoryAllocator? Since it eagerly frees memory to prevent memory pressure, then anything larger than this constant will be immediately freed. I see you have a test for this, and it's not obvious to me why you wouldn't just create a DiscardableMemoryAndroidSimple for it instead. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:67: memory.reset(); It's not obvious to me why this shouldn't result in the AshmemRegion getting deleted. After all, it now has 0 used chunks. It's true, it's only got one FreeChunk, but the rest of the memory is not counted as a FreeChunk, but |offset_| hasn't reached |base_| + |size_| yet. Is the intent here to keep around the memory in case it gets reused? I thought we wanted to free as much memory as possible when we can. 
https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:119: memory2.reset(); Similar to above, it's not clear to me why this shouldn't kill the whole AshmemRegion. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:120: memory1.reset(allocator_.Allocate(3 * kPageSize).release()); Why isn't this just memory1 = allocator_.Allocate(3 * kPageSize); https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:165: } It'd be cool if you added a third one of these where you ended up with: free, used, free. When you freed the used chunk, that'd merge all of them, and then also delete the entire AshmemRegion. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_android.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_android.cc:117: const size_t kPageSize = 4096; Is there any way to whitelist this for known architectures so if someone tries to build this on an architecture that has a different page size, this will fail? Also, what happens if/when Android starts using hugepages? I'm fine with not solving these problems now, but I think it's be nice to make this code fail loudly (compile failure?) when that happens in the future.
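To keep the allocation path being reviewed here concrete, this is roughly the best-fit-and-split scheme under discussion, as a standalone sketch (the names, the threshold value, and the exact split policy are illustrative only; the real code also updates the address maps and the next chunk's previous-chunk pointer). Free chunks sit in a std::multiset ordered by size, the closest-size match is found with lower_bound() in O(log n), and the chunk is split when handing it out whole would waste more than the threshold:

  #include <cstddef>
  #include <set>

  struct FreeChunk {
    void* start;
    size_t size;
    bool operator<(const FreeChunk& other) const { return size < other.size; }
  };

  // Illustrative threshold: above this much unusable slack, split the chunk.
  const size_t kMaxChunkFragmentationBytes = 4096 - 1;

  // Returns the start of the reused chunk, or null if no free chunk is big
  // enough. |actual_size| is the page-aligned size actually handed out.
  void* ReuseFreeChunk(std::multiset<FreeChunk>* free_chunks,
                       size_t client_requested_size,
                       size_t actual_size) {
    const FreeChunk key = {nullptr, actual_size};
    std::multiset<FreeChunk>::iterator it = free_chunks->lower_bound(key);
    if (it == free_chunks->end())
      return nullptr;
    const FreeChunk chunk_to_reuse = *it;
    free_chunks->erase(it);
    // Bytes the client will never see if the whole chunk is handed out as-is.
    const size_t fragmentation_bytes =
        chunk_to_reuse.size - client_requested_size;
    if (fragmentation_bytes > kMaxChunkFragmentationBytes) {
      // Split: reuse the front |actual_size| bytes, return the tail as free.
      const FreeChunk tail = {
          static_cast<char*>(chunk_to_reuse.start) + actual_size,
          chunk_to_reuse.size - actual_size};
      free_chunks->insert(tail);
    }
    return chunk_to_reuse.start;
  }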
Thanks a lot William! Great catches as usual. I feel quite retarded now :) https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:137: previous_chunk_for_used_chunk_.insert( On 2013/11/28 06:16:37, willchan wrote: > Nit: I don't know if my suggestion will be any better, but it took me a little > bit earlier to understand this member variable. I had to go look it up, see it > was a hash_map, and then look at the uses to figure out which was the key and > which was the value. FWIW, I'd go with |used_to_previous_chunk_map_|. But up to > you. Naming is Hard (TM). Yeah, I also renamed |free_chunk_for_address_| accordingly for consistency. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:177: highest_allocated_chunk_(NULL) { On 2013/11/28 06:16:37, willchan wrote: > Can you DCHECK the parameters and comment that |allocator| must outlive > AshmemRegion? Sure. One of the good points about immutability is that the parameters can be checked only once at construction. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:190: const size_t fragmentation_bytes = chunk_it->size - client_requested_size; On 2013/11/28 06:16:37, willchan wrote: > I'm trying to understand why you're subtracting the chunk size by > |client_requested_size| here rather than |actual_size|. Here is the case I'm > considering: > Free chunk of size 10KB. If trying to allocate <=2KB, then it'll be split, but > it won't be split when allocating 2KB<x<=4KB bytes. That's even though the > actual size allocated in each is 4KB due to page alignment. > > I'm not objecting to this algorithmic choice...I just don't understand the > pros/cons here. Why is it better to split the chunk in the <=2KB case, but not > the 2KB<x<=4KB case? Comments would be appreciated. I might be missing something here but my initial intent was to consider as fragmentation all the bytes that would not be visible by the client. If the client asks for 2 KBytes and the free chunk size is 4 KBytes, then there is a waste of 2 KBytes (i.e. nobody would use those 2 KBytes). If we use |actual_size| to do the computation then we would not see this waste (i.e. |fragmentation_bytes| would be 0). I agree that the current value of |kMaxChunkFragmentationBytes| (8192) is quite arbitrary/wrong though and I'm quite tempted to decrease it since this allocator tries to be optimized for memory rather than speed. We can see later if splitting chunks too often leads to performance issues. I added a comment at the top where the constant is defined. Also I will have a follow up CLs to report statistics. Two of them will be fragmentation (in pinned memory) caused by the client (since I'm now forbidding the allocator to cause fragmentation when a chunk gets reused) and "discardable fragmentation" (unpinned memory in the free chunks). Does that make any sense? What do you think? https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:192: SplitFreeChunk_Locked(*chunk_it, actual_size); On 2013/11/28 06:16:37, willchan wrote: > OK, I thought about it. 
Keep it if you prefer, but I'd like to restructure the > code thusly: > * Kill off SplitFreeChunk_Locked(), move the code in here. > * Actually remove the FreeChunk (update |free_chunk_for_address_| and > |free_chunks_|) sooner rather than at the end of the function. Store a copy of > it in a local variable. Ah, I see you have a RemoveFreeChunk_Locked() function. > That sounds perfect to me. > > My reasoning is, although I like smaller, more granular functions, I feel like > these are fairly tightly coupled. SplitFreeChunk_Locked() isn't actually > splitting the chunk. It's adding another (smaller) FreeChunk and updating the > next used chunk to point to the new smaller one. But at the end of this > function, we have this weird state where the data structures have two > FreeChunks, one of which is subsumed by the other larger one, which is in > process of being removed. That's what I mean by coupling. I then have to look to > make sure that there are no other consumers for this SplitFreeChunk_Locked() > function, and have to make sure there is never any new consumer that violates > the invariants. That's why I think it's simpler to just move it into the calling > function. I think this also enables actually removing the FreeChunk from the > various data structures before adding the new one. > > Up to you though if you feel the current/another way is cleaner. Agreed. I had externalized free chunk splitting to improve "visual readability" by making this function smaller but it was at the cost of actual readability. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:218: const size_t new_chunk_size = free_chunk.size - allocation_size; On 2013/11/28 06:16:37, willchan wrote: > Please add a DCHECK_GT(free_chunk.size, allocation_size). Yeah good point. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:255: while (previous_chunk) { On 2013/11/28 06:16:37, willchan wrote: > I don't understand this while loop. There shouldn't be more than one > _contiguous_ previous _free_ chunk, right? Otherwise, it should already have > been merged... > > If previous_chunk is non-NULL on the 2nd iteration, I think it must be a used > chunk. Yeah, you're right. I added a DCHECK though :) https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:265: while (true) { On 2013/11/28 06:16:37, willchan wrote: > Ditto, unsure if the while loop is necessary. Done. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:279: DCHECK_EQ(size_, new_free_chunk_size); On 2013/11/28 06:16:37, willchan wrote: > Unneeded, this was just checked above. It'd be good to DCHECK_EQ(base_, > first_free_chunk) though. Yeah, good point. The DCHECK you suggested actually raised a bug :) (which made me move the code updating the next contiguous chunk to AddFreeChunk()). https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:280: DCHECK(free_chunks_.empty() && free_chunk_for_address_.empty()); On 2013/11/28 06:16:37, willchan wrote: > previous_chunk_for_used_chunk_ should be empty too, right? Yes, I had forgotten to update this code when I added |previous_chunk_for_used_chunk_|. 
https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:312: size_t offset_; On 2013/11/28 06:16:37, willchan wrote: > Can you move this next to highest_allocated_chunk_? And comment what it is (the > end of the highest allocated chunk). Done. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:344: DCHECK(thread_checker_.CalledOnValidThread()); On 2013/11/28 06:16:37, willchan wrote: > I think it'd be good to DCHECK(ashmem_regions_.empty()). Might catch a leak in > the tests that way. Great idea! This might also catch uses after free btw since the used chunks have a pointer to their ashmem region. In practice this code path will be very rarely/never taken in production though since this is used in a leaky lazy instance but anyway, good idea. Also, this caught another small bug which made me change |whole_ashmem_region_is_free| in the chunks merging function. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:352: // couple of AshmemRegion instances in practice though. On 2013/11/28 06:16:37, willchan wrote: > Can we throw up a DCHECK() for this, so people hit the DCHECK and notice that > they should be optimizing this? Pick whatever arbitrary constant you think makes > sense. Good idea! https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:383: ashmem_regions_.resize(ashmem_regions_.size() - 1); On 2013/11/28 06:16:37, willchan wrote: > why not pop_back() instead? Yeah good point :) I added ScopedVector::pop_back(). Let me know if you want me to revert that change. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android.h (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.h:37: kMinAshmemRegionSize = 32 * 1024 * 1024, On 2013/11/28 06:16:37, willchan wrote: > When the allocation size is larger than this, is there any benefit to using > DiscardableMemoryAllocator? Since it eagerly frees memory to prevent memory > pressure, then anything larger than this constant will be immediately freed. I > see you have a test for this, and it's not obvious to me why you wouldn't just > create a DiscardableMemoryAndroidSimple for it instead. Yeah, you're right. I slightly modified the heuristic in discardable_memory_android.cc to take this into account (note that I might also tune the heuristic a little more later on for e.g. small allocations). I kept the unit test though since in some cases we might have to fallback to the allocator if FD usage gets really too high. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:67: memory.reset(); On 2013/11/28 06:16:37, willchan wrote: > It's not obvious to me why this shouldn't result in the AshmemRegion getting > deleted. After all, it now has 0 used chunks. It's true, it's only got one > FreeChunk, but the rest of the memory is not counted as a FreeChunk, but > |offset_| hasn't reached |base_| + |size_| yet. 
Is the intent here to keep > around the memory in case it gets reused? I thought we wanted to free as much > memory as possible when we can. You're right :) There was no reason not to delete the AshmemRegion. For some reason my limited brain made me restrict the region deletion to the case where the region gets full first and then gets empty (as FreeingWholeAshmemRegionClosesAshmem testified below). Weird :) https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:119: memory2.reset(); On 2013/11/28 06:16:37, willchan wrote: > Similar to above, it's not clear to me why this shouldn't kill the whole > AshmemRegion. Done. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:120: memory1.reset(allocator_.Allocate(3 * kPageSize).release()); On 2013/11/28 06:16:37, willchan wrote: > Why isn't this just memory1 = allocator_.Allocate(3 * kPageSize); Yeah, no idea why I wrote this :) https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:165: } On 2013/11/28 06:16:37, willchan wrote: > It'd be cool if you added a third one of these where you ended up with: free, > used, free. > When you freed the used chunk, that'd merge all of them, and then also delete > the entire AshmemRegion. Yeah, good idea. Done. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_android.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_android.cc:117: const size_t kPageSize = 4096; On 2013/11/28 06:16:37, willchan wrote: > Is there any way to whitelist this for known architectures so if someone tries > to build this on an architecture that has a different page size, this will fail? > Also, what happens if/when Android starts using hugepages? I'm fine with not > solving these problems now, but I think it's be nice to make this code fail > loudly (compile failure?) when that happens in the future. Yeah good idea. I added a DCHECK. This is not ideal compared to a COMPILE_ASSERT but it should fail early enough.
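Concretely, that runtime guard can be as small as the following (plain assert() standing in for Chromium's DCHECK; a sketch, not the actual patch):

  #include <cassert>
  #include <cstddef>
  #include <unistd.h>

  const size_t kPageSize = 4096;

  // Fails loudly on any architecture/kernel configuration where the actual
  // page size differs (e.g. 64K pages or a future huge-page setup), instead
  // of silently mis-aligning ashmem chunks.
  void CheckPageSizeAssumption() {
    assert(kPageSize == static_cast<size_t>(sysconf(_SC_PAGESIZE)));
  }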
Can you move out the ScopedVector change to a separate changelist? Feel free to TBR=willchan, saying that I reviewed it in this codereview, and I'll send the approval after. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:190: const size_t fragmentation_bytes = chunk_it->size - client_requested_size; On 2013/11/28 16:42:53, Philippe wrote: > On 2013/11/28 06:16:37, willchan wrote: > > I'm trying to understand why you're subtracting the chunk size by > > |client_requested_size| here rather than |actual_size|. Here is the case I'm > > considering: > > Free chunk of size 10KB. If trying to allocate <=2KB, then it'll be split, but > > it won't be split when allocating 2KB<x<=4KB bytes. That's even though the > > actual size allocated in each is 4KB due to page alignment. > > > > I'm not objecting to this algorithmic choice...I just don't understand the > > pros/cons here. Why is it better to split the chunk in the <=2KB case, but not > > the 2KB<x<=4KB case? Comments would be appreciated. > > I might be missing something here but my initial intent was to consider as > fragmentation all the bytes that would not be visible by the client. If the > client asks for 2 KBytes and the free chunk size is 4 KBytes, then there is a > waste of 2 KBytes (i.e. nobody would use those 2 KBytes). If we use > |actual_size| to do the computation then we would not see this waste (i.e. > |fragmentation_bytes| would be 0). I agree that the current value of > |kMaxChunkFragmentationBytes| (8192) is quite arbitrary/wrong though and I'm > quite tempted to decrease it since this allocator tries to be optimized for > memory rather than speed. We can see later if splitting chunks too often leads > to performance issues. I added a comment at the top where the constant is > defined. Also I will have a follow up CLs to report statistics. Two of them will > be fragmentation (in pinned memory) caused by the client (since I'm now > forbidding the allocator to cause fragmentation when a chunk gets reused) and > "discardable fragmentation" (unpinned memory in the free chunks). > > Does that make any sense? What do you think? I agree that the current calculation is more accurate in terms of determining unused bytes / fragmentation. I guess what I was thinking is it's not how many bytes get wasted that matters, but rather how many bytes can be gained by splitting. I'm having difficulty thinking through the implications of this fragmentation. I'm inclined to let you do what you want here and take on the responsibility of fixing it once you have data on actual performance. https://codereview.chromium.org/25293002/diff/1901001/base/memory/scoped_vect... File base/memory/scoped_vector.h (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/scoped_vect... base/memory/scoped_vector.h:68: DCHECK(!empty()); Add logging.h for this.
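For context on that ScopedVector change, here is a generic sketch of what a pop_back() on an owning pointer vector needs to do (an illustrative stand-in class, not the actual base/memory/scoped_vector.h code): delete the owned element before shrinking, guarded by the emptiness check requested above.

  #include <cassert>
  #include <cstddef>
  #include <vector>

  // Minimal stand-in for a vector that owns its pointers.
  template <typename T>
  class OwningPtrVector {
   public:
    ~OwningPtrVector() { clear(); }
    void push_back(T* ptr) { v_.push_back(ptr); }
    bool empty() const { return v_.empty(); }
    void pop_back() {
      assert(!empty());  // Mirrors the DCHECK(!empty()) mentioned above.
      delete v_.back();  // The element is owned, so it is deleted here; this is
      v_.pop_back();     // what distinguishes it from std::vector::pop_back().
    }
    void clear() {
      for (size_t i = 0; i < v_.size(); ++i)
        delete v_[i];
      v_.clear();
    }
   private:
    std::vector<T*> v_;
  };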
Sorry, it's Thanksgiving so I didn't finish this as thoroughly. I'll do another pass later. And don't worry about the issues I pointed out earlier, I think it's normal for this kinda code to have tons of edge cases that can be hard to keep track of when authoring the code. Sometimes you need a fresh set of eyes. If I wrote this, I'm sure I'd make a number of similar/different mistakes myself :) https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:199: const FreeChunk free_chunk = RemoveFreeChunkFromIterator_Locked( I go back and forth on this, but part of me wonders if this would be clearer if you named free_chunk as reused_chunk or chunk_to_reuse or something. It's free at this point, but later on it sometimes is a bit confusing when the chunk is split and there's another free chunk. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:202: return scoped_ptr<DiscardableMemory>(); Nit: I'd prefer a newline here. Up to you though. I think the reason I prefer it is my brain expects a mental break after an early function return. If your brain disagrees, feel free to ignore. This is purely personal style. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:263: // There should not be more contiguous free chunks. Nit: just to be explicit, perhaps "There should not be more contiguous previous free chunks." Since there may be a contiguous "next" free chunk. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:267: // Merge with the next chunk. Nit: "Merge with the next chunk if free and present."? https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:50: // production though. I'd explain why...that they should just use DiscardableMemoryAndroidSimple instead. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:168: TEST_F(DiscardableMemoryAllocatorTest, MergeFreeChunksAdvanced3) { I try to avoid numbers if possible, although sometimes it's unavoidable. In this case, maybe name MergeFreeChunksAndDeleteAshmemRegion? https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_android.cc (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:114: // This allows to delete ashmem regions individually by closing the ashmem file grammar nit: s/to delete/deleting/ https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:116: // allows us to do so. Or when the allocation size would require an entire ashmem region. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:119: // (i.e. releasing the physycal pages backing) individiual regions. 
spelling nits: * s/physycal/physical/ * s/individiual/individual/ https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:119: // (i.e. releasing the physycal pages backing) individiual regions. spelling nits: s/physycal/physical/ s/individiual/individual/ https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:127: // FD usage is too high no matter how big the requested size is. I don't fully understand this algorithm. The allocator helps conserve FDs, so I understand that as FDs get more constrained, you'd want to use the allocator more. The allocator only helps conserve FDs when the requested size is small enough that it doesn't fully use up kMinAshmemRegionSize. When size > kMinAshmemRegionSize - kPageSize, the allocator won't help conserve FDs. Therefore, I don't understand the point of kMaxFDUsageRateForVeryLargeAllocations. Again, I defer to you in this stylistic preference, but my suggestion is to re-merge this function into the other function and do: if size >= kMinAshmemRegionSize try DiscardableMemoryAndroidSimple else if ashmem_fd_count is low enough try DiscardableMemoryAndroidSimple try DiscardableMemoryAllocator::Allocate() https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:133: // allocations to systematically go through individial ashmem regions. spelling nit: s/individial/individual/ https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:134: if (size > internal::DiscardableMemoryAllocator::kMinAshmemRegionSize) size >= ... perhaps? Even more precise would be size > kMinAshmemRegionSize - kPageSize, although that's arguably a bit of a layering violation since the page alignment could arguably be a implementation internal detail of the allocator. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:149: return (size + kPageSize - 1) & mask; I'm not sure, but this might be a security issue due to overflow. I suggest failing the alloc if this wraps around. It's better than returning a smaller size than requested. It's probably a good idea to write a test that tries to allocate sizeof(size_t) and make sure it behaves as expected. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:230: if (ShouldUseAllocator(size)) It occurs to me now that all this checking of the FD limit is locked, but the lock is not held while the actual allocation happens, so the allocation may be racy and exceed the limit, but probably not by much. On the upside, it reduces the amount of contention on allocation. I'm fine with keeping as is if you think the slight amount of slop around respecting the limit is worth the reduction in contention, but it should be documented so people don't look at this code later and wonder if it's buggy. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_android.h (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.h:25: size_t off, Nit: s/off/offset/, just to be explicit. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... 
base/memory/discardable_memory_android.h:29: bool UnlockAshmemRegion(int fd, size_t off, size_t size, const void* address); Nit: ditto here on offset
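Spelling out the Create() structure suggested above as a compilable sketch (the helper names, the FD threshold, and std::unique_ptr in place of scoped_ptr are all stand-ins, not the real discardable_memory_android.cc code):

  #include <cstddef>
  #include <memory>

  struct DiscardableMemory {};  // Placeholder for the real interface.

  const size_t kMinAshmemRegionSize = 32 * 1024 * 1024;
  const int kLowAshmemFDUsageThreshold = 32;  // Illustrative value only.

  // Stubs standing in for the real helpers discussed in the review.
  std::unique_ptr<DiscardableMemory> AllocateIndividualAshmemRegion(size_t) {
    return nullptr;  // Stub: the real helper creates and maps one ashmem region.
  }
  std::unique_ptr<DiscardableMemory> AllocateFromSharedRegions(size_t) {
    return nullptr;  // Stub: the real helper delegates to the allocator.
  }
  int GetCurrentNumberOfAshmemFDs() {
    return 0;  // Stub: the real helper tracks live ashmem file descriptors.
  }

  std::unique_ptr<DiscardableMemory> CreateDiscardableMemory(size_t size) {
    // A request this large fills a whole region anyway, so a dedicated,
    // individually closable ashmem region is strictly better.
    if (size >= kMinAshmemRegionSize)
      return AllocateIndividualAshmemRegion(size);
    // While FD usage is low, prefer one region per allocation so its pages
    // can be released immediately by closing the descriptor.
    if (GetCurrentNumberOfAshmemFDs() < kLowAshmemFDUsageThreshold) {
      std::unique_ptr<DiscardableMemory> memory =
          AllocateIndividualAshmemRegion(size);
      if (memory)
        return memory;
    }
    // Under FD pressure (or if the direct allocation failed), fall back to
    // the shared-region allocator.
    return AllocateFromSharedRegions(size);
  }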
Thanks William, have a nice Thanksgiving! FYI, I also landed the pop_back() change as r237921 and did a rebase. https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1861009/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:190: const size_t fragmentation_bytes = chunk_it->size - client_requested_size; On 2013/11/28 17:44:19, willchan wrote: > On 2013/11/28 16:42:53, Philippe wrote: > > On 2013/11/28 06:16:37, willchan wrote: > > > I'm trying to understand why you're subtracting the chunk size by > > > |client_requested_size| here rather than |actual_size|. Here is the case I'm > > > considering: > > > Free chunk of size 10KB. If trying to allocate <=2KB, then it'll be split, > but > > > it won't be split when allocating 2KB<x<=4KB bytes. That's even though the > > > actual size allocated in each is 4KB due to page alignment. > > > > > > I'm not objecting to this algorithmic choice...I just don't understand the > > > pros/cons here. Why is it better to split the chunk in the <=2KB case, but > not > > > the 2KB<x<=4KB case? Comments would be appreciated. > > > > I might be missing something here but my initial intent was to consider as > > fragmentation all the bytes that would not be visible by the client. If the > > client asks for 2 KBytes and the free chunk size is 4 KBytes, then there is a > > waste of 2 KBytes (i.e. nobody would use those 2 KBytes). If we use > > |actual_size| to do the computation then we would not see this waste (i.e. > > |fragmentation_bytes| would be 0). I agree that the current value of > > |kMaxChunkFragmentationBytes| (8192) is quite arbitrary/wrong though and I'm > > quite tempted to decrease it since this allocator tries to be optimized for > > memory rather than speed. We can see later if splitting chunks too often leads > > to performance issues. I added a comment at the top where the constant is > > defined. Also I will have a follow up CLs to report statistics. Two of them > will > > be fragmentation (in pinned memory) caused by the client (since I'm now > > forbidding the allocator to cause fragmentation when a chunk gets reused) and > > "discardable fragmentation" (unpinned memory in the free chunks). > > > > Does that make any sense? What do you think? > > I agree that the current calculation is more accurate in terms of determining > unused bytes / fragmentation. I guess what I was thinking is it's not how many > bytes get wasted that matters, but rather how many bytes can be gained by > splitting. > > I'm having difficulty thinking through the implications of this fragmentation. > I'm inclined to let you do what you want here and take on the responsibility of > fixing it once you have data on actual performance. Right, I understand now your point of view. I will profile this once we submitted this to have a baseline. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:199: const FreeChunk free_chunk = RemoveFreeChunkFromIterator_Locked( On 2013/11/28 22:51:55, willchan wrote: > I go back and forth on this, but part of me wonders if this would be clearer if > you named free_chunk as reused_chunk or chunk_to_reuse or something. 
It's free > at this point, but later on it sometimes is a bit confusing when the chunk is > split and there's another free chunk. Yes, I was also tempted to rename this variable, good idea. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:202: return scoped_ptr<DiscardableMemory>(); On 2013/11/28 22:51:55, willchan wrote: > Nit: I'd prefer a newline here. Up to you though. I think the reason I prefer it > is my brain expects a mental break after an early function return. If your brain > disagrees, feel free to ignore. This is purely personal style. I added the blank line. I tend to avoid them in general since they are source of inconsistency in my opinion but sometimes we really need them :) https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:263: // There should not be more contiguous free chunks. On 2013/11/28 22:51:55, willchan wrote: > Nit: just to be explicit, perhaps > "There should not be more contiguous previous free chunks." Since there may be a > contiguous "next" free chunk. Done. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:267: // Merge with the next chunk. On 2013/11/28 22:51:55, willchan wrote: > Nit: "Merge with the next chunk if free and present."? Done. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_allocator_android_unittest.cc (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:50: // production though. On 2013/11/28 22:51:55, willchan wrote: > I'd explain why...that they should just use DiscardableMemoryAndroidSimple > instead. Yes, good point. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_allocator_android_unittest.cc:168: TEST_F(DiscardableMemoryAllocatorTest, MergeFreeChunksAdvanced3) { On 2013/11/28 22:51:55, willchan wrote: > I try to avoid numbers if possible, although sometimes it's unavoidable. In this > case, maybe name MergeFreeChunksAndDeleteAshmemRegion? Done. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_android.cc (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:114: // This allows to delete ashmem regions individually by closing the ashmem file On 2013/11/28 22:51:55, willchan wrote: > grammar nit: s/to delete/deleting/ Oops, thanks :) https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:116: // allows us to do so. On 2013/11/28 22:51:55, willchan wrote: > Or when the allocation size would require an entire ashmem region. Done. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:119: // (i.e. releasing the physycal pages backing) individiual regions. On 2013/11/28 22:51:55, willchan wrote: > spelling nits: > * s/physycal/physical/ > * s/individiual/individual/ Oops, I told you I was retarded :) https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:127: // FD usage is too high no matter how big the requested size is. 
On 2013/11/28 22:51:55, willchan wrote: > I don't fully understand this algorithm. The allocator helps conserve FDs, so I > understand that as FDs get more constrained, you'd want to use the allocator > more. The allocator only helps conserve FDs when the requested size is small > enough that it doesn't fully use up kMinAshmemRegionSize. When size > > kMinAshmemRegionSize - kPageSize, the allocator won't help conserve FDs. > Therefore, I don't understand the point of > kMaxFDUsageRateForVeryLargeAllocations. > > Again, I defer to you in this stylistic preference, but my suggestion is to > re-merge this function into the other function and do: > if size >= kMinAshmemRegionSize > try DiscardableMemoryAndroidSimple > else if ashmem_fd_count is low enough > try DiscardableMemoryAndroidSimple > > try DiscardableMemoryAllocator::Allocate() You're right, the allocator would very likely create a FD anyway for big allocations. I'm moving this logic back to Create() since it should now be simpler. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:133: // allocations to systematically go through individial ashmem regions. On 2013/11/28 22:51:55, willchan wrote: > spelling nit: s/individial/individual/ Done. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:134: if (size > internal::DiscardableMemoryAllocator::kMinAshmemRegionSize) On 2013/11/28 22:51:55, willchan wrote: > size >= ... perhaps? Even more precise would be size > kMinAshmemRegionSize - > kPageSize, although that's arguably a bit of a layering violation since the page > alignment could arguably be a implementation internal detail of the allocator. Agreed on the layering violation, and page alignment should not make a big difference for such big sizes (e.g. 32 MBytes) anyway. On the other hand, I'm already doing page alignment in CreateLockedMemory() so it would be unfortunate not to use it. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:149: return (size + kPageSize - 1) & mask; On 2013/11/28 22:51:55, willchan wrote: > I'm not sure, but this might be a security issue due to overflow. I suggest > failing the alloc if this wraps around. It's better than returning a smaller > size than requested. It's probably a good idea to write a test that tries to > allocate sizeof(size_t) and make sure it behaves as expected. Great catch! (An overflow-checked version of this helper is sketched right after this message.) https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.cc:230: if (ShouldUseAllocator(size)) On 2013/11/28 22:51:55, willchan wrote: > It occurs to me now that all this checking of the FD limit is locked, but the > lock is not held while the actual allocation happens, so the allocation may be > racy and exceed the limit, but probably not by much. On the upside, it reduces > the amount of contention on allocation. > > I'm fine with keeping as is if you think the slight amount of slop around > respecting the limit is worth the reduction in contention, but it should be > documented so people don't look at this code later and wonder if it's buggy.
Yeah, this is indeed slightly racy. I think it should not be a big deal in practice though, and I would be a bit concerned if we kept the lock acquired for the whole allocation. We could easily end up with a deadlock when the allocator tries to create an ashmem region (and reacquire the already-acquired lock). I added a comment though. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... File base/memory/discardable_memory_android.h (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.h:25: size_t off, On 2013/11/28 22:51:55, willchan wrote: > Nit: s/off/offset/, just to be explicit. Done. https://codereview.chromium.org/25293002/diff/1901001/base/memory/discardable... base/memory/discardable_memory_android.h:29: bool UnlockAshmemRegion(int fd, size_t off, size_t size, const void* address); On 2013/11/28 22:51:55, willchan wrote: > Nit: ditto here on offset Done. https://codereview.chromium.org/25293002/diff/1901001/base/memory/scoped_vect... File base/memory/scoped_vector.h (right): https://codereview.chromium.org/25293002/diff/1901001/base/memory/scoped_vect... base/memory/scoped_vector.h:68: DCHECK(!empty()); On 2013/11/28 17:44:19, willchan wrote: > Add logging.h for this. Done.
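To make the split heuristic from the fragmentation discussion above concrete, here is a minimal, self-contained sketch. The 8192-byte threshold and the |client_requested_size|-based computation come from the comments above; the function name and the exact comparison are illustrative assumptions, not the code that landed in the CL.

#include <cassert>
#include <cstddef>

// Threshold discussed above; the author notes it is somewhat arbitrary and
// may be tuned once profiling data is available.
const size_t kMaxChunkFragmentationBytes = 8192;

// Returns true if reusing a free chunk of |reused_chunk_size| bytes for an
// allocation of |client_requested_size| bytes would waste enough space that
// the chunk should be split. The waste is measured against the size the
// client asked for, not the page-aligned size actually backing the
// allocation, which is exactly the point debated above.
bool ShouldSplitReusedChunk(size_t reused_chunk_size,
                            size_t client_requested_size) {
  assert(reused_chunk_size >= client_requested_size);
  const size_t fragmentation_bytes =
      reused_chunk_size - client_requested_size;
  return fragmentation_bytes > kMaxChunkFragmentationBytes;
}

With this choice, a 4 KByte free chunk reused for a 2 KByte request counts 2 KBytes of waste even though the backing pages are the same either way; computing the waste from |actual_size| would report zero instead.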
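Similarly, here is a sketch of the overflow check suggested for the page-alignment helper (4 KByte pages assumed; returning 0 is just one way to let the caller fail the allocation, and the real helper may differ):

#include <cstddef>
#include <limits>

const size_t kPageSize = 4096;  // Assumed page size.

// Rounds |size| up to the next multiple of kPageSize. Returns 0 when rounding
// up would wrap around, so the caller can fail the allocation instead of
// silently getting fewer bytes than requested.
size_t AlignToNextPage(size_t size) {
  const size_t kPageMask = ~(kPageSize - 1);
  if (size > std::numeric_limits<size_t>::max() - kPageSize + 1)
    return 0;
  return (size + kPageSize - 1) & kPageMask;
}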
LGTM I only had nits left. Sorry again for the long delays in reviewing this CL. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:153: base::AutoLock auto_lock(allocator_->lock_); Not sure if you care, but base:: is unnecessary, since this code is all in base:: already. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:154: MergeAndAddFreeChunk_Locked(chunk, size); Nit: I like to add comments after this indicating |this| may be deleted beyond this point. Just so no one tries to dereference |this| somehow, like referencing a member variable. This is a not uncommon bug. Ditto for the end of MergeAndAddFreeChunk_Locked. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:211: reused_chunk.size - client_requested_size; Nit: add a DCHECK_GE(reused_chunk.size, client_requested_size)? https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:217: void* const previous_chunk = reused_chunk.start; I think it'd be clearer if you deleted this. Since the previous chunk is the reused_chunk. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:254: DCHECK_NE(0U, used_to_previous_chunk_map_.size()); DCHECK(!used_to_previous_chunk_map_.empty()) Some STL containers have a non-constant .size() impl. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:287: DCHECK_EQ(0U, free_chunks_.size()); I'd use .empty() for these guys too. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... File base/memory/discardable_memory_unittest.cc (right): https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_unittest.cc:7: #include "base/memory/discardable_memory.h" foo.h always goes first in foo_unittest.cc, see the example in the style guide for basictypes_unittest.cc and basictypes.h.
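A tiny standalone illustration of the empty()-versus-size() nit above (not CL code): before C++11, std::list::size() was allowed to be O(n), while empty() is always constant time, so assertions should prefer empty().

#include <cassert>
#include <list>

int main() {
  std::list<int> chunks;
  assert(chunks.empty());       // Preferred: always O(1).
  assert(chunks.size() == 0u);  // Equivalent, but size() may not be O(1).
  return 0;
}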
Thanks a lot William! FYI, I will see if I can add a new Telemetry test doing some scrolling on images.google.com before I land this. There should be a direct impact on private dirty. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... File base/memory/discardable_memory_allocator_android.cc (right): https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:153: base::AutoLock auto_lock(allocator_->lock_); On 2013/12/01 00:58:56, willchan wrote: > Not sure if you care, but base:: is unnecessary, since this code is all in > base:: already. Yes, good point. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:154: MergeAndAddFreeChunk_Locked(chunk, size); On 2013/12/01 00:58:56, willchan wrote: > Nit: I like to add comments after this indicating |this| may be deleted beyond > this point. Just so no one tries to dereference |this| somehow, like referencing > a member variable. This is a not uncommon bug. > > Ditto for the end of MergeAndAddFreeChunk_Locked. Yes, good idea. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:211: reused_chunk.size - client_requested_size; On 2013/12/01 00:58:56, willchan wrote: > Nit: add a DCHECK_GE(reused_chunk.size, client_requested_size)? Yeah, good idea. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:217: void* const previous_chunk = reused_chunk.start; On 2013/12/01 00:58:56, willchan wrote: > I think it'd be clearer if you deleted this. Since the previous chunk is the > reused_chunk. Done. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:254: DCHECK_NE(0U, used_to_previous_chunk_map_.size()); On 2013/12/01 00:58:56, willchan wrote: > DCHECK(!used_to_previous_chunk_map_.empty()) > > Some STL containers have a non-constant .size() impl. I used this pattern to have more detailed assertion messages but that's true that we don't really care about the specific size of the container here in case the assertion fails. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_allocator_android.cc:287: DCHECK_EQ(0U, free_chunks_.size()); On 2013/12/01 00:58:56, willchan wrote: > I'd use .empty() for these guys too. Done. https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... File base/memory/discardable_memory_unittest.cc (right): https://codereview.chromium.org/25293002/diff/2011001/base/memory/discardable... base/memory/discardable_memory_unittest.cc:7: #include "base/memory/discardable_memory.h" On 2013/12/01 00:58:56, willchan wrote: > foo.h always goes first in foo_unittest.cc, see the example in the style guide > for basictypes_unittest.cc and basictypes.h. Done.
https://codereview.chromium.org/25293002/diff/2051001/base/memory/discardable... File base/memory/discardable_memory_unittest.cc (right): https://codereview.chromium.org/25293002/diff/2051001/base/memory/discardable... base/memory/discardable_memory_unittest.cc:15: TEST(DiscardableMemoryTest, TooLargeAllocationFails) { The addition of this test is causing emulated discardable memory to behave badly, and the LockAndUnLock test below will fail if run after this test. Is this test really valid on all platforms? max_allowed_allocation_size seems to be very implementation specific.
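For context, the test being discussed presumably has roughly the following shape (a reconstruction from the review comments, not the exact checked-in code): it asks for the largest possible size and expects the allocation to fail cleanly.

#include <cstddef>
#include <limits>

#include "base/memory/discardable_memory.h"
#include "base/memory/scoped_ptr.h"
#include "testing/gtest/include/gtest/gtest.h"

namespace base {

TEST(DiscardableMemoryTest, TooLargeAllocationFails) {
  const size_t kSize = std::numeric_limits<size_t>::max();
  // Page-aligning kSize would overflow, so the allocation is expected to
  // fail rather than return an undersized region.
  const scoped_ptr<DiscardableMemory> memory(
      DiscardableMemory::CreateLockedMemory(kSize));
  EXPECT_FALSE(memory);
}

}  // namespace base

The concern above is that, on the emulated implementation, merely attempting this huge allocation leaves enough state behind that the subsequent LockAndUnLock test misbehaves.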
On 2013/12/05 23:21:39, David Reveman wrote: > https://codereview.chromium.org/25293002/diff/2051001/base/memory/discardable... > File base/memory/discardable_memory_unittest.cc (right): > > https://codereview.chromium.org/25293002/diff/2051001/base/memory/discardable... > base/memory/discardable_memory_unittest.cc:15: TEST(DiscardableMemoryTest, > TooLargeAllocationFails) { > The addition of this test is causing emulated discardable memory to behave badly > and the LockAndUnLock test below will fail if run after this test. Is this test > really valid on all platforms? max_allowed_allocation_size seems to be very > implementation specific. Ok, time to land :) Just FYI, I have been slightly worried in the last few days about the fact that the decoded images cache doesn't take discardable bitmaps into account when computing the cache memory usage (used to decide whether to trigger eviction). In practice this means we can end up with (too) many ashmem regions, most of whose chunks are discardable, which can still cause memory pressure. Basically, DiscardableMemory instances are almost never deleted at the moment. I'm confident though that this can be addressed in the short term with https://chromiumcodereview.appspot.com/110273002/.
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/pliard@chromium.org/25293002/2091001
Message was sent while issue was closed.
Change committed as 239763
Message was sent while issue was closed.
https://codereview.chromium.org/25293002/diff/2091001/base/memory/discardable... File base/memory/discardable_memory_unittest.cc (right): https://codereview.chromium.org/25293002/diff/2091001/base/memory/discardable... base/memory/discardable_memory_unittest.cc:15: TEST(DiscardableMemoryTest, TooLargeAllocationFails) { FYI, I'm reverting the CL. ASAN complains due to malloc() returning NULL on Linux with the provider. I think we should just disable the test with ASAN or tell ASAN somehow that the NULL is expected. I will reland with the fix. |
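A sketch of the first option mentioned above (disabling the test under ASAN), shown for illustration only; whether the reland actually took this route or the "tell ASAN the NULL is expected" route is not stated here. ADDRESS_SANITIZER is the define set by Chromium's ASAN builds, and the includes and namespace are the same as in the test sketched earlier.

#if !defined(ADDRESS_SANITIZER)
TEST(DiscardableMemoryTest, TooLargeAllocationFails) {
  // Under ASAN the oversized request is reported by the tool instead of
  // simply making malloc() return NULL, so the test is compiled out there.
  const scoped_ptr<DiscardableMemory> memory(
      DiscardableMemory::CreateLockedMemory(
          std::numeric_limits<size_t>::max()));
  EXPECT_FALSE(memory);
}
#endif  // !defined(ADDRESS_SANITIZER)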