Description: Add memory pressure listener to Blob storage
This CL adds a memory pressure listener to the blob storage memory
controller and evicts blob items to disk on pressure. The evictions
are throttled to 30 seconds to avoid thrashing the disk. Records
histograms of evicted sizes along with eviction reasons.
BUG=715859
Review-Url: https://codereview.chromium.org/2857283002
Cr-Commit-Position: refs/heads/master@{#471189}
Committed: https://chromium.googlesource.com/chromium/src/+/b26c3a43ba0a153e42639400d53ded106f5791a9
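The description above mentions throttling pressure-triggered evictions to 30 seconds. As a rough illustration of what such a throttle amounts to (a standalone sketch with made-up names, not the CL's actual code, which would use Chromium's clock and task primitives):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch of a 30-second eviction throttle. Time is passed in
// explicitly so the logic is testable; a real controller would consult a
// monotonic clock instead.
class PressureEvictionThrottle {
 public:
  explicit PressureEvictionThrottle(int64_t min_interval_seconds = 30)
      : min_interval_seconds_(min_interval_seconds) {}

  // Returns true if an eviction pass may run at |now_seconds| and records
  // the pass; returns false while still inside the throttle window.
  bool MaybeStartEviction(int64_t now_seconds) {
    if (has_run_ && now_seconds - last_run_seconds_ < min_interval_seconds_)
      return false;  // Too soon: avoid thrashing the disk.
    has_run_ = true;
    last_run_seconds_ = now_seconds;
    return true;
  }

 private:
  const int64_t min_interval_seconds_;
  bool has_run_ = false;
  int64_t last_run_seconds_ = 0;
};
```

The first pressure signal always triggers a pass; later signals within the window are dropped rather than queued, which is the cheap way to bound disk churn.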
Patch Set 1 : .
Patch Set 2 : fix
Patch Set 3 : add test.
Patch Set 4 : set limit in constants.
Total comments: 7
Patch Set 5 : ratio and pressure arg.
Total comments: 8
Patch Set 6 : Rename constant, use base::Uma function and add base attribute.
Patch Set 7 : Rename constant, use base::Uma function and add base attribute.
Total messages: 52 (35 generated)
Patchset #1 (id:1) has been deleted
Patchset #1 (id:20001) has been deleted
The CQ bit was checked by ssid@chromium.org to run a CQ dry run
Dry run: CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders:
  cast_shell_linux on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/cast_shell_linu...)
  linux_chromium_tsan_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by ssid@chromium.org to run a CQ dry run
Dry run: CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_tsan_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
Patchset #1 (id:40001) has been deleted
The CQ bit was checked by ssid@chromium.org to run a CQ dry run
Dry run: CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_android_rel_ng on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/linux_androi...)
ssid@chromium.org changed reviewers: + dmurph@chromium.org
PTAL thanks
On 2017/05/04 21:14:28, ssid wrote:
> PTAL thanks

This is awesome, I'd like to take a slightly separate approach and try to keep those limits constant.

Is there a way to listen to changes in memory pressure? So we know when we are in normal, close to out, and critical?

Perhaps we can bake in knowledge of memory pressure into the system - what should we do when we have memory pressure? Evict everything to disk? Reduce our in-memory quota to 5%? Also, there's a possibility that us evicting stuff to disk on critical pressure will increase memory before we can actually free stuff.
On 2017/05/05 03:38:40, dmurph wrote:
> This is awesome, I'd like to take a slightly separate approach and try to keep those limits constant.
>
> Is there a way to listen to changes in memory pressure? So we know when we are in normal, close to out, and critical?

Hm, so there is no way to know if we got back to normal after memory pressure. On Android, we just get moderate and critical signals from the system, depending on how many apps are left in the background and how soon the process will get killed with OOM. So the most we can optimize for here is moderate vs. critical. Also, on other platforms we mostly never get the signals, and I think on ChromeOS we just get critical (not sure).

> Perhaps we can bake in knowledge of memory pressure into the system - what should we do when we have memory pressure? Evict everything to disk? Reduce our in-memory quota to 5%? Also, there's a possibility that us evicting stuff to disk on critical pressure will increase memory before we can actually free stuff.

Hm, that is an interesting point; I didn't realize that the in-flight memory could be that much. I am not very familiar with this code - could you explain why we could have a big usage just before writing to disk? Maybe we can try to address this issue first.

Generally the policy is to try to clear as much memory as possible in the browser to stay alive after memory pressure, so as not to cause foreground OOM crashes. I am trying to commit as much as possible to disk and keep at most 100KB in-memory (since that is the page file size).

If you think it looks bad to change the limits under pressure, maybe I can try to change the committing logic to check the normal limits in the normal case and to commit the maximum in the pressure case (an argument to EvictToDisk(until_healthy OR memory_pressure)).
On 2017/05/05 18:05:05, ssid wrote:
> Hm, so there is no way to know if we got back to normal after a memory pressure. In Android, we just get signals of moderate and critical from the system depending on how many apps are left in background and how soon the process will get killed with OOM. So, the maximum we can optimize here is for moderate vs critical. Also, on other platforms we mostly never get the signals, and I think in ChromeOS we just get critical (not sure).
>
> Hm, that is an interesting point, I didn't realize that the in-flight memory could be that much. I am not very familiar with this code, could you explain why we could have a big usage just before writing to disk. Maybe we can try to address this issue first.

We basically schedule a bunch of tasks to save memory to disk. We don't use THAT much more memory, but we definitely don't immediately free it, and we only free the memory once it has been fully saved to disk. And then it's just a switch of a refptr, so any BlobReader that is holding the 'memory-backed' snapshot will still keep that memory alive :/ Probably not super common though, as we evict using an LRU.

Basically, the danger here is that:
1. We're not immediately evicting, and
2. We're technically using more memory to store our task/storage state while we wait for the file runner to run our task.
3. Using the file writing primitives could possibly use memory? Not sure here.

So this could ironically cause things to get worse in the critical case.

> Generally the policy is to try and clear as much memory as possible in browser to stay alive after a memory pressure to not cause foreground OOM crashes. I am trying to commit as much as possible to disk and keep at most 100KB in-memory (since that is the page file size).
>
> If you think it looks bad to change the limits at pressure, maybe I can try to change the committing logic to check for limits in normal case and to commit maximum for the pressure case (an argument to EvictToDisk(until_healthy OR memory_pressure)).

If we could make the memory-pressure-modified size part of the BlobStorageLimits struct, either as a member variable or a method that applies a percentage, I would be a fan. Then we could make the method change you mention, so the while loop can handle the criteria separately (or use an appropriate temp variable, etc.).
> We basically schedule a bunch of tasks to save memory to disk. We don't use THAT much more memory, but we definitely don't immediately free it, and we only free the memory once it has been fully saved to disk. And then it's just a switch of a refptr, so any BlobReader that is holding the 'memory-backed' snapshot will still keep that memory alive :/ probably not super common though, as we evict using an LRU.

So, IIUC, the extra memory usage is because of these vectors: items_paging_to_file_, items_to_swap and items_for_paging. One is a vector of ints, one of scoped_refptr, and the other is a vector of ptrs. I think it is okay to use this much when compared to the amount of memory that we will free. I do not understand why in_flight_memory_used_ is incremented by the total items size rather than just number of items * sizeof(int + void* + scoped_refptr).

Can we find out if a reader is holding a reference to the memory before returning the element in CollectItemsForEviction()?

> Basically the danger here is that
> 1. We're not immediately evicting, and
> 2. We're technically using more memory to store our task/storage state while we wait for the file runner to run our task
> 3. Using the file writing primitives could possibly use memory? Not sure here.

So, take the case where we have 2MB of blobs in memory and we do not clear them because the limit is 2.5MB for file and 5MB for in-memory size. In this case, if we set the page file size to 100KB and write memory to disk, the memory usage increase will be somewhere comparable to 100KB, since the file writes are sequential, and the impact would be 2MB of savings overall. In the case where we have only 200KB of storage and we try to write, we will end up with a peak usage of 300KB; then, after writing the files, we would clear the memory. In either case, a 100KB memory increase in order to save memory would still be worthwhile.

I can verify it, but from looking at the code, the memory usage from the vectors mentioned above should not be comparable to 100KB. So I'd still say let's try to write to files even in the critical pressure case.
I forgot to mention: the memory pressure listeners are being replaced with memory coordinator clients, which have a purging call and a throttle call. But these are not enabled yet, and we are currently using pressure listeners. In the new version we can implement a better strategy, purging only when the system wants us to purge. So I believe this would be a short-term fix.
On 2017/05/06 00:57:59, ssid wrote:
> So, IIUC, the extra memory usage is because of these vectors: items_paging_to_file_, items_to_swap and items_for_paging. One is a vector of ints, one of scoped_refptr, and the other is a vector of ptrs. I think it is okay to use this much when compared to the amount of memory that we will free. I do not understand why in_flight_memory_used_ is incremented by the total items size rather than just number of items * sizeof(int + void* + scoped_refptr).
>
> Can we find out if a reader is holding a reference to the memory before returning the element in CollectItemsForEviction()?

I... I don't think so. We don't keep this anywhere other than refcounts.... Actually, you could look at the scoped_refptr<BlobDataItem> before we grab a ref - if it only has one reference, then we don't have a snapshot elsewhere. That snapshot shouldn't be used for too long - it is just for a read of the blob, then it goes away.

> So, for the case where we have 2MB of blobs in memory and we do not clear them because the limit is 2.5MB for file and 5MB for in-memory size. In this case if we set the page file size to be 100KB and write memory to disk, the memory usage increase will be somewhere comparable to 100KB since the file writes are sequential and the impact would be 2MB savings overall. In case where we have only 200KB of storage and we try to write we will end up with a peak usage of 300KB, then after writing files we would clear the memory. In either case a 100KB increase in memory for saving memory would still be worthwhile. I can verify it, but by looking at the code the memory usage because of the vectors mentioned above should not be comparable to 100KB.
> So, I'd still say let's try and write to files even in the critical pressure case.

Ok. We can also look at making the min page size a percentage of the in-memory quota.

> Sounds good I will try this one. Thanks!
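The EvictToDisk(until_healthy OR memory_pressure) idea discussed above can be sketched as a single eviction loop whose target limit depends on whether we are under pressure. This is a standalone illustration with hypothetical names and an arbitrary pressure ratio, not the CL's actual code; the real controller pages LRU items to disk asynchronously.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Stand-in for an in-memory blob item; only its size matters here.
struct FakeItem {
  size_t size;
};

// Returns the number of bytes scheduled for eviction. |lru| is ordered
// oldest-first, so the least recently used items are evicted first.
size_t EvictUntilUnderLimit(std::deque<FakeItem>* lru,
                            size_t memory_used,
                            size_t normal_limit,
                            bool on_memory_pressure) {
  // Under pressure, aim for a much smaller effective limit than the normal
  // in-memory quota (the review settles on a ratio of that quota; the /50
  // here is an illustrative placeholder).
  const size_t effective_limit =
      on_memory_pressure ? normal_limit / 50 : normal_limit;
  size_t evicted = 0;
  while (memory_used > effective_limit && !lru->empty()) {
    evicted += lru->front().size;
    memory_used -= lru->front().size;
    lru->pop_front();  // Oldest item is paged to disk first.
  }
  return evicted;
}
```

With a boolean (or, per the later review comment, a MemoryPressureLevel) argument, the same loop serves both the normal "until healthy" path and the aggressive pressure path.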
https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.cc (right):

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:511: base::Unretained(this))),
Unretained? How is this being registered? Can we use weak ptrs?

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:812: in_memory_limit = 0;
Do we want to flush everything?

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.h (right):

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.h:215: void MaybeScheduleEvictionUntilSystemHealthy(bool on_memory_pressure);
Please make the argument MemoryPressureLevel instead of a bool.
> I... I don't think so. We don't keep this anywhere other than refcounts.... Actually, you could look at the scoped_refptr<BlobDataItem> before we grab a ref - if it only has one reference, then we don't have a snapshot elsewhere. That snapshot shouldn't be used for too long - it is just for a read of the blob, then it goes away.

I don't think we have an API in scoped_refptr to look up the number of references. I think it is fine to keep them alive since they are actually being read. Once their use is over they will be deleted anyway. wdyt?

> Ok. We can also look at making the min page size a percentage of the in-memory quota.

Made it a ratio of the memory limit, instead of a constant.

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.cc (right):

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:511: base::Unretained(this))),
On 2017/05/09 21:57:33, dmurph wrote:
> Unretained? How is this being registered? Can we use weak ptrs?

The unregistration is safe because the callback is only invoked on the correct thread, and the notification is also always invoked on the same thread. The code here prevents the observer from getting accessed after unregistration: https://chromium.googlesource.com/chromium/src/+/a39b866eac82e716d65def1bd9a4...
I did not add a weak pointer because it is not required. Also, I just realized that I cannot add GetWeakPtr() here, since the weak_factory_ is initialized later. I cannot move the weak_factory_ up in the initialization order because of a static assert that requires the weak_factory_ to always be the last member in the class.

Do you think I should make this a unique_ptr and pass a weak ptr? Or I could add a comment to explain the Unretained.

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:812: in_memory_limit = 0;
On 2017/05/09 21:57:33, dmurph wrote:
> Do we want to flush everything?

Actually, we would still not flush anything less than min_page_file_size_under_pressure. So that number gives the maximum size to keep in memory, right?

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.h (right):

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.h:215: void MaybeScheduleEvictionUntilSystemHealthy(bool on_memory_pressure);
On 2017/05/09 21:57:33, dmurph wrote:
> Please make the argument MemoryPressureLevel instead of a bool.

Done.
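The Unretained-versus-weak-pointer tradeoff debated above can be modeled with standard C++. Here std::weak_ptr stands in for Chromium's base::WeakPtr, and FakeController is a made-up name; the point is only the safety property a weak pointer buys: after the owning object dies, the callback becomes a no-op instead of touching freed memory.

```cpp
#include <cassert>
#include <functional>
#include <memory>

// Minimal stand-in for the object whose method the listener would invoke.
struct FakeController {
  int pressure_events = 0;
  void OnMemoryPressure() { ++pressure_events; }
};

// Builds a callback that holds only a weak reference to the controller.
std::function<void()> MakePressureCallback(
    const std::shared_ptr<FakeController>& controller) {
  std::weak_ptr<FakeController> weak = controller;
  return [weak] {
    if (auto strong = weak.lock())  // Null once the controller is destroyed.
      strong->OnMemoryPressure();
  };
}
```

With base::Unretained, by contrast, the safety argument rests entirely on the thread and unregistration invariants described in the reply above, which is why the reviewer asks for a comment documenting them.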
Description was changed from

==========
Add memory pressure listener to Blob storage

This CL adds a memory pressure listener to blob storage memory controller and evicts the blob items to disk on pressure. The evictions are throttled to 60 seconds to avoid thrashing disk. Records histograms of sizes evicted with reasons.

BUG=715859
==========

to

==========
Add memory pressure listener to Blob storage

This CL adds a memory pressure listener to blob storage memory controller and evicts the blob items to disk on pressure. The evictions are throttled to 30 seconds to avoid thrashing disk. Records histograms of sizes evicted with reasons.

BUG=715859
==========
lgtm with nits. If you disagree with the constant rename, let me know.

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.cc (right):

https://codereview.chromium.org/2857283002/diff/120001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:511: base::Unretained(this))),
On 2017/05/10 00:21:38, ssid wrote:
> The unregistration is safe because the callback is only invoked on the correct thread, and the notification is also always invoked on the same thread. The code here prevents the observer from getting accessed after unregistration: https://chromium.googlesource.com/chromium/src/+/a39b866eac82e716d65def1bd9a4...
> I did not add it because it is not required.
>
> Also just realized that I cannot add GetWeakPtr() here since the weak_factory_ is initialized later. I cannot move the weak_factory_ up in the initialization because of a static assert that requires the weak_factory_ to always be the last member in the class.
>
> Do you think I should make this unique_ptr and pass weak ptr? Or I could add a comment to explain why unretained.

If you think that it won't get called after free, then that's fine as long as you comment how that invariant is kept. I'm assuming this is done by how the listener is registered, which isn't in this patch, correct?

(You could also use a weak ptr and just create the callback in the constructor:

  memory_pressure_listener_ = base::Bind(&BlobMemoryController::OnMemoryPressure, weak_factory_.GetWeakPtr());

But then it can't be const.)

https://codereview.chromium.org/2857283002/diff/140001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.cc (right):

https://codereview.chromium.org/2857283002/diff/140001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:888:
nit: remove newline

https://codereview.chromium.org/2857283002/diff/140001/storage/common/blob_st...
File storage/common/blob_storage/blob_storage_constants.h (right):

https://codereview.chromium.org/2857283002/diff/140001/storage/common/blob_st...
storage/common/blob_storage/blob_storage_constants.h:69: float min_page_file_size_ratio_under_pressure =
Can we put this in terms of the memory limit instead of the page size? This and the constant above.

What about max_blob_in_memory_space_under_pressure_ratio? Then we can add a comment like so:

"The ratio applied to max_blob_in_memory_space to reduce memory usage under memory pressure. Note: Under pressure we modify the min_page_file_size to ensure we can evict items until we get below the reduced memory limit."

And then kDefaultMaxBlobInMemorySpaceUnderPressureRatio.
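The suggested constant amounts to a ratio applied to the in-memory quota. A standalone sketch follows; the field names track the reviewer's suggestion, but the struct name and default values are illustrative assumptions, not Chromium's actual defaults.

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-in for the limits struct discussed in the review.
struct FakeBlobStorageLimits {
  size_t max_blob_in_memory_space = 500u * 1024 * 1024;  // Illustrative.
  // The ratio applied to max_blob_in_memory_space to reduce memory usage
  // under memory pressure (value here is a placeholder).
  float max_blob_in_memory_space_under_pressure_ratio = 0.25f;

  // The reduced limit targeted while evicting under memory pressure.
  size_t memory_limit_under_pressure() const {
    return static_cast<size_t>(max_blob_in_memory_space *
                               max_blob_in_memory_space_under_pressure_ratio);
  }
};
```

Expressing the pressure limit as a method on the limits struct, rather than a second constant, keeps the two limits from drifting apart when the quota changes.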
ssid@chromium.org changed reviewers: + isherman@chromium.org
+isherman for histograms.xml change. ptal Thanks
Metrics LGTM % comments:

https://codereview.chromium.org/2857283002/diff/140001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.cc (right):

https://codereview.chromium.org/2857283002/diff/140001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:940: base::HistogramBase::kUmaTargetedHistogramFlag);
Please use one of the base::UmaHistogram... functions rather than calling FactoryGet directly, unless you're trying to do something extra fancy here that I'm missing. (The functions are somewhat new, and are now preferred over manual FactoryGet invocations in most cases.)

https://codereview.chromium.org/2857283002/diff/140001/tools/metrics/histogra...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/2857283002/diff/140001/tools/metrics/histogra...
tools/metrics/histograms/histograms.xml:70620: +<histogram name="Storage.Blob.SizeEvictedToDiskInKB" units="KB">
nit: Please add the attribute base="true" to indicate that this histogram name is not emitted to directly.
The CQ bit was checked by ssid@chromium.org to run a CQ dry run
Dry run: CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders:
  win_chromium_rel_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_rel_...)
  win_chromium_x64_rel_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_x64_...)
The CQ bit was checked by ssid@chromium.org to run a CQ dry run
Dry run: CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_chromium_compile_dbg_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_comp...)
Patchset #7 (id:180001) has been deleted
The CQ bit was checked by ssid@chromium.org to run a CQ dry run
Dry run: CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Made the changes suggested, thanks.

https://codereview.chromium.org/2857283002/diff/140001/storage/browser/blob/b...
File storage/browser/blob/blob_memory_controller.cc (right):

https://codereview.chromium.org/2857283002/diff/140001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:888:
On 2017/05/10 19:13:34, dmurph wrote:
> nit: remove newline

Done.

https://codereview.chromium.org/2857283002/diff/140001/storage/browser/blob/b...
storage/browser/blob/blob_memory_controller.cc:940: base::HistogramBase::kUmaTargetedHistogramFlag);
On 2017/05/11 20:13:39, Ilya Sherman wrote:
> Please use one of the base::UmaHistogram... functions rather than calling FactoryGet directly, unless you're trying to do something extra fancy here that I'm missing. (The functions are somewhat new, and now preferred over manual FactoryGet invocations in most cases.)

Ah, thanks! I couldn't find these functions when I searched. Fixed.

https://codereview.chromium.org/2857283002/diff/140001/storage/common/blob_st...
File storage/common/blob_storage/blob_storage_constants.h (right):

https://codereview.chromium.org/2857283002/diff/140001/storage/common/blob_st...
storage/common/blob_storage/blob_storage_constants.h:69: float min_page_file_size_ratio_under_pressure =
On 2017/05/10 19:13:34, dmurph wrote:
> Can we put this in terms of the memory limit instead of page size? This and the constant above.
>
> What about max_blob_in_memory_space_under_pressure_ratio?
>
> Then we can add a comment like so:
> "The ratio applied to max_blob_in_memory_space to reduce memory usage under memory pressure. Note: Under pressure we modify the min_page_file_size to ensure we can evict items until we get below the reduced memory limit."
>
> And then kDefaultMaxBlobInMemorySpaceUnderPressureRatio

Done.

https://codereview.chromium.org/2857283002/diff/140001/tools/metrics/histogra...
File tools/metrics/histograms/histograms.xml (right):

https://codereview.chromium.org/2857283002/diff/140001/tools/metrics/histogra...
tools/metrics/histograms/histograms.xml:70620: +<histogram name="Storage.Blob.SizeEvictedToDiskInKB" units="KB">
On 2017/05/11 20:13:40, Ilya Sherman wrote:
> nit: Please add the attribute base="true" to indicate that this histogram name is not emitted to directly.

Done.
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by ssid@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from isherman@chromium.org, dmurph@chromium.org Link to the patchset: https://codereview.chromium.org/2857283002/#ps200001 (title: "Rename constant, use base::Uma function and add base attribute.")
CQ is trying the patch. Follow status at: https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing the patch. Bot data: {"patchset_id": 200001, "attempt_start_ts": 1494555671302880, "parent_rev": "5e53c60ebe374282c8c2864a8e6d1d864afa221a", "commit_rev": "b26c3a43ba0a153e42639400d53ded106f5791a9"}
Message was sent while issue was closed.
Description was changed from

==========
Add memory pressure listener to Blob storage

This CL adds a memory pressure listener to blob storage memory controller and evicts the blob items to disk on pressure. The evictions are throttled to 30 seconds to avoid thrashing disk. Records histograms of sizes evicted with reasons.

BUG=715859
==========

to

==========
Add memory pressure listener to Blob storage

This CL adds a memory pressure listener to blob storage memory controller and evicts the blob items to disk on pressure. The evictions are throttled to 30 seconds to avoid thrashing disk. Records histograms of sizes evicted with reasons.

BUG=715859
Review-Url: https://codereview.chromium.org/2857283002
Cr-Commit-Position: refs/heads/master@{#471189}
Committed: https://chromium.googlesource.com/chromium/src/+/b26c3a43ba0a153e42639400d53d...
==========
Message was sent while issue was closed.
Committed patchset #7 (id:200001) as https://chromium.googlesource.com/chromium/src/+/b26c3a43ba0a153e42639400d53d...
Message was sent while issue was closed.
On 2017/05/12 at 02:28:59, commit-bot wrote:
> Committed patchset #7 (id:200001) as https://chromium.googlesource.com/chromium/src/+/b26c3a43ba0a153e42639400d53d...

The unit test added in this CL appears to be flaky: https://luci-logdog.appspot.com/v/?s=chromium%2Fbb%2Fchromium.linux%2FLinux_T...

[ RUN ] BlobMemoryControllerTest.OnMemoryPressure
../../storage/browser/blob/blob_memory_controller_unittest.cc:1159: Failure
Value of: file_runner_->HasPendingTask()
  Actual: false
Expected: true
../../storage/browser/blob/blob_memory_controller_unittest.cc:1167: Failure
Expected: 1u
  Which is: 1
To be equal to: controller.memory_usage()
  Which is: 5
../../storage/browser/blob/blob_memory_controller_unittest.cc:1168: Failure
Expected: size_to_load - 1
  Which is: 4
To be equal to: controller.disk_usage()
  Which is: 0
[ FAILED ] BlobMemoryControllerTest.OnMemoryPressure (0 ms)
Message was sent while issue was closed.
On 2017/05/22 20:22:01, Marijn Kruisselbrink wrote:
> The unit test added in this CL appears to be flaky:
> https://luci-logdog.appspot.com/v/?s=chromium%2Fbb%2Fchromium.linux%2FLinux_T...

Thanks for posting. Can you please file a bug? I can look at it in some time.