Created: 7 years, 4 months ago by kaanb
Modified: 7 years, 3 months ago
CC: chromium-reviews, joi+watch-content_chromium.org, darin-cc_chromium.org, cc-bugs_chromium.org, jam, apatrick_chromium, klobag.chromium
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Visibility: Public.
Description: Sets the mapped memory reclaim limit for the renderer compositor on Android. Also moves the max_bytes_pending_upload limit of the PixelBufferRasterWorkerPool to the RenderWidgetCompositor, since the mapped memory reclaim limit depends on it.
BUG=272591
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=222158
Patch Set 1
Patch Set 2: fixes
Patch Set 3: Fix unittests
Patch Set 4: Add comment
Total comments: 3
Patch Set 5: unittest fixes first part
Patch Set 6: Second part of test cleanup
Patch Set 7: More test cleanup
Patch Set 8: Fix ImplSidePaintingSettings
Patch Set 9: Set MappedMemoryReclaimLimit to be the same as MaxBytesPendingUpload
Patch Set 10: s/max_bytes_pending_upload/max_transfer_buffer_usage_bytes/
Total comments: 9
Patch Set 11: incorporate code reviews
Patch Set 12: Add a default for max_transfer_buffer_usage_bytes
Patch Set 13: IWYU
Total comments: 7
Patch Set 14: Code review feedback
Total comments: 2
Patch Set 15: Move max_transfer_buffer_usage_bytes to OutputSurface::Capabilities
Total comments: 2
Patch Set 16: move to using contextprovider capabilities
Patch Set 17: cleanup
Patch Set 18: rebase
Patch Set 19: rebase
Total comments: 19
Patch Set 20: Incorporate code reviews
Total comments: 1
Patch Set 21: feedback
Total comments: 14
Patch Set 22: IWYU
Total comments: 28
Patch Set 23: More code reviews
Patch Set 24: more code review comments
Patch Set 25: Add a unittest
Patch Set 26: Fixed the unittest
Total comments: 9
Patch Set 27: Set the limit in the unittest
Total comments: 4
Patch Set 28: Incorporate code reviews
Total comments: 6
Patch Set 29: rename constants
Patch Set 30: rebase
Patch Set 31: rebase

Messages
Total messages: 82 (0 generated)
https://codereview.chromium.org/22900018/diff/9001/cc/trees/layer_tree_settin...
File cc/trees/layer_tree_settings.h (right):
cc/trees/layer_tree_settings.h:67: size_t max_bytes_pending_upload;
how about we rename this to something a bit less specific if you're moving it out of PBRWP, max_transfer_buffer_usage_bytes?

https://codereview.chromium.org/22900018/diff/9001/content/renderer/render_wi...
File content/renderer/render_widget.cc (right):
content/renderer/render_widget.cc:2544: const size_t kMappedMemoryReclaimLimit = max_bytes_pending_upload / 4;
this needs to be explained. why divide by 4 and not for example 5?
Needing the InitializeSettings override everywhere seems tedious. Can it not be a default that works for tests?
On 2013/08/22 18:41:11, epenner wrote:
> Needing the InitializeSettings override everywhere seems tedious. Can it not be a default that works for tests?

Good point, that's why I put a TODO for myself. reveman@ do you have any suggestions? I covered all of cc_unittests but I'm worried there could be other tests that would need modifications.
On 2013/08/22 18:42:29, kaanb wrote:
> Good point, that's why I put a TODO for myself. reveman@ do you have any suggestions? I covered all of cc_unittests but I'm worried there could be other tests that would need modifications.

It seems to me there used to be no limit, so a high default limit would be fine, and tests don't need to be bothered?

Also, does this effectively set the same limit in the context as we set in CC? So we will use a max of 2X our CC limit? This sounds reasonable but keep in mind that ideal would be 1X the CC limit.
On 2013/08/22 18:42:29, kaanb wrote:
> Good point, that's why I put a TODO for myself. reveman@ do you have any suggestions? I covered all of cc_unittests but I'm worried there could be other tests that would need modifications.

This is internal to PBRWP so it's probably a bug if some non-PBRWP code relied on it to be of some specific value. I would just make the default "numeric_limits<size_t>::max()".
https://codereview.chromium.org/22900018/diff/9001/cc/trees/layer_tree_host_u...
File cc/trees/layer_tree_host_unittest_context.cc (right):
cc/trees/layer_tree_host_unittest_context.cc:223: settings->max_bytes_pending_upload = 16 * 1024 * 1024;
Ya, why not just do this in LayerTreeTest before calling InitializeSettings? We do that with other settings.
On 2013/08/22 18:54:03, epenner wrote:
> Also, does this effectively set the same limit in the context as we set in CC? So we will use a max of 2X our CC limit?

Context limit is 1/5 of the CC limit.
On 2013/08/22 19:41:19, danakj wrote:
> cc/trees/layer_tree_host_unittest_context.cc:223: settings->max_bytes_pending_upload = 16 * 1024 * 1024;
> Ya, why not just do this in LayerTreeTest before calling InitializeSettings? We do that with other settings.

Ok, so I should set settings->max_bytes_pending_upload to numeric_limits<size_t>::max() in LayerTreeTest and not change the PBRWP constructor?
On 2013/08/22 20:34:55, kaanb wrote:
> Ok, so I should set settings->max_bytes_pending_upload to numeric_limits<size_t>::max() in LayerTreeTest and not change the PBRWP constructor?

Ya, you are setting it to this 16 megs in so many tests. If that's the default value set in LayerTreeTest, then you can override it only when needed. This is what we do for values that have different useful defaults in tests than in prod.

The PBRWP could keep the DCHECK but shouldn't need the if() thing just for tests.
On 2013/08/22 20:38:07, danakj wrote:
> Ya, you are setting it to this 16 megs in so many tests. If that's the default value set in LayerTreeTest, then you can override it only when needed.

The 16 meg default is awkward. Please use something less magic in unit tests like <size_t>::max() if possible.
On 2013/08/22 20:41:43, David Reveman wrote:
> The 16 meg default is awkward. Please use something less magic in unit tests like <size_t>::max() if possible.

All done, PTAL
https://codereview.chromium.org/22900018/diff/30001/cc/test/impl_side_paintin...
File cc/test/impl_side_painting_settings.h (right):
cc/test/impl_side_painting_settings.h:18: max_transfer_buffer_usage_bytes = std::numeric_limits<size_t>::max();
needed? this is awkward cuz this class should just turn on impl paint really. can this move out to the places that use this class if needed?

https://codereview.chromium.org/22900018/diff/30001/cc/test/layer_tree_test.cc
File cc/test/layer_tree_test.cc (right):
cc/test/layer_tree_test.cc:575: = std::numeric_limits<size_t>::max();
nit: = goes on the previous line.

https://codereview.chromium.org/22900018/diff/30001/cc/trees/layer_tree_host_...
File cc/trees/layer_tree_host_impl_unittest.cc (right):
cc/trees/layer_tree_host_impl_unittest.cc:88: = std::numeric_limits<size_t>::max();
nit: = on prev line, and all the other places this happens.

https://codereview.chromium.org/22900018/diff/30001/cc/trees/layer_tree_setti...
File cc/trees/layer_tree_settings.cc (right):
cc/trees/layer_tree_settings.cc:61: max_transfer_buffer_usage_bytes(0) {
Is there a reason for this to default to 0 in prod?
https://codereview.chromium.org/22900018/diff/30001/cc/test/impl_side_paintin...
File cc/test/impl_side_painting_settings.h (right):
cc/test/impl_side_painting_settings.h:18: max_transfer_buffer_usage_bytes = std::numeric_limits<size_t>::max();
On 2013/08/23 01:21:46, danakj wrote:
> needed? this is awkward cuz this class should just turn on impl paint really. can this move out to the places that use this class if needed?

It's used here: https://code.google.com/p/chromium/codesearch#chromium/src/cc/layers/picture_... If I don't set max_transfer_buffer_usage_bytes, the test fails on the DCHECK that checks for max_transfer_buffer_usage_bytes not being 0 in PBRWP.

https://codereview.chromium.org/22900018/diff/30001/cc/test/layer_tree_test.cc
File cc/test/layer_tree_test.cc (right):
cc/test/layer_tree_test.cc:575: = std::numeric_limits<size_t>::max();
On 2013/08/23 01:21:46, danakj wrote:
> nit: = goes on the previous line.

Done.

https://codereview.chromium.org/22900018/diff/30001/cc/trees/layer_tree_host_...
File cc/trees/layer_tree_host_impl_unittest.cc (right):
cc/trees/layer_tree_host_impl_unittest.cc:88: = std::numeric_limits<size_t>::max();
On 2013/08/23 01:21:46, danakj wrote:
> nit: = on prev line, and all the other places this happens.

Done.

https://codereview.chromium.org/22900018/diff/30001/cc/trees/layer_tree_setti...
File cc/trees/layer_tree_settings.cc (right):
cc/trees/layer_tree_settings.cc:61: max_transfer_buffer_usage_bytes(0) {
On 2013/08/23 01:21:46, danakj wrote:
> Is there a reason for this to default to 0 in prod?

We only initialize it to 0; it's always assigned later, when we create the compositor. We also have a DCHECK in PBRWP to guarantee that it is always assigned a valid value.
https://codereview.chromium.org/22900018/diff/30001/cc/trees/layer_tree_setti...
File cc/trees/layer_tree_settings.cc (right):
cc/trees/layer_tree_settings.cc:61: max_transfer_buffer_usage_bytes(0) {
On 2013/08/23 01:37:31, kaanb wrote:
> We only initialize it to 0, it's always being assigned later when we create the compositor. We also have a DCHECK in PBRWP to guarantee that it is always assigned a valid value.

I'm just wondering cuz if you initialize it to something big here then you don't have to do anything anywhere for tests. If there's some worry about prod places not init'ing it or something?
Exactly, if I initialize to 0 here then I can DCHECK to ensure it's being assigned correctly. It'd be weird to DCHECK against std::numeric_limits<size_t>::max().
On 2013/08/23 03:23:17, kaanb wrote:
> Exactly, if I initialize to 0 here then I can DCHECK to ensure it's being assigned correctly. It'd be weird to DCHECK against std::numeric_limits<size_t>::max().

LGTM, but I kind of agree on just setting the default and not impacting tests.

::max() is not much different than zero. How about setting it to ::max() by default, DCHECK that it is ::max() in ONE test (with a comment that code relies on this to check if the default has been changed), and then DCHECK that it is != ::max() in initialization?
On 2013/08/23 18:24:11, epenner wrote:
> How about setting it to ::max() by default, DCHECK that it is ::max() in ONE test, and then DCHECK that it is != ::max() in initialization?

Or better yet, add a DEFAULT constant. And just DCHECK(val != DEFAULT) in your code when you want to ensure it was initialized (default being ::max(), for tests).
On 2013/08/23 18:56:00, epenner wrote:
> Or better yet, add a DEFAULT constant. And just DCHECK(val != DEFAULT) in your code when you want to ensure it was initialized (default being ::max(), for tests).

Done. I also removed the DCHECK per our discussion. PTAL.
https://codereview.chromium.org/22900018/diff/38001/cc/resources/raster_worke...
File cc/resources/raster_worker_pool_unittest.cc (right):
cc/resources/raster_worker_pool_unittest.cc:18: #include "cc/trees/layer_tree_settings.h"
not needed?

https://codereview.chromium.org/22900018/diff/38001/cc/trees/layer_tree_setti...
File cc/trees/layer_tree_settings.h (right):
cc/trees/layer_tree_settings.h:19: static const size_t kDefaultMaxTransferBufferUsageBytes =
How about making this a static member of LTSettings, move its initialization to the .cc and the <limits> header there?

https://codereview.chromium.org/22900018/diff/38001/content/renderer/gpu/rend...
File content/renderer/gpu/render_widget_compositor.cc (right):
content/renderer/gpu/render_widget_compositor.cc:585: max_transfer_buffer_usage_bytes_);
You could just pass the value from the layer_tree_host_->settings(). And not need to store anything new here. Or maybe RenderWidget could get the settings value directly too.
https://codereview.chromium.org/22900018/diff/38001/cc/resources/raster_worke...
File cc/resources/raster_worker_pool_unittest.cc (right):
cc/resources/raster_worker_pool_unittest.cc:18: #include "cc/trees/layer_tree_settings.h"
On 2013/08/23 23:25:39, danakj wrote:
> not needed?

Include-What-You-Use

https://codereview.chromium.org/22900018/diff/38001/cc/trees/layer_tree_setti...
File cc/trees/layer_tree_settings.h (right):
cc/trees/layer_tree_settings.h:19: static const size_t kDefaultMaxTransferBufferUsageBytes =
On 2013/08/23 23:25:39, danakj wrote:
> How about making this a static member of LTSettings, move its initialization to the .cc and the <limits> header there?

Done.

https://codereview.chromium.org/22900018/diff/38001/content/renderer/gpu/rend...
File content/renderer/gpu/render_widget_compositor.cc (right):
content/renderer/gpu/render_widget_compositor.cc:585: max_transfer_buffer_usage_bytes_);
On 2013/08/23 23:25:39, danakj wrote:
> You could just pass the value from the layer_tree_host_->settings(). And not need to store anything new here.
> Or maybe RenderWidget could get the settings value directly too.

Apparently there are cases where layer_tree_host_ is NULL when this method is called, and we're crashing if I try to access layer_tree_host_->settings().
https://codereview.chromium.org/22900018/diff/38001/content/renderer/gpu/rend...
File content/renderer/gpu/render_widget_compositor.cc (right):
content/renderer/gpu/render_widget_compositor.cc:585: max_transfer_buffer_usage_bytes_);
On 2013/08/24 00:25:01, kaanb wrote:
> Apparently there are cases where layer_tree_host_ is NULL when this method is called, and we're crashing if I try to access layer_tree_host_->settings().

What, this is called from LayerTreeHost. Can you give a callstack?
Stack Trace (paths abbreviated; all source files under /usr/local/google/code/clankium-master/src/):
RELADDR  FUNCTION  FILE:LINE
00021e90 tgkill+12 /system/lib/libc.so
00012ef1 pthread_kill+48 /system/lib/libc.so
00013105 raise+10 /system/lib/libc.so
00011e3b <unknown> /system/lib/libc.so
00021744 abort+4 /system/lib/libc.so
00012921 <unknown> /system/lib/libc.so
00011e99 __assert2+20 /system/lib/libc.so
009cd955 scoped_ptr<cc::LayerTreeHost, base::DefaultDeleter<cc::LayerTreeHost> >::operator->() const base/memory/scoped_ptr.h:376
01674d1f content::RenderViewImpl::GetURLForGraphicsContext3D() content/renderer/render_view_impl.cc:5826
0154beeb cc::LayerTreeHost::CreateOutputSurface() cc/trees/layer_tree_host.cc:411
0154bf45 cc::LayerTreeHost::InitializeProxy(scoped_ptr<cc::Proxy, base::DefaultDeleter<cc::Proxy> >) cc/trees/layer_tree_host.cc:134
0154bc45 cc::LayerTreeHost::Initialize(scoped_refptr<base::SingleThreadTaskRunner>) cc/trees/layer_tree_host.cc:122
0154b98b cc::LayerTreeHost::Create(cc::LayerTreeHostClient*, cc::LayerTreeSettings const&, scoped_refptr<base::SingleThreadTaskRunner>) cc/trees/layer_tree_host.cc:78
v------> base::DefaultDeleter<cc::LayerTreeHost>::operator()(cc::LayerTreeHost*) const base/memory/scoped_ptr.h:137
v------> base::internal::scoped_ptr_impl<cc::LayerTreeHost, base::DefaultDeleter<cc::LayerTreeHost> >::reset(cc::LayerTreeHost*) base/memory/scoped_ptr.h:246
v------> TakeState<cc::LayerTreeHost, base::DefaultDeleter<cc::LayerTreeHost> > base/memory/scoped_ptr.h:212
v------> operator=<cc::LayerTreeHost, base::DefaultDeleter<cc::LayerTreeHost> > base/memory/scoped_ptr.h:361
01674847 content::RenderWidgetCompositor::initialize(cc::LayerTreeSettings) content/renderer/gpu/render_widget_compositor.cc:413
v------> Data base/memory/scoped_ptr.h:280
v------> scoped_ptr_impl<content::RenderWidgetCompositor, base::DefaultDeleter<content::RenderWidgetCompositor> > base/memory/scoped_ptr.h:201
v------> scoped_ptr base/memory/scoped_ptr.h:346
016746bd content::RenderWidgetCompositor::Create(content::RenderWidget*, bool) content/renderer/gpu/render_widget_compositor.cc:318
01673cad content::RenderWidget::initializeLayerTreeView() content/renderer/render_widget.cc:1587
01673d0f content::RenderViewImpl::initializeLayerTreeView() content/renderer/render_view_impl.cc:2968
01673af9 WebKit::WebViewImpl::setIsAcceleratedCompositingActive(bool) third_party/WebKit/Source/web/WebViewImpl.cpp:3939
01673825 WebKit::WebViewImpl::setRootGraphicsLayer(WebCore::GraphicsLayer*) third_party/WebKit/Source/web/WebViewImpl.cpp:3837
016738df WebCore::RenderLayerCompositor::attachRootLayer(WebCore::RenderLayerCompositor::RootLayerAttachment) third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:2371
01671631 WebCore::RenderLayerCompositor::ensureRootLayer() third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:2308
v------> WebCore::RenderLayerCompositor::enableCompositingMode(bool) third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:253
01671667 WebCore::RenderLayerCompositor::enableCompositingMode(bool) third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:247
0167101d WebCore::RenderLayerCompositor::updateCompositingLayers(WebCore::CompositingUpdateType, WebCore::RenderLayer*) third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:356
01670ae9 WebCore::FrameView::updateCompositingLayersAfterStyleChange() third_party/WebKit/Source/core/page/FrameView.cpp:699
016642c1 WebCore::Document::recalcStyle(WebCore::Node::StyleChange) third_party/WebKit/Source/core/dom/Document.cpp:1668
01663d37 WebCore::Document::attach(WebCore::Node::AttachContext const&) third_party/WebKit/Source/core/dom/Document.cpp:1885
01663bf1 WebCore::DOMWindow::setDocument(WTF::PassRefPtr<WebCore::Document>) third_party/WebKit/Source/core/page/DOMWindow.cpp:344
016606df WebCore::DocumentLoader::createWriterFor(WebCore::Frame*, WebCore::Document const*, WebCore::KURL const&, WTF::String const&, WTF::String const&, bool, bool) third_party/WebKit/Source/core/loader/DocumentLoader.cpp:993
v------> WebCore::DocumentLoader::ensureWriter(WTF::String const&, WebCore::KURL const&) third_party/WebKit/Source/core/loader/DocumentLoader.cpp:598
016604d5 WebCore::DocumentLoader::ensureWriter(WTF::String const&, WebCore::KURL const&) third_party/WebKit/Source/core/loader/DocumentLoader.cpp:591
0166057f WebCore::DocumentLoader::ensureWriter() third_party/WebKit/Source/core/loader/DocumentLoader.cpp:588
01660441 WebCore::DocumentLoader::commitData(char const*, unsigned int) third_party/WebKit/Source/core/loader/DocumentLoader.cpp:611
0165191d WebCore::DocumentLoader::finishedLoading(double) third_party/WebKit/Source/core/loader/DocumentLoader.cpp:328
Oh, it's called synchronously from inside LTH::Create() so we can't have a pointer in the RenderWidgetCompositor yet. :(

On Fri, Aug 23, 2013 at 8:29 PM, Kaan Baloglu <kaanb@chromium.org> wrote:
> Stack Trace (RELADDR  FUNCTION  FILE:LINE):
> 00021e90  tgkill+12  /system/lib/libc.so
> 00012ef1  pthread_kill+48  /system/lib/libc.so
> 00013105  raise+10  /system/lib/libc.so
> 00011e3b  <unknown>  /system/lib/libc.so
> 00021744  abort+4  /system/lib/libc.so
> 00012921  <unknown>  /system/lib/libc.so
> 00011e99  __assert2+20  /system/lib/libc.so
> 009cd955  scoped_ptr<cc::LayerTreeHost, base::DefaultDeleter<cc::LayerTreeHost> >::operator->() const  base/memory/scoped_ptr.h:376
> 01674d1f  content::RenderViewImpl::GetURLForGraphicsContext3D()  content/renderer/render_view_impl.cc:5826
> 0154beeb  cc::LayerTreeHost::CreateOutputSurface()  cc/trees/layer_tree_host.cc:411
> 0154bf45  cc::LayerTreeHost::InitializeProxy(scoped_ptr<cc::Proxy, base::DefaultDeleter<cc::Proxy> >)  cc/trees/layer_tree_host.cc:134
> 0154bc45  cc::LayerTreeHost::Initialize(scoped_refptr<base::SingleThreadTaskRunner>)  cc/trees/layer_tree_host.cc:122
> 0154b98b  cc::LayerTreeHost::Create(cc::LayerTreeHostClient*, cc::LayerTreeSettings const&, scoped_refptr<base::SingleThreadTaskRunner>)  cc/trees/layer_tree_host.cc:78
> v------>  base::DefaultDeleter<cc::LayerTreeHost>::operator() / scoped_ptr_impl::reset / TakeState / operator=  base/memory/scoped_ptr.h:137, 246, 212, 361
> 01674847  content::RenderWidgetCompositor::initialize(cc::LayerTreeSettings)  content/renderer/gpu/render_widget_compositor.cc:413
> v------>  scoped_ptr_impl Data / scoped_ptr_impl / scoped_ptr  base/memory/scoped_ptr.h:280, 201, 346
> 016746bd  content::RenderWidgetCompositor::Create(content::RenderWidget*, bool)  content/renderer/gpu/render_widget_compositor.cc:318
> 01673cad  content::RenderWidget::initializeLayerTreeView()  content/renderer/render_widget.cc:1587
> 01673d0f  content::RenderViewImpl::initializeLayerTreeView()  content/renderer/render_view_impl.cc:2968
> 01673af9  WebKit::WebViewImpl::setIsAcceleratedCompositingActive(bool)  third_party/WebKit/Source/web/WebViewImpl.cpp:3939
> 01673825  WebKit::WebViewImpl::setRootGraphicsLayer(WebCore::GraphicsLayer*)  third_party/WebKit/Source/web/WebViewImpl.cpp:3837
> 016738df  WebCore::RenderLayerCompositor::attachRootLayer(WebCore::RenderLayerCompositor::RootLayerAttachment)  third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:2371
> 01671631  WebCore::RenderLayerCompositor::ensureRootLayer()  third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:2308
> v------>  WebCore::RenderLayerCompositor::enableCompositingMode(bool)  third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:253
> 01671667  WebCore::RenderLayerCompositor::enableCompositingMode(bool)  third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:247
> 0167101d  WebCore::RenderLayerCompositor::updateCompositingLayers(WebCore::CompositingUpdateType, WebCore::RenderLayer*)  third_party/WebKit/Source/core/rendering/RenderLayerCompositor.cpp:356
> 01670ae9  WebCore::FrameView::updateCompositingLayersAfterStyleChange()  third_party/WebKit/Source/core/page/FrameView.cpp:699
> 016642c1  WebCore::Document::recalcStyle(WebCore::Node::StyleChange)  third_party/WebKit/Source/core/dom/Document.cpp:1668
> 01663d37  WebCore::Document::attach(WebCore::Node::AttachContext const&)  third_party/WebKit/Source/core/dom/Document.cpp:1885
> 01663bf1  WebCore::DOMWindow::setDocument(WTF::PassRefPtr<WebCore::Document>)  third_party/WebKit/Source/core/page/DOMWindow.cpp:344
> 016606df  WebCore::DocumentLoader::createWriterFor(WebCore::Frame*, WebCore::Document const*, WebCore::KURL const&, WTF::String const&, WTF::String const&, bool, bool)  third_party/WebKit/Source/core/loader/DocumentLoader.cpp:993
> v------>  WebCore::DocumentLoader::ensureWriter(WTF::String const&, WebCore::KURL const&)  third_party/WebKit/Source/core/loader/DocumentLoader.cpp:598
> 016604d5  WebCore::DocumentLoader::ensureWriter(WTF::String const&, WebCore::KURL const&)  third_party/WebKit/Source/core/loader/DocumentLoader.cpp:591
> 0166057f  WebCore::DocumentLoader::ensureWriter()  third_party/WebKit/Source/core/loader/DocumentLoader.cpp:588
> 01660441  WebCore::DocumentLoader::commitData(char const*, unsigned int)  third_party/WebKit/Source/core/loader/DocumentLoader.cpp:611
> 0165191d  WebCore::DocumentLoader::finishedLoading(double)  third_party/WebKit/Source/core/loader/DocumentLoader.cpp:328
>
> On Fri, Aug 23, 2013 at 5:25 PM, <danakj@chromium.org> wrote:
>> https://codereview.chromium.org/22900018/diff/38001/content/renderer/gpu/render_widget_compositor.cc
>> File content/renderer/gpu/render_widget_compositor.cc (right):
>>
>> content/renderer/gpu/render_widget_compositor.cc:585: max_transfer_buffer_usage_bytes_);
>> On 2013/08/24 00:25:01, kaanb wrote:
>>> On 2013/08/23 23:25:39, danakj wrote:
>>> > You could just pass the value from the layer_tree_host_->settings(). And not
>>> > need to store anything new here.
>>> >
>>> > Or maybe RenderWidget could get the settings value directly too.
>>>
>>> Apparently there are cases where layer_tree_host_ is NULL when this method is
>>> called and we're crashing if I try to access layer_tree_host_->settings()
>>
>> What, this is called from LayerTreeHost. Can you give a callstack?
>>
>> https://codereview.chromium.org/22900018/
https://codereview.chromium.org/22900018/diff/44001/cc/trees/layer_tree_host_...
File cc/trees/layer_tree_host_impl.cc (right):

https://codereview.chromium.org/22900018/diff/44001/cc/trees/layer_tree_host_...
cc/trees/layer_tree_host_impl.cc:1618: settings_.max_transfer_buffer_usage_bytes);
So, after going through all of that, I find it a bit weird that we give this setting to both LayerTreeSettings and to the OutputSurface that it's using.

Generally, settings that are tied to an OutputSurface come into the compositor in the OutputSurface capabilities. When you're here, you have an output surface that you could read off of.

So, I'd like to propose moving this from the LayerTreeSettings to the OutputSurface caps. Then you don't have 2 places caring about the setting in RenderWidgetCompositor, as just the output surface creation path will care. That would have the additional benefit of letting the embedder choose a new value when the output surface changes, if it wanted to.

Also, perhaps this value would want to be different for a software vs. a hardware mode.
https://codereview.chromium.org/22900018/diff/44001/cc/trees/layer_tree_host_...
File cc/trees/layer_tree_host_impl.cc (right):

https://codereview.chromium.org/22900018/diff/44001/cc/trees/layer_tree_host_...
cc/trees/layer_tree_host_impl.cc:1618: settings_.max_transfer_buffer_usage_bytes);
On 2013/08/24 00:46:22, danakj wrote:
> So, after going through all of that.. I find it a bit weird that we give this
> setting to both LayerTreeSettings, and to the OutputSurface that it's using.
> [...]

Well, I don't think this is really tied to the output surface. It is mostly a memory usage optimization parameter. But if it makes sense for everyone I'll move it to OutputSurface::Capabilities.
On 2013/08/24 01:12:54, kaanb wrote:
> Well, I don't think this is really tied to the output surface. It is mostly a
> memory usage optimization parameter. But if it makes sense for everyone I'll
> move it to OutputSurface::Capabilities
> [...]

Done, PTAL
https://codereview.chromium.org/22900018/diff/52001/cc/output/output_surface.h
File cc/output/output_surface.h (right):

https://codereview.chromium.org/22900018/diff/52001/cc/output/output_surface....
cc/output/output_surface.h:50: size_t max_transfer_buffer_usage_bytes);
Since there is only one subclass of this class that actually wants to set this value to something, can we just have the constructor for that class change, and in here, only add it to the caps with a default value? No other output surface impl should need to adjust anything unless it wants to use a non-default value. I really want to push implementation details out of this class (and make it pure virtual again) into the subclasses, rather than pushing more details into it.

This patch should become quite small at that point, I think.

I'd also like each subclass of the OutputSurface to be able to determine its own caps internally, rather than passing things to the constructors, if possible. If it's the default value, then they don't need to do anything at all.

https://codereview.chromium.org/22900018/diff/52001/cc/output/output_surface....
cc/output/output_surface.h:68: max_transfer_buffer_usage_bytes(std::numeric_limits<size_t>::max()) {}
why not kDefault...?
On 2013/08/27 16:55:38, danakj wrote:
> Since there is only one subclass of this class that actually wants to set this
> value to something, can we just have the constructor for that class change, and
> in here, only add it to the caps with a default value?
> [...]

I have refactored the patch to use ContextProvider::Capabilities, PTAL
Thanks Kaan. I'm immensely happy with this CL. I'll leave final review of cc/ to reveman@.

https://codereview.chromium.org/22900018/diff/65001/cc/resources/pixel_buffer...
File cc/resources/pixel_buffer_raster_worker_pool.cc (right):

https://codereview.chromium.org/22900018/diff/65001/cc/resources/pixel_buffer...
cc/resources/pixel_buffer_raster_worker_pool.cc:114: // For unit tests:
// Software compositing should not use this path in production. Just use a default value when testing this path with the software compositor.

https://codereview.chromium.org/22900018/diff/65001/cc/resources/pixel_buffer...
cc/resources/pixel_buffer_raster_worker_pool.cc:114: // For unit tests:
reveman@ can we #if !defined(UNIT_TEST) => NOTREACHED(); ? If not, can we add a TODO() to do that?

https://codereview.chromium.org/22900018/diff/65001/cc/resources/pixel_buffer...
cc/resources/pixel_buffer_raster_worker_pool.cc:115: // TODO(kaanb): make this a static const somewhere?
No idea where. If reveman@ has an idea let's do that, otherwise I think just drop the TODO.

https://codereview.chromium.org/22900018/diff/65001/content/renderer/gpu/rend...
File content/renderer/gpu/render_widget_compositor.cc (right):

https://codereview.chromium.org/22900018/diff/65001/content/renderer/gpu/rend...
content/renderer/gpu/render_widget_compositor.cc:11: #include "base/android/sys_utils.h"
Not needed?

https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w...
File content/renderer/render_widget.cc (right):

https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w...
content/renderer/render_widget.cc:682: // If we raster too fast we become upload bound, and pending
Any reason to do this here instead of inside CreateGraphicsContext3D?

https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w...
content/renderer/render_widget.cc:701: const size_t max_transfer_buffer_usage_bytes =
nit: no const, or kMaxTransferBufferUsageBytes.
+piman for content/common/
https://codereview.chromium.org/22900018/diff/65001/cc/resources/tile_manager.h
File cc/resources/tile_manager.h (right):

https://codereview.chromium.org/22900018/diff/65001/cc/resources/tile_manager...
cc/resources/tile_manager.h:56: ContextProvider* context_provider);
Hm, I find it awkward that we pass around a ContextProvider pointer here and in PBRWP code that is otherwise resource-type agnostic. I would prefer if you turned this into a "size_t max_transfer_buffer_usage_bytes" at the tile management level and removed the conditional in PBRWP. Even if it wouldn't change the value used, I think it will move the decision about what value to use for software compositing and testing to a more appropriate place.
https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w...
File content/renderer/render_widget.cc (right):

https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w...
content/renderer/render_widget.cc:695: // For reference Chromebook Pixel can upload 1MB in about 0.5ms.
This affects more than the Pixel. It affects the context on all non-Android devices (including desktop). I feel uncomfortable limiting the mapped memory everywhere with this CL; it hasn't been a (large) problem beyond Android so far. Other platforms aggressively free mapped memory as we switch tabs, and that seems to be enough.

https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w...
content/renderer/render_widget.cc:699: // Assuming a two frame deep pipeline.
Between what and what?

https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w...
content/renderer/render_widget.cc:700: const size_t kMsPerFrame = 16;
What is this number?
https://codereview.chromium.org/22900018/diff/65001/cc/resources/pixel_buffer... File cc/resources/pixel_buffer_raster_worker_pool.cc (right): https://codereview.chromium.org/22900018/diff/65001/cc/resources/pixel_buffer... cc/resources/pixel_buffer_raster_worker_pool.cc:114: // For unit tests: On 2013/08/29 20:10:28, danakj wrote: > // Software compositing should not use this path in production. Just use a > default value when testing this path with software compositor. Done. https://codereview.chromium.org/22900018/diff/65001/cc/resources/pixel_buffer... cc/resources/pixel_buffer_raster_worker_pool.cc:115: // TODO(kaanb): make this a static const somewhere? On 2013/08/29 20:10:28, danakj wrote: > No idea where. If reveman@ has an idea let's do that, otherwise I think just > drop the TODO. Done. https://codereview.chromium.org/22900018/diff/65001/cc/resources/tile_manager.h File cc/resources/tile_manager.h (right): https://codereview.chromium.org/22900018/diff/65001/cc/resources/tile_manager... cc/resources/tile_manager.h:56: ContextProvider* context_provider); On 2013/08/29 21:06:36, David Reveman wrote: > hm, I find it awkward that we pass around a ContextProvider pointer here and in > PBRWP code that is otherwise resource type agnostic. I would prefer if you > turned this into a "size_t max_transfer_buffer_usage_bytes" at the tile > management level and removed the conditional in PBRWP. Even if it wouldn't > change the value used, I think it will move the decision about what value to use > for software compositing and testing to a more appropriate place. Done. https://codereview.chromium.org/22900018/diff/65001/content/renderer/gpu/rend... File content/renderer/gpu/render_widget_compositor.cc (right): https://codereview.chromium.org/22900018/diff/65001/content/renderer/gpu/rend... content/renderer/gpu/render_widget_compositor.cc:11: #include "base/android/sys_utils.h" On 2013/08/29 20:10:28, danakj wrote: > Not needed? Done. 
https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w... File content/renderer/render_widget.cc (right): https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w... content/renderer/render_widget.cc:682: // If we raster too fast we become upload bound, and pending On 2013/08/29 20:10:28, danakj wrote: > Any reason to do this here instead of inside CreateGraphicsContext3D? Done. https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w... content/renderer/render_widget.cc:695: // For reference Chromebook Pixel can upload 1MB in about 0.5ms. On 2013/08/29 21:58:19, piman wrote: > This affects more than the Pixel. It affects the context on all non-android > devices (including desktop). > I feel uncomfortable limiting the mapped memory everywhere with this CL, it > hasn't been a (large) problem beyond Android so far. Other platforms > aggressively free mapped memory as we switch tabs, and that seems to be enough. Please note that this constant also eventually makes it to the PixelBufferRasterWorkerPool. I added an ifdef to WGC3DCBI to only set the GL mapped memory limit for Android. https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w... content/renderer/render_widget.cc:699: // Assuming a two frame deep pipeline. On 2013/08/29 21:58:19, piman wrote: > Between what and what? Done. https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w... content/renderer/render_widget.cc:700: const size_t kMsPerFrame = 16; On 2013/08/29 21:58:19, piman wrote: > What is this number? Done. https://codereview.chromium.org/22900018/diff/65001/content/renderer/render_w... content/renderer/render_widget.cc:701: const size_t max_transfer_buffer_usage_bytes = On 2013/08/29 20:10:28, danakj wrote: > nit: no const, or kMaxTransferBufferUsageBytes. Done.
https://codereview.chromium.org/22900018/diff/75001/content/common/gpu/client...
File content/common/gpu/client/webgraphicscontext3d_command_buffer_impl.cc (right):

https://codereview.chromium.org/22900018/diff/75001/content/common/gpu/client...
content/common/gpu/client/webgraphicscontext3d_command_buffer_impl.cc:487: #endif
noooo!

This is taking the problem upside down.

You want to decide the amount of uploads the compositor is going to do per frame, and from that, size the command buffer so that it's at least big enough that you don't have to stall.

So, instead of setting the limit on the context, then extracting that value in the compositor and then ignoring the limit on !ANDROID, why don't we:
- decide the amount of uploads per frame, and pass that explicitly to the compositor
- figure out the limit in the context. On Android we may want it tight because we're memory constrained, so set the limit to kMaxTransferBufferUsageBytes. On other platforms, where we're not constrained, don't set the limit (or set it to a larger value) so that we don't risk stalling.

In either case, this decision should be made in the embedder (RenderWidget, so be it), not as an ifdef that affects all contexts for no good reason.
Ok, does the following make sense to everyone:

I pass the amount of uploads per frame to RenderWidgetCompositor::Create(), then it's set in a field in LayerTreeSettings and makes its way to the PBRWP.

In render_widget.cc we compute the mapped_memory_limit based on the amount of uploads, and ifdef(ANDROID) we pass the amount of uploads as the limit, otherwise we pass no limit.

On Thu, Aug 29, 2013 at 8:31 PM, <piman@chromium.org> wrote:
> https://codereview.chromium.org/22900018/diff/75001/content/common/gpu/client/webgraphicscontext3d_command_buffer_impl.cc
> content/common/gpu/client/webgraphicscontext3d_command_buffer_impl.cc:487: #endif
> noooo!
>
> This is taking the problem upside down.
>
> You want to decide the amount of uploads the compositor is going to do
> per frame, and from that, size the command buffer so that it's at least
> big enough so that you don't have to stall.
> [...]
On Fri, Aug 30, 2013 at 2:58 PM, Kaan Baloglu <kaanb@chromium.org> wrote:
> Ok, does the following make sense to everyone:
>
> I pass the amount of uploads per frame to RenderWidgetCompositor::Create(),
> then it's set in a field in LayerTreeSettings and makes its way to the PBRWP.

Is uploads per frame a count of uploads? PBRWP uses a byte limit, not an upload count, to determine if it should upload. What does an upload count mean here? Based on an assumed tile size?

> In render_widget.cc we compute the mapped_memory_limit based on the amount
> of uploads, and ifdef(ANDROID) we pass the amount of uploads as the limit,
> otherwise we pass no limit.
> [...]
I meant a byte limit similar to max_transfer_buffer_usage_bytes.

On Fri, Aug 30, 2013 at 12:08 PM, Dana Jansens <danakj@chromium.org> wrote:
> Is uploads per frame a count of uploads? PBRWP uses a byte limit, not an
> upload count, to determine if it should upload. What does an upload count
> mean here? Based on an assumed tile size?
> [...]
On Thu, Aug 29, 2013 at 11:31 PM, <piman@chromium.org> wrote:
> You want to decide the amount of uploads the compositor is going to do
> per frame, and from that, size the command buffer so that it's at least
> big enough so that you don't have to stall.

What if you're computing how much you want to allow the context to upload at a time, and the compositor looks at that and doesn't upload more than will be useful?

Is your main objection the ifdef code in the context? If the RenderWidget sets a value on the context on Android, and the context just uses a default unlimited value otherwise, is that okay?

We got here because I was against sending a value into the compositor and into the context and hoping they match; it seemed like excessive plumbing, redundancy, and easier to mess up.

> In either case, this decision should be made in the embedder
> (RenderWidget, so be it), not as an ifdef that affects all contexts for
> no good reason.
From offline convo with piman@, how about this:

The context has a default with no limit. On android, RenderWidget sets a limit on the context.

In the compositor, we use a limit for uploads that is MAX{limit from context, some hard coded default limit}. That'd give us the same behaviour in the compositor but add the limit to the context3d.

SG?
On Fri, Aug 30, 2013 at 1:41 PM, Dana Jansens <danakj@chromium.org> wrote:
> From offline convo with piman@, how about this:
>
> The context has a default with no limit. On android, RenderWidget sets a
> limit on the context.
>
> In the compositor, we use a limit for uploads that is MAX{limit from
> context, some hard coded default limit}. That'd give us the same behaviour
> in the compositor but add the limit to the context3d.
>
> SG?

SGTM, with s/MAX/MIN/

Antoine
On Fri, Aug 30, 2013 at 5:46 PM, Antoine Labour <piman@chromium.org> wrote:
> SGTM, with s/MAX/MIN/

Oops :) Ya, thanks.
Done, PTAL.
Just nits from me. -> reveman@,piman@ https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... File cc/resources/pixel_buffer_raster_worker_pool.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... cc/resources/pixel_buffer_raster_worker_pool.cc:7: #include <limits> not needed https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... File cc/resources/pixel_buffer_raster_worker_pool.h (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... cc/resources/pixel_buffer_raster_worker_pool.h:16: class ContextProvider; not needed https://codereview.chromium.org/22900018/diff/73001/cc/resources/raster_worke... File cc/resources/raster_worker_pool_unittest.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/raster_worke... cc/resources/raster_worker_pool_unittest.cc:107: resource_provider(), 1, NULL); this is a size_t now right? https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.cc File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.cc:861: size_t TileManager::GetMaxBytesPendingUpload( this could be a file-static method in anonymous namespace. https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.h File cc/resources/tile_manager.h (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.h:155: static size_t GetMaxBytesPendingUpload(ContextProvider* context_provider); should this have a forward declaration in this file?
https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.cc File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.cc:861: size_t TileManager::GetMaxBytesPendingUpload( Can we move this logic to the OutputSurface? And consistently refer to it as TransferBufferUsage?
https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.cc File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.cc:861: size_t TileManager::GetMaxBytesPendingUpload( On 2013/09/03 18:02:03, David Reveman wrote: > Can we move this logic to the OutputSurface? And consistently refer to it as > TransferBufferUsage? -1 to putting more code in output surface. that should not contain cc-internal logic.
https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.cc File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.cc:861: size_t TileManager::GetMaxBytesPendingUpload( On 2013/09/03 18:06:04, danakj wrote: > On 2013/09/03 18:02:03, David Reveman wrote: > > Can we move this logic to the OutputSurface? And consistently refer to it as > > TransferBufferUsage? > > -1 to putting more code in output surface. that should not contain cc-internal > logic. I don't want the TileManager to be aware of if software resources are used. The logic doesn't have to move to OutputSurface, just some place where we're not resource type agnostic like TileManager.
https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.cc File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.cc:861: size_t TileManager::GetMaxBytesPendingUpload( On 2013/09/03 18:37:24, David Reveman wrote: > On 2013/09/03 18:06:04, danakj wrote: > > On 2013/09/03 18:02:03, David Reveman wrote: > > > Can we move this logic to the OutputSurface? And consistently refer to it as > > > TransferBufferUsage? > > > > -1 to putting more code in output surface. that should not contain cc-internal > > logic. > > I don't want the TileManager to be aware of if software resources are used. The > logic doesn't have to move to OutputSurface, just some place where we're not > resource type agnostic like TileManager. So, this limit is the max upload that the tile manager is going to use. I'm not sure where would make more sense? Do you have any ideas? The next step out from here is LayerTreeHostImpl::CreateAndSetTileManager()
On 2013/09/03 19:06:30, danakj wrote: > https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.cc > File cc/resources/tile_manager.cc (right): > > https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... > cc/resources/tile_manager.cc:861: size_t TileManager::GetMaxBytesPendingUpload( > On 2013/09/03 18:37:24, David Reveman wrote: > > On 2013/09/03 18:06:04, danakj wrote: > > > On 2013/09/03 18:02:03, David Reveman wrote: > > > > Can we move this logic to the OutputSurface? And consistently refer to it > as > > > > TransferBufferUsage? > > > > > > -1 to putting more code in output surface. that should not contain > cc-internal > > > logic. > > > > I don't want the TileManager to be aware of if software resources are used. > The > > logic doesn't have to move to OutputSurface, just some place where we're not > > resource type agnostic like TileManager. > > So, this limit is the max upload that the tile manager is going to use. I'm not > sure where would make more sense? Do you have any ideas? The next step out from > here is LayerTreeHostImpl::CreateAndSetTileManager() Yes, that's much better than TileManager. We already make "if (software) bla; else blaha" decisions in LTHI.
https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... File cc/resources/pixel_buffer_raster_worker_pool.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... cc/resources/pixel_buffer_raster_worker_pool.cc:7: #include <limits> On 2013/09/03 17:18:57, danakj wrote: > not needed Done. https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... File cc/resources/pixel_buffer_raster_worker_pool.h (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/pixel_buffer... cc/resources/pixel_buffer_raster_worker_pool.h:16: class ContextProvider; On 2013/09/03 17:18:57, danakj wrote: > not needed Done. https://codereview.chromium.org/22900018/diff/73001/cc/resources/raster_worke... File cc/resources/raster_worker_pool_unittest.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/raster_worke... cc/resources/raster_worker_pool_unittest.cc:107: resource_provider(), 1, NULL); On 2013/09/03 17:18:57, danakj wrote: > this is a size_t now right? Done. https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.cc File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.cc:861: size_t TileManager::GetMaxBytesPendingUpload( On 2013/09/03 19:06:31, danakj wrote: > On 2013/09/03 18:37:24, David Reveman wrote: > > On 2013/09/03 18:06:04, danakj wrote: > > > On 2013/09/03 18:02:03, David Reveman wrote: > > > > Can we move this logic to the OutputSurface? And consistently refer to it > as > > > > TransferBufferUsage? > > > > > > -1 to putting more code in output surface. that should not contain > cc-internal > > > logic. > > > > I don't want the TileManager to be aware of if software resources are used. > The > > logic doesn't have to move to OutputSurface, just some place where we're not > > resource type agnostic like TileManager. 
> > So, this limit is the max upload that the tile manager is going to use. I'm not > sure where would make more sense? Do you have any ideas? The next step out from > here is LayerTreeHostImpl::CreateAndSetTileManager() Moved to the anonymous namespace in LTHI https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager.h File cc/resources/tile_manager.h (right): https://codereview.chromium.org/22900018/diff/73001/cc/resources/tile_manager... cc/resources/tile_manager.h:155: static size_t GetMaxBytesPendingUpload(ContextProvider* context_provider); On 2013/09/03 17:18:57, danakj wrote: > should this have a forward declaration in this file? Moved to anonymous namespace in LTHI
Seems fine to me, the comment's not quite right anymore though. https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:85: // Software compositing should not use this path in production. Just use a s/path/value/ https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:86: // default value when testing this path with software compositor. ...when testing uploads with the software compositor.
https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... File cc/resources/pixel_buffer_raster_worker_pool.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.cc:98: size_t max_bytes_pending_upload) nit: max_transfer_buffer_usage_bytes https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.cc:103: max_bytes_pending_upload_(max_bytes_pending_upload), I think this a good place to take the general concept of transfer buffer usage and converting it to "bytes pending upload" which is PBRWP specific: max_bytes_pending_upload_(max_transfer_buffer_usage_bytes) https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... File cc/resources/pixel_buffer_raster_worker_pool.h (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.h:23: size_t max_bytes_pending_upload) { nit: max_transfer_buffer_usage_bytes https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.h:43: size_t max_bytes_pending_upload); nit: max_transfer_buffer_usage_bytes https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manage... File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manage... cc/resources/tile_manager.cc:128: size_t max_bytes_pending_upload) { nit: max_transfer_buffer_usage_bytes https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manager.h File cc/resources/tile_manager.h (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manage... cc/resources/tile_manager.h:56: size_t max_bytes_pending_upload); max_transfer_buffer_usage_bytes https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... 
File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) { is this really the correct way to determine the limit? What if output_surface->ForcedDrawToSoftwareDevice() is true? Should it be based on the return value of GetDrawMode instead? https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:76: const size_t kMaxBytesUploadedPerMs = 1024 * 1024 * 2; nit: kMaxTransferBufferUsageBytes here and below
https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) { On 2013/09/03 22:01:50, David Reveman wrote: > is this really the correct way to determine the limit? What if > output_surface->ForcedDrawToSoftwareDevice() is true? Should it be based on the > return value of GetDrawMode instead? AIUI This limit is how much the compositor should upload at a time, uploading only happens if you have a context3d. The value is ignored if you are doing software compositing, cuz you're not uploading. Are you proposing that this function is returning something different?
https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... File cc/resources/pixel_buffer_raster_worker_pool.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.cc:98: size_t max_bytes_pending_upload) On 2013/09/03 22:01:50, David Reveman wrote: > nit: max_transfer_buffer_usage_bytes Done. https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.cc:103: max_bytes_pending_upload_(max_bytes_pending_upload), On 2013/09/03 22:01:50, David Reveman wrote: > I think this a good place to take the general concept of transfer buffer usage > and converting it to "bytes pending upload" which is PBRWP specific: > max_bytes_pending_upload_(max_transfer_buffer_usage_bytes) Done. https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... File cc/resources/pixel_buffer_raster_worker_pool.h (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.h:23: size_t max_bytes_pending_upload) { On 2013/09/03 22:01:50, David Reveman wrote: > nit: max_transfer_buffer_usage_bytes Done. https://codereview.chromium.org/22900018/diff/105001/cc/resources/pixel_buffe... cc/resources/pixel_buffer_raster_worker_pool.h:43: size_t max_bytes_pending_upload); On 2013/09/03 22:01:50, David Reveman wrote: > nit: max_transfer_buffer_usage_bytes Done. https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manage... File cc/resources/tile_manager.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manage... cc/resources/tile_manager.cc:128: size_t max_bytes_pending_upload) { On 2013/09/03 22:01:50, David Reveman wrote: > nit: max_transfer_buffer_usage_bytes Done. 
https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manager.h File cc/resources/tile_manager.h (right): https://codereview.chromium.org/22900018/diff/105001/cc/resources/tile_manage... cc/resources/tile_manager.h:56: size_t max_bytes_pending_upload); On 2013/09/03 22:01:50, David Reveman wrote: > max_transfer_buffer_usage_bytes Done. https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:76: const size_t kMaxBytesUploadedPerMs = 1024 * 1024 * 2; On 2013/09/03 22:01:50, David Reveman wrote: > nit: kMaxTransferBufferUsageBytes here and below Done. https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:85: // Software compositing should not use this path in production. Just use a On 2013/09/03 21:48:29, danakj wrote: > s/path/value/ Done. https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:86: // default value when testing this path with software compositor. On 2013/09/03 21:48:29, danakj wrote: > ...when testing uploads with the software compositor. Done.
ping! does the latest patch look good?
https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) { On 2013/09/03 22:19:25, danakj wrote: > On 2013/09/03 22:01:50, David Reveman wrote: > > is this really the correct way to determine the limit? What if > > output_surface->ForcedDrawToSoftwareDevice() is true? Should it be based on > the > > return value of GetDrawMode instead? > > AIUI This limit is how much the compositor should upload at a time, uploading > only happens if you have a context3d. The value is ignored if you are doing > software compositing, cuz you're not uploading. Are you proposing that this > function is returning something different? we should always be using std::numeric_limits<size_t>::max() with sw compositing. that doesn't seem to be the case at the moment unless I'm missing something?
https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right):

https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) {
On 2013/09/04 23:22:15, David Reveman wrote:
> On 2013/09/03 22:19:25, danakj wrote:
> > On 2013/09/03 22:01:50, David Reveman wrote:
> > > is this really the correct way to determine the limit? What if
> > > output_surface->ForcedDrawToSoftwareDevice() is true? Should it be based on the
> > > return value of GetDrawMode instead?
> >
> > AIUI This limit is how much the compositor should upload at a time, uploading
> > only happens if you have a context3d. The value is ignored if you are doing
> > software compositing, cuz you're not uploading. Are you proposing that this
> > function is returning something different?
>
> we should always be using std::numeric_limits<size_t>::max() with sw
> compositing. that doesn't seem to be the case at the moment unless I'm missing
> something?

We don't recreate the TileManager when we draw a software resourceless frame in the middle of a bunch of GL frames. Maybe that's the confusion. OutputSurface can change its ForceDrawToSoftwareDevice on/off at the time of each draw in cc. So that is independent from and can't be used to determine how to set up the TileManager.

It's kinda odd, but on webview I believe we start in SOFTWARE mode, with a zero budget. Then move to HARDWARE mode, with RESOURCELESS_SOFTWARE frames intermixed where we create a SoftwareRenderer on the fly for each frame.

So I think this does exactly what you're looking for?
kaan, how about a test for this? Like a layer tree host test that: 1) creates an OutputSurface with a context3d. Sees the context caps value in the PBRWP. 2) loses the OutputSurface, creates an OutputSurface with no context3d. Sees max in the PBRWP. 3) Does a deferred gl init on the OutputSurface. Sees the context caps value in the PBRWP.
https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right):

https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) {
On 2013/09/05 14:36:17, danakj wrote:
> We don't recreate the TileManager when we draw a software resourceless frame in
> the middle of a bunch of GL frames. Maybe that's the confusion. Output Surface
> can change its ForceDrawToSoftwareDevice on/off at the time of each draw in cc.
> So that is independent from and can't be used to determine how to set up the
> TileManager.
>
> So I think this does exactly what you're looking for?

Hm, we're still using the tile manager to initialize tiles in software mode, right? In that case, using a limit less than numeric_limits<size_t>::max() will affect the behavior of PBRWP.

I generally find it confusing that the code and comments here seem to assume that "context_provider != NULL" means not using software while that is not the case. I think some renaming is required at a minimum.
https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) { On 2013/09/05 15:32:14, David Reveman wrote: > [earlier quoted thread trimmed]
> > Hm, we're still using the tile manager to initialize tiles in software mode, > right? In that case, using a limit less than numeric_limits<size_t>::max() will > effect the behavior of PBRWP. No, resourceless software mode means there are no resources, it just draws from the SkPictures. There's no interaction with tile manager at all AFAIK. > I generally find it confusing that the code and comments here seem to assume > that "context_provider != NULL" means not using software while that is not the > case. I think some renaming is required at a minimum. The whole situation is complicated and hard to understand. I want to improve it. https://docs.google.com/a/chromium.org/document/d/1YcEU09zkVvg86C05MbLjeKFLIG... context_provider != NULL means using GL resources. context_provider == NULL means using software resources. This is currently so for the life of the TileManager/Renderer/ResourceProvider.
https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:73: size_t GetMaxBytesPendingUpload(cc::ContextProvider* context_provider) { GetMaxTransferBufferUsageBytes instead please and update constants and comments below https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) { On 2013/09/05 15:35:07, danakj wrote: > [earlier quoted thread trimmed] > > No, resourceless software mode means there are no resources, it just draws from > the SkPictures. There's no interaction with tile manager at all AFAIK. > > context_provider != NULL means using GL resources. context_provider == NULL > means using software resources. This is currently so for the life of the > TileManager/Renderer/ResourceProvider. Ok, got it. nit: "if (context_provider)" instead
To add a unittest we'll need to change the RasterWorkerPool API to add a virtual method to return the MaxTransferUsageBytes/MaxBytesPendingUpload value. ImageRasterWorkerPool could return 0 for now. reveman@ how does this sound? https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.cc (right): https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:74: if (context_provider != NULL) { On 2013/09/05 17:52:10, David Reveman wrote: > [earlier quoted thread trimmed] > > nit: "if (context_provider)" instead Done. https://codereview.chromium.org/22900018/diff/105001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.cc:76: const size_t kMaxBytesUploadedPerMs = 1024 * 1024 * 2; On 2013/09/03 23:09:29, kaanb wrote: > On 2013/09/03 22:01:50, David Reveman wrote: > > nit: kMaxTransferBufferUsageBytes here and below > > Done.
On 2013/09/05 19:26:56, kaanb wrote: > To add a unittest we'll need to change the RasterWorkerPool API to add a virtual > method to return the MaxTransferUsageBytes/MaxBytesPendingUpload value. > ImageRasterWorkerPool could return 0 for now. reveman@ how does this sound? Not sure what you mean. Generally, adding a unit test that checks whether the return value of a getter is what you expect doesn't seem very useful. It would be much better if you could construct a unit test with a fake context3d that checks whether the RWP impl actually stays within the transfer buffer memory limit. [earlier quoted inline-comment thread trimmed]
How do we check if the impl stays within the transfer buffer memory limit? By checking the task queue size? On Thu, Sep 5, 2013 at 12:51 PM, <reveman@chromium.org> wrote: > On 2013/09/05 19:26:56, kaanb wrote: >> To add a unittest we'll need to change the RasterWorkerPool API to add a >> virtual method to return the MaxTransferUsageBytes/MaxBytesPendingUpload value. >> ImageRasterWorkerPool could return 0 for now. reveman@ how does this sound? > > Not sure what you mean. Generally adding a unit test that check if the return > value of a getter is what you expect doesn't seem very useful. It would be much > better if you could construct a unit test with a fake context3d that check if > RWP impl actually stays within the transfer buffer memory limit. > > [rest of quoted thread trimmed]
On 2013/09/05 20:48:26, kaanb wrote: > How do we check if the impl stays within the transfer buffer memory limit? > By checking the task queue size? The transfer buffer memory used is the sum of all buffer storage allocated for the GL_PIXEL_UNPACK_TRANSFER_BUFFER_CHROMIUM target using WebGraphicsContext3D::bufferData(). I think TestWebGraphicsContext3D has most of what you need already. [rest of quoted thread trimmed]
On 2013/09/05 21:08:06, David Reveman wrote: > The transfer buffer memory used is the sum of all buffer storage allocated for > GL_PIXEL_UNPACK_TRANSFER_BUFFER_CHROMIUM target using > WebGraphicsContext3D::bufferData(). I think TestWebGraphicsContext3D has most of > what you need already. > [rest of quoted thread trimmed] Added the unittest, PTAL.
https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_unittest.cc (right): https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_unittest.cc:4489: // The default MaxTransferBufferUsageBytes value is 64MB. Can you assert this value somewhere in the test so we know the test is testing what it claims?
https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_unittest.cc (right): https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_unittest.cc:4489: // The default MaxTransferBufferUsageBytes value is 64MB. On 2013/09/06 21:48:17, danakj wrote: > Can you assert this value somewhere in the test so we know the test is testing what it claims? If I add a getter to TileManager, yes, but reveman wasn't in favor of it. That's why I'm using the transfer buffer memory allocated in the context as a proxy for it.
https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_unittest.cc (right): https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_unittest.cc:4489: // The default MaxTransferBufferUsageBytes value is 64MB. On 2013/09/06 21:58:45, kaanb wrote: > On 2013/09/06 21:48:17, danakj wrote: > > Can you assert this value somewhere in the test so we know the test is testing what it claims? > If I add a getter to TileManager, yes, but reveman wasn't in favor of it. That's why I'm using the transfer buffer memory allocated in the context as a proxy for it. What I mean is the limit that we're passing into the tile manager. What if we changed the limit to 32MB one day and broke the PBRWP so we were using 2x the limit? This test would still pass. How about overriding the caps limit on the Context3d (by overriding CreateOutputSurface())? And verify we're using some value "close" to the limit, for two different limits on the context3d?
https://codereview.chromium.org/22900018/diff/130001/cc/debug/test_web_graphi... File cc/debug/test_web_graphics_context_3d.cc (right): https://codereview.chromium.org/22900018/diff/130001/cc/debug/test_web_graphi... cc/debug/test_web_graphics_context_3d.cc:490: transfer_buffer_memory_used_bytes_ += size; This is not going to do the right thing if the buffer is redefined. I recommend removing the |transfer_buffer_memory_used_bytes_| state and tracking the size of each buffer in the "Buffer" struct. GetTransferBufferMemoryUsedBytes() can just iterate over all buffers. https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.h (right): https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.h:425: ContextProvider* context_provider); nit: I would change the order of these parameters to: ResourceProvider* resource_provider, ContextProvider* context_provider, bool using_map_image); https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_unittest.cc (right): https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_unittest.cc:4489: // The default MaxTransferBufferUsageBytes value is 64MB. On 2013/09/06 22:01:13, danakj wrote: > On 2013/09/06 21:58:45, kaanb wrote: > > On 2013/09/06 21:48:17, danakj wrote: > > > Can you assert this value somewhere in the test so we know the test is testing what it claims? > > If I add a getter to TileManager, yes, but reveman wasn't in favor of it. That's why I'm using the transfer buffer memory allocated in the context as a proxy for it. > What I mean is the limit that we're passing into the tile manager. What if we changed the limit to 32MB one day and broke the PBRWP so we were using 2x the limit? This test would still pass.
> > How about overriding the caps limit on the Context3d (by overriding > CreateOutputSurface())? And verify we're using some value "close" to the limit, > for two different limits on the context3d? +1
https://codereview.chromium.org/22900018/diff/130001/cc/debug/test_web_graphi... File cc/debug/test_web_graphics_context_3d.cc (right): https://codereview.chromium.org/22900018/diff/130001/cc/debug/test_web_graphi... cc/debug/test_web_graphics_context_3d.cc:490: transfer_buffer_memory_used_bytes_ += size; On 2013/09/06 22:28:27, David Reveman wrote: > This is not going to do the right thing if the buffer is redefined. > > I recommend removing the |transfer_buffer_memory_used_bytes_| state and tracking the size of each buffer in the "Buffer" struct. GetTransferBufferMemoryUsedBytes() can just iterate over all buffers. Done. After this change the number returned by GetTransferBufferMemoryUsedBytes() went down to 2MB from 40MB. https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_impl.h (right): https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_impl.h:425: ContextProvider* context_provider); On 2013/09/06 22:28:27, David Reveman wrote: > nit: I would change the order of these parameters to: > ResourceProvider* resource_provider, > ContextProvider* context_provider, > bool using_map_image); Done. https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... File cc/trees/layer_tree_host_unittest.cc (right): https://codereview.chromium.org/22900018/diff/130001/cc/trees/layer_tree_host... cc/trees/layer_tree_host_unittest.cc:4489: // The default MaxTransferBufferUsageBytes value is 64MB. On 2013/09/06 22:28:27, David Reveman wrote: > On 2013/09/06 22:01:13, danakj wrote: > > On 2013/09/06 21:58:45, kaanb wrote: > > > On 2013/09/06 21:48:17, danakj wrote: > > > > Can you assert this value somewhere in the test so we know the test is testing what it claims? > > > If I add a getter to TileManager, yes, but reveman wasn't in favor of it. That's why I'm using the transfer buffer memory allocated in the context as a proxy for it. > > What I mean is the limit that we're passing into the tile manager. What if we changed the limit to 32MB one day and broke the PBRWP so we were using 2x the limit? This test would still pass. > > How about overriding the caps limit on the Context3d (by overriding CreateOutputSurface())? And verify we're using some value "close" to the limit, for two different limits on the context3d? > +1 Done.
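The per-buffer accounting reveman suggests (size stored on the Buffer struct, total computed by iterating over live buffers) can be sketched roughly like this. This is a minimal standalone sketch, not the actual TestWebGraphicsContext3D code: the class name, map type, and method signatures here are simplified stand-ins.

```cpp
#include <cstddef>
#include <map>

// Hypothetical stand-in for the Buffer struct: each buffer remembers its
// own size, so redefining a buffer replaces the old size instead of
// accumulating into a running counter (the bug with += size above).
struct Buffer {
  size_t size = 0;
};

class FakeContext {
 public:
  // Models bufferData(): records the latest size on the buffer itself.
  void BufferData(unsigned id, size_t size) { buffers_[id].size = size; }

  void DeleteBuffer(unsigned id) { buffers_.erase(id); }

  // No per-call bookkeeping; just sum over all live buffers on demand.
  size_t GetTransferBufferMemoryUsedBytes() const {
    size_t total = 0;
    for (const auto& entry : buffers_)
      total += entry.second.size;
    return total;
  }

 private:
  std::map<unsigned, Buffer> buffers_;
};
```

Redefining the same buffer id twice now counts its size once, which is consistent with the drop from 40MB to 2MB that kaanb observed after the change.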
cc/ lgtm https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... File cc/debug/test_web_graphics_context_3d.cc (right): https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... cc/debug/test_web_graphics_context_3d.cc:609: base::ScopedPtrHashMap<unsigned, Buffer>::iterator itr = buffers.begin(); nit: |it| or |iter| is more commonly used as a variable name for iterators https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... File cc/debug/test_web_graphics_context_3d.h (right): https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... cc/debug/test_web_graphics_context_3d.h:208: size_t transfer_buffer_memory_used_bytes() const; nit: GetMaxTransferBufferUsageBytes as this is no longer a simple getter
piman and jamesr, can you take a look at content/common/gpu and content/renderer, respectively? https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... File cc/debug/test_web_graphics_context_3d.cc (right): https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... cc/debug/test_web_graphics_context_3d.cc:609: base::ScopedPtrHashMap<unsigned, Buffer>::iterator itr = buffers.begin(); On 2013/09/09 16:40:47, David Reveman wrote: > nit: |it| or |iter| is more commonly used as a variable name for iterators Done. https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... File cc/debug/test_web_graphics_context_3d.h (right): https://codereview.chromium.org/22900018/diff/104013/cc/debug/test_web_graphi... cc/debug/test_web_graphics_context_3d.h:208: size_t transfer_buffer_memory_used_bytes() const; On 2013/09/09 16:40:47, David Reveman wrote: > nit: GetMaxTransferBufferUsageBytes as this is no longer a simple getter Done.
Given that this review is 70+ comments long I imagine this has been covered, but can you please explain why you're adding all of these hardcoded values to render_widget.cc and why they belong there, or point to where this was discussed? It's really gross and fragile to pile logic into there. https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... File content/renderer/render_widget.cc (right): https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... content/renderer/render_widget.cc:2541: const size_t kMillisecondsPerFrame = 16; not all displays are 60Hz and 60Hz isn't 16ms https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... content/renderer/render_widget.cc:2542: // Assuming a two frame deep pipeline between the CPU and the GPU. yuck! this code doesn't know anything about what the pipeline depth is
On 2013/09/09 18:34:09, jamesr wrote: > Given that this review is 70+ comments long I imagine this has been covered, but > can you please explain why you're adding all of these hardcoded values to > render_widget.cc and why they belong there, or point to where this was > discussed? It's really gross and fragile to pile logic into there. > > https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... > File content/renderer/render_widget.cc (right): > > https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... > content/renderer/render_widget.cc:2541: const size_t kMillisecondsPerFrame = 16; > not all displays are 60Hz and 60Hz isn't 16ms > > https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... > content/renderer/render_widget.cc:2542: // Assuming a two frame deep pipeline > between the CPU and the GPU. > yuck! this code doesn't know anything about what the pipeline depth is danakj@ and piman@ discussed this approach offline, here's the relevant comment: https://codereview.chromium.org/22900018/#msg40 The code you're pointing out has been moved from PixelBufferRasterWorkerPool constructor to RenderWidget. I created named constants to make the calculations more readable.
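For reference, the arithmetic behind those named constants works out to the 64MB default mentioned earlier in the thread. This is a back-of-the-envelope sketch only; the 16ms frame time, two-frame pipeline depth, and ~2MB/ms upload rate are exactly the assumptions jamesr is objecting to, not universal facts, and the constant names are illustrative rather than the ones in the patch.

```cpp
#include <cstddef>

// All three inputs are assumptions from the review discussion
// (60Hz display, two-frame CPU->GPU pipeline, ~2MB/ms upload rate).
constexpr size_t kMaxBytesUploadedPerMs = 2 * 1024 * 1024;
constexpr size_t kMillisecondsPerFrame = 16;   // assumes 60Hz, rounded down
constexpr size_t kPipelineDepthInFrames = 2;

constexpr size_t MaxBytesPendingUpload() {
  // 2 frames * 16 ms/frame * 2 MB/ms = 64 MB.
  return kPipelineDepthInFrames * kMillisecondsPerFrame *
         kMaxBytesUploadedPerMs;
}
```

This matches reveman's quoted non-Android expression `16 * 2 * kMaxBytesUploadedPerMs` and the 64MB default referenced in the unittest comment at layer_tree_host_unittest.cc:4489.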
This seems very ugly but reluctantly lgtm once you fix the variable names https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... File content/renderer/render_widget.cc (right): https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... content/renderer/render_widget.cc:2539: const size_t kMaxBytesUploadedPerMs = (2 * 1024 * 1024) / (5 * divider); These are not compile-time constants, they're computed based on runtime data, so they shouldn't be called kFooBar
https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... File content/renderer/render_widget.cc (right): https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... content/renderer/render_widget.cc:2550: WebGraphicsContext3DCommandBufferImpl::kNoLimit; Oh, is this changing the behavior on non-Android?! We currently have this on non-Android:

  const size_t kMaxBytesUploadedPerMs = 1024 * 1024 * 2;
  max_bytes_pending_upload_ = 16 * 2 * kMaxBytesUploadedPerMs;

Looks like that is no longer the case. Btw, could you create a bug related to removing these constants and refer to it in a comment here. We might remove async uploads completely before we get to it, but it's still nice to have a tracking bug.
LGTM for content/common/gpu
https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... File content/renderer/render_widget.cc (right): https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... content/renderer/render_widget.cc:2539: const size_t kMaxBytesUploadedPerMs = (2 * 1024 * 1024) / (5 * divider); On 2013/09/09 19:35:37, jamesr wrote: > These are not compile-time constants, they're computed based on runtime data, so they shouldn't be called kFooBar Done. https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... content/renderer/render_widget.cc:2550: WebGraphicsContext3DCommandBufferImpl::kNoLimit; On 2013/09/09 19:51:20, David Reveman wrote: > Oh, is this changing the behavior on non-Android?! > > we currently have this on non-android: > const size_t kMaxBytesUploadedPerMs = 1024 * 1024 * 2; > max_bytes_pending_upload_ = 16 * 2 * kMaxBytesUploadedPerMs; > > looks like that is no longer the case. > > Btw, could you create a bug related to removing these constants and refer to it in a comment here. We might remove async uploads completely before we get to it but still nice to have a tracking bug. No, it doesn't change the behavior on non-Android. kNoLimit tells MappedMemoryManager to ignore the limit, and in GetMaxTransferBufferUsageBytes in LTHI we choose the current max_bytes_pending_upload_ for TileManager and PBRWP. (context_provider->ContextCapabilities() reports std::numeric_limits<size_t>::max() if the context is initialized with kNoLimit, and we take the min of std::numeric_limits<size_t>::max() and the current max_bytes_pending_upload_ for TileManager/PBRWP, which is max_bytes_pending_upload_.)
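kaanb's explanation of why non-Android behavior is unchanged reduces to taking the min of two limits. A hypothetical simplified sketch; in the real patch the first value comes from the context capabilities (ContextProvider::ContextCapabilities()), and the function name and signature here are stand-ins, not the actual LTHI code:

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>

// With kNoLimit, the context reports an "unlimited" capability
// (numeric_limits<size_t>::max()), so the min() leaves the existing
// default untouched on non-Android; on Android the smaller
// mapped-memory reclaim limit wins.
size_t GetMaxTransferBufferUsageBytes(size_t context_limit_bytes,
                                      size_t default_limit_bytes) {
  return std::min(context_limit_bytes, default_limit_bytes);
}
```

For example, with the capability at numeric_limits<size_t>::max() the result is the existing default, matching the pre-patch non-Android behavior, while a 32MB Android reclaim limit would override a 64MB default.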
On 2013/09/09 21:04:35, kaanb wrote: > https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... > File content/renderer/render_widget.cc (right): > > https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... > content/renderer/render_widget.cc:2539: const size_t kMaxBytesUploadedPerMs = (2 > * 1024 * 1024) / (5 * divider); > On 2013/09/09 19:35:37, jamesr wrote: > > These are not compile-time constants, they're computed based on runtime data, > so > > they shouldn't be called kFooBar > > Done. > > https://codereview.chromium.org/22900018/diff/140001/content/renderer/render_... > content/renderer/render_widget.cc:2550: > WebGraphicsContext3DCommandBufferImpl::kNoLimit; > On 2013/09/09 19:51:20, David Reveman wrote: > > Oh, is this changing the behavior on non-Android?! > > > > we currently have this on non-android: > > const size_t kMaxBytesUploadedPerMs = 1024 * 1024 * 2; > > max_bytes_pending_upload_ = 16 * 2 * kMaxBytesUploadedPerMs; > > > > looks like that is no longer the case. > > > > Btw, could you create a bug related to removing these constants and refer to > it > > in a comment here. We might remove async uploads completely before we get to > it > > but still nice to have a tracking bug. > > No, it doesn't change the behavior on non-Android. kNoLimit tells > MappedMemoryManager to ignore the limit and in GetMaxTransferBufferUsageBytes in > LTHI we choose the current max_bytes_pending_upload_ for TileManager and PBRWP. > (context_provider->ContextCapabilities() is std::numeric_limits<size_t>::max() > if context is initialized with kNoLimit and we choose the min of > std::numeric_limit<size_t>::max() and the current max_bytes_pending_upload_ for > TileManager/PBRWP which is max_bytes_pending_upload_) Ok, got it. Thanks.
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/kaanb@chromium.org/22900018/94001
Message was sent while issue was closed.
Change committed as 222158