Description

CC: Tile-size cleanup.
Two tile-size cleanups in this patch:
- Fixes a bug where the minimum tile-height for GPU-raster
pages is 512 until any amount of zoom is applied (all
layers were being considered 'long-and-skinny' by the old
logic).
- Reserves default_tile_size/max_untiled_layer_size for
software-raster. GPU-raster now computes both of these,
leaving only tile-grid-size, which re-uses the setting.
That will be removed in my next patch.
BUG=365877
Committed: https://crrev.com/9ee9c855721329f1cccacedc66ab9690ee9440f8
Cr-Commit-Position: refs/heads/master@{#299027}
Patch Set 1 : #
Total comments: 3
Patch Set 2 : Match code/comment. #
Patch Set 3 : Add min tile height. #
Patch Set 4 : Remove tile-grid-size changes. #
Patch Set 5 : Add tests. #
Total comments: 4
Patch Set 6 : Address feedback. #
Patch Set 7 : Rebase. #
Total comments: 5
Patch Set 8 : Rewrite. #
Patch Set 9 : Comments. #
Patch Set 10 : Cleanup. #
Messages
Total messages: 39 (14 generated)
epenner@chromium.org changed reviewers: + danakj@chromium.org, enne@chromium.org, ernstm@chromium.org
Ptal. I had some cycles to wrap up outstanding bugs. I've taken the remaining suggestions from crbug.com/365877, plus one bug I found along the way. If there's more to be done let's discuss on the bug, but I think just having hardware-raster override everything works fine. Then we can just remove the default-tile-size settings someday when everything is GPU-rastered.
Patchset #1 (id:1) has been deleted
Patchset #1 (id:20001) has been deleted
> Tile-grid is in a different coordinate space, so it
> doesn't really make sense to use the raster tile size.

Doesn't it? It lets us tell if things are in a tile or not, so if it's larger than what we raster at a time, does it lose efficiency at culling and make us look at more draw ops per tile raster?

(Also, no tests?)
> Then we can just remove the default-tile-size settings someday
> when everything is GPU-rastered.

Unfortunately, I think this is a state that we will not be able to get to. It seems fairly certain we'll always need software raster.

> Tile-grid is in a different coordinate space, so it
> doesn't really make sense to use the raster tile size.

Tile grid has always been a bit unfortunately placed in the pipeline. It's a rasterization-time optimization that you have to make at record time, when you don't know the tile size or the final raster scale. I'm honestly not sure what the right size should be for tile grid.
https://codereview.chromium.org/605773004/diff/40001/cc/layers/picture_layer_...
File cc/layers/picture_layer_impl.cc (right):

https://codereview.chromium.org/605773004/diff/40001/cc/layers/picture_layer_...
cc/layers/picture_layer_impl.cc:685: int height = layer_tree_impl()->device_viewport_size().height() / 4;
Can you do something like std::min(1, layer_tree_impl()->device_viewport_size().height() / 4) here? Or maybe (...) + 1;? Or am I worried too much about viewport size being 3 pixels tall?
We should determine the default tile grid size by running benchmarks with different settings.

https://codereview.chromium.org/605773004/diff/40001/cc/layers/picture_layer_...
File cc/layers/picture_layer_impl.cc (right):

https://codereview.chromium.org/605773004/diff/40001/cc/layers/picture_layer_...
cc/layers/picture_layer_impl.cc:685: int height = layer_tree_impl()->device_viewport_size().height() / 4;
On 2014/10/07 17:49:47, vmpstr wrote:
> Can you do something like std::min(1,
> layer_tree_impl()->device_viewport_size().height() / 4) here? Or maybe (...) +
> 1;? Or am I worried too much about viewport size being 3 pixels tall?

How about rounding up: ((...)+3)/4.

https://codereview.chromium.org/605773004/diff/40001/cc/layers/picture_layer_...
cc/layers/picture_layer_impl.cc:689: max_untiled_content_size = gfx::Size(height, height);
This sets the max_untiled_content_size to height^2, but the comment above says something else?
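For reference, the two guards being discussed look roughly like the standalone sketch below. The helper names are illustrative and viewport_height stands in for layer_tree_impl()->device_viewport_size().height(); this is not the actual picture_layer_impl.cc code.

    #include <algorithm>

    // Lower-bound guard: keep a usable tile height even for a tiny viewport.
    // (Note a floor is expressed with std::max; std::min(1, ...) would cap
    // the height at 1 instead.)
    int TileHeightWithFloor(int viewport_height) {
      return std::max(1, viewport_height / 4);
    }

    // Rounding-up alternative: make sure four tiles always cover the full
    // viewport height even when it is not divisible by 4.
    int TileHeightRoundedUp(int viewport_height) {
      return (viewport_height + 3) / 4;
    }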
Regarding tile-grid size:

Unless we calculate what the scale will end up being, and apply that when choosing the tile-grid-size, there will almost always be a scale difference between tile-grid size and tile size. As soon as there is a scale difference there is no predictable performance benefit from having them be coupled. I totally agree with using a perf metric to tune the independent tile-grid size, but from Manfred's testing on the bug there was no performance difference seen. Tuning this independently of tile-size can continue as we perform and improve our measurements.

Regarding rounding up gpu-raster tile-size:

How about max(256, viewport_height/4). 256 is the smallest dimension we've ever used, and smaller than that seems silly to me. If there's a better number than 256 just let me know and I'll substitute that in.

Regarding testing:

I've tested it manually by pinch-zooming. Since this code is just choosing constants, I think a unit test of this function would just be a change detector. I don't see the risk from changing this code so long as it's manually tested when it is changed. Some backup for my opinion on that: (https://big.corp.google.com/~jmcmaster/testing/2014/09/episode-350-change-det...)
On Tue, Oct 7, 2014 at 4:37 PM, <epenner@google.com> wrote:
> Regarding tile-grid size:
>
> Unless we calculate what the scale will end up being, and apply that when when
> choosing the tile-grid-size, there will almost always be a scale difference
> between tile-grid size and tile size. As soon as there is a scale difference
> there is no predictable performance benefit from having them be coupled.
>
> I totally agree with using a perf metric to tune the independent tile-grid size,
> but from Manfred's testing on the bug there was no performance difference seen.
> Tuning this independently of tile-size can continue as we perform and improve
> our measurements.

I support making it tunable, but I don't think we should change the tunable value arbitrarily (and in a much larger CL).

> Regarding rounding up gpu-raster tile-size:
>
> How about max(256, viewport_height/4). 256 is the smallest dimension we've ever
> used, and smaller than that seems silly to me. If there's a better number than
> 256 just let me know and I'll substitute that in.
>
> Regarding testing:
>
> I've tested it manually by pinch-zooming. Since this code is just choosing
> constants, I think a unit test of this function would just be a change detector.
> I don't see the risk from changing this code so long as it's manually tested
> when it is changed. Some backup for my opinion on that:
> (https://big.corp.google.com/~jmcmaster/testing/2014/09/
> episode-350-change-detector-tests.html)

I was thinking of a test that sets a size in the LayerTreeSettings, and verifies the tile grid ends up with that size, so we know the setting actually does something. Do you think that falls under change-detection?

Regarding manual testing, I agree that helps verify your change works, but it also builds to the collective things everyone must manually test when they change said function in order to not regress your work, also?
> I support making it tunable, but I don't think we should change the tunable
> value arbitrarily (and in a much larger CL).

I think it's worse currently in that it's effectively arbitrary while appearing to be automatically tuned. I pulled it out for another patch though, so we can discuss further before I land that.

> I was thinking of a test that sets a size in the LayerTreeSettings, and
> verifies the tile grid ends up with that size, so we know the setting
> actually does something. Do you think that falls under change-detection?

That one sounds reasonable. I figured the test request was for the bug I was fixing, and that part (what's remaining in this patch) I don't see the point in testing. But I like the other idea, so if you've got a similar idea for the remaining code I'm all ears.

> Regarding manual testing, I agree that helps verify your change works, but
> it also builds to the collective things everyone must manually test when
> they change said function in order to not regress your work, also?

Reasonable. I think I just misunderstood what code you wanted to test. Let me know if you have a similar good idea for the remaining code.
> Regarding rounding up gpu-raster tile-size:
>
> How about max(256, viewport_height/4). 256 is the smallest dimension we've ever
> used, and smaller than that seems silly to me. If there's a better number than
> 256 just let me know and I'll substitute that in.

If the tile height is rounded down, we might create one small extra tile at the bottom of a page. That causes overhead, in particular if the page fits exactly into the viewport. Using a minimum of 256 is also a good idea, so maybe we should combine it with rounding up?
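To make the rounding concern concrete with illustrative numbers: with a 1004-pixel-tall viewport, 1004 / 4 = 251 and four 251-pixel tiles cover a viewport-sized page exactly; with a 1006-pixel viewport, 1006 / 4 rounds down to 251, so 4 x 251 = 1004 pixels are covered and a fifth, 2-pixel-tall tile is needed for the remainder (before even counting per-tile buffer pixels). Rounding up, (1006 + 3) / 4 = 252, keeps it at four tiles.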
On 2014/10/07 22:47:28, ernstm wrote:
> > Regarding rounding up gpu-raster tile-size:
> >
> > How about max(256, viewport_height/4). 256 is the smallest dimension we've ever
> > used, and smaller than that seems silly to me. If there's a better number than
> > 256 just let me know and I'll substitute that in.
>
> If the tile height is rounded down, we might create one small extra tile at the
> bottom of a page. That causes overhead, in particular if the page fits exactly
> into the viewport. Using a minimum of 256 is also a good idea, so maybe we
> should combine it with rounding up?

Do you mean rounded down due to integer divide? And do you mean that ideally we could have exactly four tiles match the viewport when the content is exactly the size of the viewport?

I liked that idea, however it's more complex than just the divide rounding down, since there are also buffer pixels on each tile. Also, the use case isn't nearly as relevant on Android, since the viewport doesn't have the URL bar subtracted, and so there won't be an extra tile to raster anyway (neither buffer pixels nor the divide rounding is relevant compared to the URL bar height).

I have most of the code written to do the exact viewport matching, and also wrote tests after all since that's more complex, but given it doesn't help much on Android I'm not sure we want to add that complexity yet. So, ptal for the added tests and for the code minus exact viewport matching. The tests are indeed mostly change detectors, but will help if we make it more complex for viewport matching.
Oh, one last thing: I made the minimum height 128 rather than 256 to keep the prior behavior in landscape. 256 actually changes the area of each tile quite a bit in landscape, I realized, so that can be adjusted later.

And here's the next patch if we want to match the viewport exactly. Or I can roll it into this patch: https://codereview.chromium.org/626113004
danakj@chromium.org changed reviewers: + vmpstr@chromium.org
> - Reserves default_tile_size/max_untiled_layer_size for

You'll need to remove that from the description.

Removing myself as reviewer on this CL; looks like vmpstr's got the PLI change.
epenner@google.com changed reviewers: - danakj@chromium.org
> > - Reserves default_tile_size/max_untiled_layer_size for
> You'll need to remove that from the description.

Updated that.

Vlad/Manfred, any other thoughts? Manfred, see this patch for what I think you were suggesting (https://codereview.chromium.org/626113004), but I'd rather do that as another patch.
Overall I think this looks OK from the correctness perspective. From the code perspective it's a bit complicated for my taste, but I'm not sure if that's something you should fix or something better saved for a separate patch.

https://codereview.chromium.org/605773004/diff/120001/cc/layers/picture_layer...
File cc/layers/picture_layer_impl.cc (right):

https://codereview.chromium.org/605773004/diff/120001/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:687: height = std::max(height, 128);
nit: Consider making this a constant.

https://codereview.chromium.org/605773004/diff/120001/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:703: !use_gpu && (content_bounds.width() <= default_tile_size.width() ||
Can you make a comment here or something like that to explain why this isn't appropriate for gpu? It took me way too long of a time staring at this to convince myself that "yeah I think that's fine". I think maybe "any_dimension_too_large" should be "both_dimensions_are_small" instead, so the if below is if (a || b) instead of if (a || !b).
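A minimal sketch of the rename being suggested, with illustrative names and a placeholder condition rather than the real picture_layer_impl.cc logic:

    // Illustrative only: a standalone helper showing the two phrasings of the
    // same predicate. 'other_condition' is a hypothetical placeholder for
    // whatever else the real code checks.
    bool ShouldUseOneTile(int content_width, int content_height,
                          int max_untiled_width, int max_untiled_height,
                          bool other_condition) {
      // Negative phrasing forces the caller into if (a || !b):
      //   bool any_dimension_too_large =
      //       content_width > max_untiled_width ||
      //       content_height > max_untiled_height;
      //   return other_condition || !any_dimension_too_large;
      //
      // Positive phrasing is the same predicate by De Morgan's law and reads
      // directly as if (a || b):
      bool both_dimensions_are_small =
          content_width <= max_untiled_width &&
          content_height <= max_untiled_height;
      return other_condition || both_dimensions_are_small;
    }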
Patchset #6 (id:140001) has been deleted
Patchset #6 (id:160001) has been deleted
Patchset #6 (id:180001) has been deleted
Patchset #6 (id:200001) has been deleted
epenner@google.com changed reviewers: + epenner@google.com
Regarding complexity, I'm up for simplifying it in this patch if you have any thoughts on that. I considered making two functions for CPU/GPU, but they would share quite a bit of the same code... Also all ears for a different strategy.

https://codereview.chromium.org/605773004/diff/120001/cc/layers/picture_layer...
File cc/layers/picture_layer_impl.cc (right):

https://codereview.chromium.org/605773004/diff/120001/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:687: height = std::max(height, 128);
On 2014/10/08 23:30:28, vmpstr wrote:
> nit: Consider making this a constant.

Done.

https://codereview.chromium.org/605773004/diff/120001/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:703: !use_gpu && (content_bounds.width() <= default_tile_size.width() ||
On 2014/10/08 23:30:28, vmpstr wrote:
> Can you make a comment here or something like that to explain why this isn't
> appropriate for gpu? It took me way too long of a time staring at this to
> convince myself that "yeah I think that's fine". I think maybe
> "any_dimension_too_large" should be "both_dimensions_are_small" instead, so the
> if below is if (a || b) instead of if (a || !b)

Done. While adding the comment, I realized this can be slightly better by still expanding gpu-rastered tiles vertically if they are 'skinny' horizontally. But we can't use the tile-width to determine that. So in the new patch I use the tile-height, changed both names, and added a comment.
On 2014/10/08 23:08:05, epennerAtGoogle wrote:
> > > - Reserves default_tile_size/max_untiled_layer_size for
> > You'll need to remove that from the description.
>
> Updated that.
>
> Vlad/Manfred, any other thoughts? Manfred, see this patch for what I think you
> were suggesting (https://codereview.chromium.org/626113004), but I'd rather do
> that as another patch.

It's fine to improve the rounding for viewport-sized content in a separate patch. But we should definitely do it. I would expect full-screen viewport-sized content to be common for web apps.
https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
File cc/layers/picture_layer_impl.cc (right):

https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:689: if (use_gpu) {
For the complexity thing, maybe just have the two variables default initialized and have an explicit if/else, something like

  Foo foo;
  if (gpu)
    foo = expr1;
  else
    foo = expr2;

instead of

  Foo foo = expr2;
  if (gpu) {
    /* disregard all variables above */
    foo = expr1;
  }

That, and maybe max_untiled_content_size needs a better name. This might be me, but it's not evident to me what max_untiled_content_size means. Maybe largest_tile_size_possible or something? I don't know, this is a weak point for me so it doesn't really matter. The largest_tile_size_possible name also conflicts with what we set it to in gpu, since 2 * height can be smaller than the default width.

https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:714: long_and_skinny = content_bounds.width() <= default_tile_size.height();
Typo? This should probably compare height with height. This would only make a difference if content bounds width is larger than default tile size width, right? That is, if content bounds width is also smaller than or equal to the viewport, then even without the long-and-skinny optimization one tile would cover the whole content. (I'm just trying to understand the cases when this hits.)
https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
File cc/layers/picture_layer_impl.cc (right):

https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:689: if (use_gpu) {
On 2014/10/09 16:35:43, vmpstr wrote:
> For the complexity thing, maybe just have the two variables default initialized
> and have an explicit if/else, something like
>
>   Foo foo;
>   if (gpu)
>     foo = expr1;
>   else
>     foo = expr2;
>
> instead of
>
>   Foo foo = expr2;
>   if (gpu) {
>     /* disregard all variables above */
>     foo = expr1;
>   }
>
> That, and maybe max_untiled_content_size needs a better name. This might be me,
> but it's not evident to me what max_untiled_content_size means. Maybe
> largest_tile_size_possible or something? I don't know, this is a weak point for
> me so it doesn't really matter. The largest_tile_size_possible name also
> conflicts with what we set it to in gpu, since 2 * height can be smaller than
> the default width.

I dunno, I think once you see the code you might not like this as much.

I like if/else if there are actually some instructions worth saving, but otherwise it just adds three negatives IMO: more braces, more indentation, and more default-or-uninitialized variables that need to be mentally tracked through all branches. I did the same thing below for long_and_skinny, and if I made that a full if/else there would be more code lines etc.

https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:714: long_and_skinny = content_bounds.width() <= default_tile_size.height();
This is what I intended. But perhaps I need to improve this further and make it more clear. This is used to expand a tile beyond its typical size in response to content being skinny in the other direction. If the content has a skinny width, we expand the tile height and vice versa... However, we never want to expand tile-width (it's already ginormous), and the tile-width is not useful to determine if the content-width is skinny (tile-width is ginormous). I'll take another pass at making this better/clearer.
https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
File cc/layers/picture_layer_impl.cc (right):

https://codereview.chromium.org/605773004/diff/150003/cc/layers/picture_layer...
cc/layers/picture_layer_impl.cc:689: if (use_gpu) {
On 2014/10/09 18:36:33, epenner wrote:
> I dunno, I think once you see the code you might not like this as much.
>
> I like if/else if there are actually some instructions worth saving, but
> otherwise it just adds three negatives IMO: more braces, more indentation, and
> more default-or-uninitialized variables that need to be mentally tracked
> through all branches.
>
> I did the same thing below for long_and_skinny, and if I made that a full
> if/else there would be more code lines etc.

Fair enough. That's why I'm not really sure how to make it too much better :) Let me know when the new patch is up, but I'm pretty happy with this as is.
> Fair enough. That's why I'm not really sure how to make it too much better :)
> Let me know when the new patch is up, but I'm pretty happy with this as is.

Okay, I actually did a complete rewrite in response to your other comment about the confusing comparison of height and width. Now the GPU path uses a more rational 'half the viewport' to determine when to expand tiles vertically. With the rewrite it's down to one conditional for CPU/GPU, and then some unconditional clamping etc. The CPU tile sizes should be identical to before, but I think it's clearer. Ptal.
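Roughly, the rule described for the GPU path has the shape of the sketch below. The viewport_height / 4 base, the 128 minimum, and the 'half the viewport' test come from the discussion above, but the doubling factor and the exact clamping are assumptions for illustration, not the code that landed:

    #include <algorithm>

    const int kMinGpuTileHeight = 128;  // minimum mentioned earlier in the thread

    // Illustrative sketch only; not the actual PictureLayerImpl code.
    int GpuTileHeight(int viewport_width, int viewport_height,
                      int content_width) {
      int height = std::max(kMinGpuTileHeight, viewport_height / 4);
      // If the content is narrower than roughly half the viewport, a tile
      // already spans the full content width, so growing it vertically is
      // cheap and reduces the total tile count. The factor of 2 here is an
      // assumed expansion, not taken from the patch.
      if (content_width <= viewport_width / 2)
        height *= 2;
      return height;
    }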
I think that looks pretty good! Thanks. It's lgtm from me, but please ensure that other people are cool with this as well.
On 2014/10/09 20:30:15, vmpstr wrote:
> I think that looks pretty good! Thanks. It's lgtm from me, but please ensure
> that other people are cool with this as well.

Nice. LGTM.
Patchset #9 (id:320001) has been deleted
Patchset #9 (id:340001) has been deleted
Patchset #10 (id:400001) has been deleted
The CQ bit was checked by epenner@chromium.org
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/605773004/420001
Message was sent while issue was closed.
Committed patchset #10 (id:420001)
Message was sent while issue was closed.
Patchset 10 (id:??) landed as https://crrev.com/9ee9c855721329f1cccacedc66ab9690ee9440f8
Cr-Commit-Position: refs/heads/master@{#299027}