Created: 6 years, 5 months ago by mfomitchev
Modified: 6 years, 3 months ago
CC: chromium-reviews, jam, sievers+watch_chromium.org, jbauman+watch_chromium.org, darin-cc_chromium.org, kalyank, piman+watch_chromium.org, danakj+watch_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Project: chromium
Visibility: Public
Description:
Improving GestureNav screenshotting performance - Part 1 - gl_helper
Pixel readback in GL_RGBA format can be a very expensive operation on
some hardware configurations.
Adding support for kAlpha_8_SkColorType to CropScaleReadbackAndCleanTexture().
For this color type, the texture is converted to grayscale on the GPU and then
read back using GL_BGRA_EXT format.
This improves readback performance because we read a quarter of the data
and don't have to do any format conversion during the read.
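A rough sketch of the size arithmetic this implies (the helper below is illustrative, not code from this CL): four grayscale (A8) bytes are packed into each RGBA texel that gets read back, so the intermediate texture is a quarter as wide as the destination, rounded up, at 4 bytes per pixel.

    // Hypothetical sketch: width (in RGBA texels) of the packed grayscale
    // texture for an A8 destination of width |dst_width|.
    int PackedGrayscaleWidth(int dst_width) {
      return (dst_width + 3) / 4;  // e.g. dst_width 10 -> 3 RGBA texels wide
    }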
Original review: https://codereview.chromium.org/384453002/
Part 2: https://codereview.chromium.org/389683002/
BUG=379983, 380779
Committed: https://crrev.com/a73ebd2e88a5a0e310d9741d56129827195afa27
Cr-Commit-Position: refs/heads/master@{#291768}
Patch Set 1 #
Patch Set 2 : Minor change #
Total comments: 13
Patch Set 3 : Addressing review feedback. #
Patch Set 4 : Removing unwanted change. #
Total comments: 4
Patch Set 5 : Rebase and cleanup. #
Patch Set 6 : Unit Test for CropScaleReadbackAndCleanTexture #
Total comments: 40
Patch Set 7 : Addressing review feedback + git cl format #
Patch Set 8 : Addressing review feedback - pt. 2 #
Patch Set 9 : git cl format #
Total comments: 2
Patch Set 10 : Zero error tolerance for no-scale/no-transform scenarios. #
Patch Set 11 : Adding consts for YUV test. #
Messages
Total messages: 40 (0 generated)
I was thinking about declaring A8 as the "supported" color type in gl helper. I can probably take care of all the readback-related methods in GLHelper (by reading the x4 smaller texture in GL_BGRA), but I don't think ScaleTexture() will work for A8, at least not without a bunch of changes in GLHelperScaling, which seems like too much work. Also, GLHelper::IsReadbackConfigSupported currently just calls into GLHelperReadbackSupport::IsReadbackConfigSupported. We can't change the latter, so we'd have to special-case A8 in GLHelper::IsReadbackConfigSupported and the semantics of these two methods would diverge which could be confusing. Perhaps we could split GLHelper::IsReadbackConfigSupported into two - one for native support and one for readback (not scaling) support in GLHelper. It would be useful - e.g. RenderWidgetHostViewAndroid uses it to determine if it can take a screenshot in a given format. However the semantics would be confusing/hard to explain. E.g. CropScaleReadbackAndCleanTexture() can accept A8, whereas ScaleTexture() can't.. What do you think?
https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:427: GLuint dst_texture = 1u; Should we set the id to 0 instead of 1 here? Does it matter?
sievers@chromium.org: Can you please take a look? danakj started reviewing this change, but she is sheriffing the next couple of days. This is wip. I will add unit tests and clean up the change once I get an okay for the overall approach. One of the questions right now is what approach to take on a8 color type "support" in GL helper - please see the first message for details. Thanks!
https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:423: const gfx::Rect& src_subrect, |src_subrect| is ignored below? https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:427: GLuint dst_texture = 1u; On 2014/07/11 20:21:33, mfomitchev wrote: > Should we set the id to 0 instead of 1 here? Does it matter? Yea I'd init to 0, it's what we do in other places. https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:435: src_size.height(), This is the wrong size, shouldn't it be a fourth of it? https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:548: out_color_type == kAlpha_8_SkColorType ? kN32_SkColorType This seems wrong for Alpha_8. If you are using the native byte order (kN32_SkColorType), doesn't that mean |swizzle| should be false? https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:555: TransformTextureToGrayscale(texture, src_size, src_subrect, swizzle); +hubbe to comment if this can be done in a single pass with the scaling above
@sievers: Thanks for the quick review! I'll make the changes. Can you please comment re A8 "support" in GL helper? Dana has commented in the original review that it would be good to modify GLHelper::IsReadbackConfigSupported to return true for A8, however there are some issues with that. Below is the re-post of the comment I made on this. If you think the way I am handling it in this CL is okay - then great! I was thinking about declaring A8 as the "supported" color type in gl helper. I can take care of all the readback-related methods in GLHelper (by reading the x4 smaller texture in GL_BGRA), but I don't think ScaleTexture() will work for A8, at least not without a bunch of changes in GLHelperScaling, which seems like too much work. Also, GLHelper::IsReadbackConfigSupported currently just calls into GLHelperReadbackSupport::IsReadbackConfigSupported. We can't change the latter, so we'd have to special-case A8 in GLHelper::IsReadbackConfigSupported and the semantics of these two methods would diverge which could be confusing. Perhaps we could split GLHelper::IsReadbackConfigSupported into two - one for native support and one for readback (not scaling) support in GLHelper. It would be useful - e.g. RenderWidgetHostViewAndroid uses it to determine if it can take a screenshot in a given format. However the semantics would be confusing/hard to explain. E.g. CropScaleReadbackAndCleanTexture() can accept A8, whereas ScaleTexture() can't.. What do you think?
https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:555: TransformTextureToGrayscale(texture, src_size, src_subrect, swizzle); On 2014/07/15 19:31:49, sievers wrote: > +hubbe to comment if this can be done in a single pass with the scaling above It can, iff quality is SCALER_QUALITY_FAST, search for "optimization:" below for another example of this.
On 2014/07/15 19:55:21, mfomitchev wrote: > @sievers: Thanks for the quick review! I'll make the changes. Can you please > comment re A8 "support" in GL helper? Dana has commented in the original review > that it would be good to modify GLHelper::IsReadbackConfigSupported to return > true for A8, however there are some issues with that. Below is the re-post of > the comment I made on this. If you think the way I am handling it in this CL is > okay - then great! > > I was thinking about declaring A8 as the "supported" color type in gl helper. I > can take care of all the readback-related methods in GLHelper (by > reading the x4 smaller texture in GL_BGRA), but I don't think ScaleTexture() > will work for A8, at least not without a bunch of changes in GLHelperScaling, > which seems like too much work. > > Also, GLHelper::IsReadbackConfigSupported currently just calls into > GLHelperReadbackSupport::IsReadbackConfigSupported. We can't change the latter, > so we'd have to special-case A8 in GLHelper::IsReadbackConfigSupported and the > semantics of these two methods would diverge which could be confusing. > > Perhaps we could split GLHelper::IsReadbackConfigSupported into two - one for > native support and one for readback (not scaling) support in GLHelper. It would > be useful - e.g. RenderWidgetHostViewAndroid uses it to determine if it can take > a screenshot in a given format. However the semantics would be confusing/hard to > explain. E.g. CropScaleReadbackAndCleanTexture() can accept A8, whereas > ScaleTexture() can't.. > > What do you think? Right now 'IsReadbackConfigSupported(fmt)' means whether glReadPixels() is supported, given the format of the texture being read is |fmt|. Since you are using a 32bit however (I assume because GLES does not seem to guarantee luminance support?), it's maybe better to say that it does not (and I assume it doesn't on GLES). But maybe we could change it slightly so that you cannot call ReadbackTextureSync and ReadbackTextureAsync with A8. And you can also call Readback*() in the proper way from the internal call site, i.e. reading back the texture as 32bit and removing the changes from ReadbackAsync() to handle it there.
> Right now 'IsReadbackConfigSupported(fmt)' means whether glReadPixels() is > supported, given the format of the texture being read is |fmt|. Since you are > using a 32bit however (I assume because GLES does not seem to guarantee > luminance support?), it's maybe better to say that it does not (and I assume it > doesn't on GLES). > > But maybe we could change it slightly so that you cannot call > ReadbackTextureSync and ReadbackTextureAsync with A8. > And you can also call Readback*() in the proper way from the internal call site, > i.e. reading back the texture as 32bit and removing the changes from > ReadbackAsync() to handle it there. This is actually the approach in my original CL (https://codereview.chromium.org/384453002). The awkward part about that was that I had to return the effective texture size from TransformTextureToGrayscale (it was called TransformTextureToA8 in the original CL), and the last couple of channels in the last pixel could be invalid (see comment on line 450 in gl_helper.cc in the original CL). Dana suggested we try handling this in ReadbackAsync, which does solve that particular issue. IMHO the fact that we handle A8 in ReadbackAsync could actually be viewed as a good thing (albeit a bit inconsistent)..
On 2014/07/15 21:37:36, mfomitchev wrote: > > Right now 'IsReadbackConfigSupported(fmt)' means whether glReadPixels() is > > supported, given the format of the texture being read is |fmt|. Since you are > > using a 32bit however (I assume because GLES does not seem to guarantee > > luminance support?), it's maybe better to say that it does not (and I assume > it > > doesn't on GLES). > > > > But maybe we could change it slightly so that you cannot call > > ReadbackTextureSync and ReadbackTextureAsync with A8. > > And you can also call Readback*() in the proper way from the internal call > site, > > i.e. reading back the texture as 32bit and removing the changes from > > ReadbackAsync() to handle it there. > > This is actually the approach in my original CL > (https://codereview.chromium.org/384453002). The awkward part about that was > that I had to return the effective texture size from TransformTextureToGrayscale > (it was called TransformTextureToA8 in the original CL), and the last couple of > channels in the last pixel could be invalid (see comment on line 450 in > gl_helper.cc in the original CL). Dana suggested we try handling this in > ReadbackAsync, which does solve that particular issue. IMHO the fact that we > handle A8 in ReadbackAsync could actually be viewed as a good thing (albeit a > bit inconsistent).. But the code that Dana commented on still exists in the same place in this version. You have code in ReadbackTextureAsync() that rounds up the width to a multiple of 4 and overides bpp (from 1) to 4. What else changes if you do that in the caller?
On 2014/07/15 22:02:38, sievers wrote: > On 2014/07/15 21:37:36, mfomitchev wrote: > > > Right now 'IsReadbackConfigSupported(fmt)' means whether glReadPixels() is > > > supported, given the format of the texture being read is |fmt|. Since you > are > > > using a 32bit however (I assume because GLES does not seem to guarantee > > > luminance support?), it's maybe better to say that it does not (and I assume > > it > > > doesn't on GLES). > > > > > > But maybe we could change it slightly so that you cannot call > > > ReadbackTextureSync and ReadbackTextureAsync with A8. > > > And you can also call Readback*() in the proper way from the internal call > > site, > > > i.e. reading back the texture as 32bit and removing the changes from > > > ReadbackAsync() to handle it there. > > > > This is actually the approach in my original CL > > (https://codereview.chromium.org/384453002). The awkward part about that was > > that I had to return the effective texture size from > TransformTextureToGrayscale > > (it was called TransformTextureToA8 in the original CL), and the last couple > of > > channels in the last pixel could be invalid (see comment on line 450 in > > gl_helper.cc in the original CL). Dana suggested we try handling this in > > ReadbackAsync, which does solve that particular issue. IMHO the fact that we > > handle A8 in ReadbackAsync could actually be viewed as a good thing (albeit a > > bit inconsistent).. > > But the code that Dana commented on still exists in the same place in this > version. > You have code in ReadbackTextureAsync() that rounds up the width to a multiple > of 4 and overides bpp (from 1) to 4. > What else changes if you do that in the caller? Well, TransformTextureToGrayscale no longer returns the texture size which is what Dana was commenting on. CropScaleReadbackAndCleanTexture() no longer knows that the texture we are working with is a 4bpp texture with width (dst_size.width() + 3) / 4 - it is operating as if the texture was 1bpp with width dst_size.width(). And readback_size is calculated in ReadbackAsync(), not passed in. I think this along the lines of what Dana was suggesting to try.. Now that I lay I out I am not sure it's better than what I had. I can change it back to the way it was, or perhaps you can suggest something better?
On 2014/07/15 23:02:56, mfomitchev wrote: > On 2014/07/15 22:02:38, sievers wrote: > > On 2014/07/15 21:37:36, mfomitchev wrote: > > > > Right now 'IsReadbackConfigSupported(fmt)' means whether glReadPixels() is > > > > supported, given the format of the texture being read is |fmt|. Since you > > are > > > > using a 32bit however (I assume because GLES does not seem to guarantee > > > > luminance support?), it's maybe better to say that it does not (and I > assume > > > it > > > > doesn't on GLES). > > > > > > > > But maybe we could change it slightly so that you cannot call > > > > ReadbackTextureSync and ReadbackTextureAsync with A8. > > > > And you can also call Readback*() in the proper way from the internal call > > > site, > > > > i.e. reading back the texture as 32bit and removing the changes from > > > > ReadbackAsync() to handle it there. > > > > > > This is actually the approach in my original CL > > > (https://codereview.chromium.org/384453002). The awkward part about that was > > > that I had to return the effective texture size from > > TransformTextureToGrayscale > > > (it was called TransformTextureToA8 in the original CL), and the last couple > > of > > > channels in the last pixel could be invalid (see comment on line 450 in > > > gl_helper.cc in the original CL). Dana suggested we try handling this in > > > ReadbackAsync, which does solve that particular issue. IMHO the fact that we > > > handle A8 in ReadbackAsync could actually be viewed as a good thing (albeit > a > > > bit inconsistent).. > > > > But the code that Dana commented on still exists in the same place in this > > version. > > You have code in ReadbackTextureAsync() that rounds up the width to a multiple > > of 4 and overides bpp (from 1) to 4. > > What else changes if you do that in the caller? > > Well, TransformTextureToGrayscale no longer returns the texture size which is > what Dana was commenting on. CropScaleReadbackAndCleanTexture() no longer knows > that the texture we are working with is a 4bpp texture with width > (dst_size.width() + 3) / 4 - it is operating as if the texture was 1bpp with > width dst_size.width(). And readback_size is calculated in ReadbackAsync(), not > passed in. I think this along the lines of what Dana was suggesting to try.. > > Now that I lay I out I am not sure it's better than what I had. I can change it > back to the way it was, or perhaps you can suggest something better? I think it's fine either way. But I would make sure that nobody can call ReadbackTextureAsync() with A8 as format, because it would not work (the caller would have to bind a 32bit texture that is rounded up instead of a single channel texture).
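A minimal way to enforce that, sketched here as an assumption rather than what the patch actually does, is to reject A8 up front in the public readback entry points:

    // Hypothetical guard in ReadbackTextureAsync()/ReadbackTextureSync():
    // A8 is only handled via the internal packed-RGBA path in
    // CropScaleReadbackAndCleanTexture(), so refuse it here.
    DCHECK_NE(color_type, kAlpha_8_SkColorType);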
https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:423: const gfx::Rect& src_subrect, On 2014/07/15 19:31:49, sievers wrote: > |src_subrect| is ignored below? Done. https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:427: GLuint dst_texture = 1u; On 2014/07/15 19:31:49, sievers wrote: > On 2014/07/11 20:21:33, mfomitchev wrote: > > Should we set the id to 0 instead of 1 here? Does it matter? > > Yea I'd init to 0, it's what we do in other places. Done. https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:435: src_size.height(), On 2014/07/15 19:31:49, sievers wrote: > This is the wrong size, shouldn't it be a fourth of it? Done. https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:548: out_color_type == kAlpha_8_SkColorType ? kN32_SkColorType On 2014/07/15 19:31:49, sievers wrote: > This seems wrong for Alpha_8. If you are using the native byte order > (kN32_SkColorType), doesn't that mean |swizzle| should be false? Done. https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:555: TransformTextureToGrayscale(texture, src_size, src_subrect, swizzle); Done. Also added vertically_flip_texture to EncodeTextureAsGrayscale to take care of the flip when we don't do scaling. https://codereview.chromium.org/388953002/diff/20001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:555: TransformTextureToGrayscale(texture, src_size, src_subrect, swizzle); Also fixed: the second arg should be dst_size, not src_size.
Looks good, I find it hard to wrap my head around all the swizzling involved though :) Can you add a test that checks some odd-sized cases and makes sure we get the swizzling and readback right? Including maybe checking that we don't overrun the output buffer by putting some marker at the end. https://codereview.chromium.org/388953002/diff/60001/content/common/gpu/clien... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/60001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:449: GL_BGRA_EXT, Is it more straightforward to use GL_RGBA here if we are on Android and then never swizzle below in line 461? https://codereview.chromium.org/388953002/diff/60001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:461: (swizzle == kSwizzleBGRA), Here you are comparing a bool to an enum. Did you mean 'swizzle ? kSwizzleBGRA : kSwizzleNone'?
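A minimal sketch of the buffer-overrun check suggested above (sentinel value, padding size, and variable names are illustrative, not the test's actual code):

    // Inside a gl_helper_unittest.cc-style test: pad the readback buffer with
    // a sentinel and verify the padding is untouched after the readback.
    const size_t kPixels = dst_size.width() * dst_size.height();  // A8: 1 bpp
    const size_t kPadding = 16;
    std::vector<uint8_t> buffer(kPixels + kPadding, 0xCD);  // 0xCD = sentinel
    // ... perform the readback into buffer.data(), writing kPixels bytes ...
    for (size_t i = kPixels; i < buffer.size(); ++i)
      EXPECT_EQ(0xCD, buffer[i]);  // any change means the readback overran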
https://codereview.chromium.org/388953002/diff/60001/content/common/gpu/clien... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/60001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:449: GL_BGRA_EXT, I am getting the same result regardless of whether I use BGRA or RGBA. I am not entirely sure why.. I replaced this with GL_RGBA since this is what we are using elsewhere. https://codereview.chromium.org/388953002/diff/60001/content/common/gpu/clien... content/common/gpu/client/gl_helper.cc:461: (swizzle == kSwizzleBGRA), Just using the bool swizzle that's passed in now.
@hubbe: You've made a point in crbug.com/379983 that readbacks should happen using GL_BGRA_EXT to ensure good performance. Does it matter what is the format of the texture we are reading from? I.e. if we use GL_RGBA vs. GL_BGRA_EXT does it have any effect on the readback? I noticed that when I change the texture format used for gl_->TexImage2D in CopyTextureToImpl::EncodeTextureAsGrayscale() between GL_RGBA and GL_BGRA_EXT, it doesn't have any effect on the final result, but perhaps it has an effect on the performance? Thanks!
@sievers: The unit test is ready - can you please take another look? Thanks!
On 2014/08/13 18:05:26, mfomitchev_OOO_Aug16-24 wrote: > @hubbe: You've made a point in crbug.com/379983 that readbacks should happen > using GL_BGRA_EXT to ensure good performance. Does it matter what is the format > of the texture we are reading from? I.e. if we use GL_RGBA vs. GL_BGRA_EXT does > it have any effect on the readback? I noticed that when I change the texture > format used for gl_->TexImage2D in CopyTextureToImpl::EncodeTextureAsGrayscale() > between GL_RGBA and GL_BGRA_EXT, it doesn't have any effect on the final result, > but perhaps it has an effect on the performance? Thanks! Unfortunately I don't know. I suspect that it doesn't matter (because all RGBA formats will be stored as GL_BGRA_EXT internally), and that GL_RGBA is the better choice, but you'll have to find someone with more specific knowledge of the intel/mesa drivers to be sure.
https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:521: readback_color_type = kN32_SkColorType; I don't think it makes sense to check for readback support for the platform-specific preferred native format here (kN32_SkColorType = RGBA on Android, BGRA otherwise I believe), since we are not using that format. You hardcode the format for the alpha texture to RGBA, so we should check that here. Esp. since the additional format we query (and might prefer) depends on the format of the bound texture when we query it. And the way this check works is that the texture is created with the format matching the SkColorType we pass into GetReadbackConfig() (so if you pass BGRA here it tells you whether you can read back a native BGRA framebuffer which is not what we want to know)... https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:570: // read back as RGBA, so need to swizzle if the readback format is BGRA. ...and this check is still correct then, I think. If format == BGRA here then GetReadbackConfig() believes it's better to read back a RGBA framebuffer as BGRA (according to the comment there). For that to yield the correct results, we have to swizzle the data first to neutralize the swizzle that will then happen during readback. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:572: texture = EncodeTextureAsGrayscale( Do we leak |texture| here if we scaled above, as in the scaled RGBA version? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper_unittest.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:429: LOG(ERROR) << "-------original--------"; nit: indent https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:468: const float kRGBtoGrayscaleColorWeights[3] = {0.213f, 0.715f, 0.072f}; Should these constants be shared? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:486: static_cast<int>(value * 255.0f + 0.5f)); hmm why +0.5? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:718: size_t quality) { nit: maybe less error-prone to just pass the ScalerQuality in here. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:759: FlipSKBitmap(&output_pixels); Why do we end up with an upside down image? That seems odd. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:761: // Now transofrm the bitmap using the reference implementation. nit, typo: 'transform' https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:777: case kBGRA_8888_SkColorType: nit: indent here and below https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... 
content/common/gpu/client/gl_helper_unittest.cc:778: // Swizzle pixels of the scaled bitmap if the output needs to be BGRA or could Compare() just learn to handle the format of the bitmap? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:802: 2, Just wondering, why 2? Maybe hubbe knows. I did notice btw that the scaler tests which we don't run on mobile on the waterfall actually fail with larger errors if you do try to run them. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:1829: SkColorType color_types[] = nit: const int kSizes[] = ... const SkColorType kColorTypes[] =... https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:1832: for (size_t q = 0; q < arraysize(kQualities); q += 2) { why q+=2 and not q++? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:1833: for (int color_type = 0; color_type < 3; color_type++) { color_type < arraysize(kColorTypes)
https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:521: readback_color_type = kN32_SkColorType; GetReadbackConfig() returns the readback format. hubbe@ was making a point in crbug.com/379983 that we should be doing the readback using BGRA on CrOS. So this is what I wanted to ensure. My naive assumption was that if I pass the native format to GetReadbackConfig(), it would also return the readback format which is more efficient. I don't really care which format it tells me to use, I just want the processing + readback to be as fast as possible. Although I do need to swizzle if the readback is using BGRA since EncodeAsGrayscale writes the pixels in RGBA order. > You hardcode the format for the alpha texture to RGBA, so we should check that here. Ok, this is the part I am pretty confused about and asked for clarification from hubbe@ above. It sounds like you are saying that the format we are using with TexImage2D() does matter and we should match it to the format we pass to GetReadbackConfig()? I tried changing the format of the alpha texture to BGRA, and it didn't seem to make any difference at all. Do you mind explaining this? I ended up making it RGBA only because we use RGBA or RGB everywhere else in gl helper. We don't seem to use BGRA anywhere here with TexImage2D() (even though we do use BGRA for readback). Why is that? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:572: texture = EncodeTextureAsGrayscale( On 2014/08/20 20:29:51, sievers wrote: > Do we leak |texture| here if we scaled above, as in the scaled RGBA version? Done. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper_unittest.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:429: LOG(ERROR) << "-------original--------"; On 2014/08/20 20:29:52, sievers wrote: > nit: indent Done. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:468: const float kRGBtoGrayscaleColorWeights[3] = {0.213f, 0.715f, 0.072f}; It wouldn't be terribly straightforward. This array is of length 3, and the one in gl_helper is of length 4. Also, the one in GL Helper is defined within CopyTextureToImpl, so it's not currently visible outside of gl_helper.cc. If you still think the values should be shared, what would be the best way to achieve that? Move the const to GLHelper? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:486: static_cast<int>(value * 255.0f + 0.5f)); So that it rounds to the closest int rather than round down. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:718: size_t quality) { Yeah, this is modeled after TestScale() below, but I thought about changing this too. However we also use |quality| to fetch the string 'name' from the kQualityNames array.. Perhaps just changing the var name to quality_index would be good enough? If not, I guess I could write a helper function converting ScalerQuality to a string. 
https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:759: FlipSKBitmap(&output_pixels); The old version of CropScaleReadbackAndCleanTexture() also always flipped the texture (https://code.google.com/p/chromium/codesearch#chromium/src/content/common/gpu...). I have no idea why, but if I don't do it, the image is upside down. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:761: // Now transofrm the bitmap using the reference implementation. On 2014/08/20 20:29:52, sievers wrote: > nit, typo: 'transform' Done. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:777: case kBGRA_8888_SkColorType: On 2014/08/20 20:29:52, sievers wrote: > nit: indent here and below Done. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:778: // Swizzle pixels of the scaled bitmap if the output needs to be BGRA Problem is, scale tests here use Compare() to do a pixel-by-pixel comparison of RGBA to BGRA bitmaps, and they do not swizzle. If I make Compare() learn to handle the format, I'd have to change those tests as well or they will fail.. I can still do it if you think it's worth the effort. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:802: 2, It does fail on my machine for 16x16 -> 3x6 best-quality scale if I set it to 1. My guess would be it comes down to CPU vs GPU floating point precision. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:1829: SkColorType color_types[] = On 2014/08/20 20:29:52, sievers wrote: > nit: > const int kSizes[] = ... > const SkColorType kColorTypes[] =... Done. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:1832: for (size_t q = 0; q < arraysize(kQualities); q += 2) { FAST uses a different code path in CropScaleReadbackAndCleanTexture() than GOOD and BEST (it skips the scale step). So I wanted to test FAST and one of GOOD/BEST. I figured I don't have to test both GOOD and BEST because scale-specific testing is already covered by ScaleTest. And the debug version of this test already takes 46s, so I didn't want to make it unnecessarily slower. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:1833: for (int color_type = 0; color_type < 3; color_type++) { On 2014/08/20 20:29:52, sievers wrote: > color_type < arraysize(kColorTypes) Done.
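For illustration, a standalone version of the conversion and rounding being discussed (the weights are the ones quoted from the test; the function is made up, not the reference implementation itself). The +0.5 makes the truncating cast round to the nearest integer instead of toward zero.

    // Illustrative CPU-side reference grayscale conversion.
    uint8_t ReferenceGrayscale(uint8_t r, uint8_t g, uint8_t b) {
      const float kWeights[3] = {0.213f, 0.715f, 0.072f};
      float value = kWeights[0] * r + kWeights[1] * g + kWeights[2] * b;
      // +0.5f so the truncating cast rounds to the closest int, not down.
      int rounded = static_cast<int>(value + 0.5f);
      return static_cast<uint8_t>(std::min(rounded, 255));
    }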
https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:521: readback_color_type = kN32_SkColorType; On 2014/08/22 18:01:33, mfomitchev_OOO_Aug16-24 wrote: > GetReadbackConfig() returns the readback format. hubbe@ was making a point in > crbug.com/379983 that we should be doing the readback using BGRA on CrOS. So > this is what I wanted to ensure. My naive assumption was that if I pass the > native format to GetReadbackConfig(), it would also return the readback format > which is more efficient. I don't really care which format it tells me to use, I > just want the processing + readback to be as fast as possible. Although I do > need to swizzle if the readback is using BGRA since EncodeAsGrayscale writes the > pixels in RGBA order. > > > You hardcode the format for the alpha texture to RGBA, so we should check that > here. > > Ok, this is the part I am pretty confused about and asked for clarification from > hubbe@ above. It sounds like you are saying that the format we are using with > TexImage2D() does matter and we should match it to the format we pass to > GetReadbackConfig()? I tried changing the format of the alpha texture to BGRA, > and it didn't seem to make any difference at all. Do you mind explaining this? I > ended up making it RGBA only because we use RGBA or RGB everywhere else in gl > helper. We don't seem to use BGRA anywhere here with TexImage2D() (even though > we do use BGRA for readback). Why is that? I kow this is really confusing :/ GetReadbackFormat() returns what it thinks is the best readback format for getting back readback_color_type assuming that the internal format of the texture also matches readback_color_type. For the 'additional format' it queries see how it creates a dummy texture that matches the skia color format you pass in. Note that the GL query we do there is dependent on the format of the bound framebuffer. I'm still a bit confused about this, also see comments in https://codereview.chromium.org/412453002/. I'm not sure why reading back a texture, whose internal format is RGBA, as BGRA would ever be faster. However, I think you need to - either set |readback_color_type| to RGBA above since that is the format of your texture being read back for the query to do the right thing - or allocate the texture as BGRA or RGBA respectively so that it matches the kN32_SkColorType layout https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper_unittest.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:468: const float kRGBtoGrayscaleColorWeights[3] = {0.213f, 0.715f, 0.072f}; On 2014/08/22 18:01:33, mfomitchev_OOO_Aug16-24 wrote: > It wouldn't be terribly straightforward. This array is of length 3, and the one > in gl_helper is of length 4. Also, the one in GL Helper is defined within > CopyTextureToImpl, so it's not currently visible outside of gl_helper.cc. If you > still think the values should be shared, what would be the best way to achieve > that? Move the const to GLHelper? If you expose them in gl_helper.h somehow, can they be shared here and inside gl_helper.cc? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... 
content/common/gpu/client/gl_helper_unittest.cc:718: size_t quality) { On 2014/08/22 18:01:33, mfomitchev_OOO_Aug16-24 wrote: > Yeah, this is modeled after TestScale() below, but I thought about changing this > too. However we also use |quality| to fetch the string 'name' from the > kQualityNames array.. Perhaps just changing the var name to quality_index would > be good enough? If not, I guess I could write a helper function converting > ScalerQuality to a string. I see, maybe just keep it as is then if it matches the function below. https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:759: FlipSKBitmap(&output_pixels); On 2014/08/22 18:01:34, mfomitchev_OOO_Aug16-24 wrote: > The old version of CropScaleReadbackAndCleanTexture() also always flipped the > texture > (https://code.google.com/p/chromium/codesearch#chromium/src/content/common/gpu...). > I have no idea why, but if I don't do it, the image is upside down. This is intriguing, as I believe we use this path for readbacks on Android and the images come out with the expected orientation, so I'll try to follow the code and see why this is hardcoded to flip in CropScaleReadbackAndCleanTexture :) https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:778: // Swizzle pixels of the scaled bitmap if the output needs to be BGRA On 2014/08/22 18:01:33, mfomitchev_OOO_Aug16-24 wrote: > Problem is, scale tests here use Compare() to do a pixel-by-pixel comparison of > RGBA to BGRA bitmaps, and they do not swizzle. If I make Compare() learn to > handle the format, I'd have to change those tests as well or they will fail.. I > can still do it if you think it's worth the effort. hmm maybe put a TODO then? https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper_unittest.cc:1832: for (size_t q = 0; q < arraysize(kQualities); q += 2) { On 2014/08/22 18:01:33, mfomitchev_OOO_Aug16-24 wrote: > FAST uses a different code path in CropScaleReadbackAndCleanTexture() than GOOD > and BEST (it skips the scale step). So I wanted to test FAST and one of > GOOD/BEST. I figured I don't have to test both GOOD and BEST because > scale-specific testing is already covered by ScaleTest. And the debug version of > this test already takes 46s, so I didn't want to make it unnecessarily slower. Ok, maybe it's more straightforward to define your own array of qualities to test in here then.
https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:521: readback_color_type = kN32_SkColorType; The reason reading back as BGRA is faster than RGBA is that some intel/mesa drivers always stores the textures as BGRA internally, so weather you ask for RGBA or BGRA makes no difference. Since it makes no difference, we should use RGBA since that is what GLES is guaranteed to support. (Unless I'm wrong, but this is what I've understood from other people.) On 2014/08/22 18:44:13, sievers wrote: > On 2014/08/22 18:01:33, mfomitchev_OOO_Aug16-24 wrote: > > GetReadbackConfig() returns the readback format. hubbe@ was making a point in > > crbug.com/379983 that we should be doing the readback using BGRA on CrOS. So > > this is what I wanted to ensure. My naive assumption was that if I pass the > > native format to GetReadbackConfig(), it would also return the readback format > > which is more efficient. I don't really care which format it tells me to use, > I > > just want the processing + readback to be as fast as possible. Although I do > > need to swizzle if the readback is using BGRA since EncodeAsGrayscale writes > the > > pixels in RGBA order. > > > > > You hardcode the format for the alpha texture to RGBA, so we should check > that > > here. > > > > Ok, this is the part I am pretty confused about and asked for clarification > from > > hubbe@ above. It sounds like you are saying that the format we are using with > > TexImage2D() does matter and we should match it to the format we pass to > > GetReadbackConfig()? I tried changing the format of the alpha texture to BGRA, > > and it didn't seem to make any difference at all. Do you mind explaining this? > I > > ended up making it RGBA only because we use RGBA or RGB everywhere else in gl > > helper. We don't seem to use BGRA anywhere here with TexImage2D() (even though > > we do use BGRA for readback). Why is that? > > I kow this is really confusing :/ > > GetReadbackFormat() returns what it thinks is the best readback format for > getting back readback_color_type assuming that the internal format of the > texture also matches readback_color_type. > > For the 'additional format' it queries see how it creates a dummy texture that > matches the skia color format you pass in. Note that the GL query we do there is > dependent on the format of the bound framebuffer. > > I'm still a bit confused about this, also see comments in > https://codereview.chromium.org/412453002/. > > I'm not sure why reading back a texture, whose internal format is RGBA, as BGRA > would ever be faster. > > However, I think you need to > - either set |readback_color_type| to RGBA above since that is the format of > your texture being read back for the query to do the right thing > - or allocate the texture as BGRA or RGBA respectively so that it matches the > kN32_SkColorType layout
On Fri, Aug 22, 2014 at 12:19 PM, <hubbe@chromium.org> wrote:
> content/common/gpu/client/gl_helper.cc:521: readback_color_type = kN32_SkColorType;
> The reason reading back as BGRA is faster than RGBA is that some intel/mesa
> drivers always stores the textures as BGRA internally,

right.

> so weather you ask for RGBA or BGRA makes no difference.

Specifying RGBA or BGRA as the texture format makes no difference.
Specifying RGBA or BGRA as the readback format makes a lot of difference.
There are, actually, significant perf differences when reading RGBA vs BGRA in a PBO on some intel/mesa drivers. Essentially RGBA goes through the slow path which involves the CPU and GPU->CPU synchronization. Using BGRA is critical for perf.

Antoine
https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... File content/common/gpu/client/gl_helper.cc (right): https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie... content/common/gpu/client/gl_helper.cc:521: readback_color_type = kN32_SkColorType; On 2014/08/22 19:19:20, hubbe wrote: > The reason reading back as BGRA is faster than RGBA is that some intel/mesa > drivers always stores the textures as BGRA internally, so weather you ask for > RGBA or BGRA makes no difference. Since it makes no difference, we should use > RGBA since that is what GLES is guaranteed to support. > > (Unless I'm wrong, but this is what I've understood from other people.) > > On 2014/08/22 18:44:13, sievers wrote: > > On 2014/08/22 18:01:33, mfomitchev_OOO_Aug16-24 wrote: > > > GetReadbackConfig() returns the readback format. hubbe@ was making a point > in > > > crbug.com/379983 that we should be doing the readback using BGRA on CrOS. So > > > this is what I wanted to ensure. My naive assumption was that if I pass the > > > native format to GetReadbackConfig(), it would also return the readback > format > > > which is more efficient. I don't really care which format it tells me to > use, > > I > > > just want the processing + readback to be as fast as possible. Although I do > > > need to swizzle if the readback is using BGRA since EncodeAsGrayscale writes > > the > > > pixels in RGBA order. > > > > > > > You hardcode the format for the alpha texture to RGBA, so we should check > > that > > > here. > > > > > > Ok, this is the part I am pretty confused about and asked for clarification > > from > > > hubbe@ above. It sounds like you are saying that the format we are using > with > > > TexImage2D() does matter and we should match it to the format we pass to > > > GetReadbackConfig()? I tried changing the format of the alpha texture to > BGRA, > > > and it didn't seem to make any difference at all. Do you mind explaining > this? > > I > > > ended up making it RGBA only because we use RGBA or RGB everywhere else in > gl > > > helper. We don't seem to use BGRA anywhere here with TexImage2D() (even > though > > > we do use BGRA for readback). Why is that? > > > > I kow this is really confusing :/ > > > > GetReadbackFormat() returns what it thinks is the best readback format for > > getting back readback_color_type assuming that the internal format of the > > texture also matches readback_color_type. > > > > For the 'additional format' it queries see how it creates a dummy texture that > > matches the skia color format you pass in. Note that the GL query we do there > is > > dependent on the format of the bound framebuffer. > > > > I'm still a bit confused about this, also see comments in > > https://codereview.chromium.org/412453002/. > > > > I'm not sure why reading back a texture, whose internal format is RGBA, as > BGRA > > would ever be faster. > > > > However, I think you need to > > - either set |readback_color_type| to RGBA above since that is the format of > > your texture being read back for the query to do the right thing > > - or allocate the texture as BGRA or RGBA respectively so that it matches the > > kN32_SkColorType layout > Thanks hubbe+piman, that makes sense. In that case we should still use kRGBA_8888_SkColorType in line 521 to get the correct answer for which readback format to use for the texture that we created with RGBA. 
The driver overriding the format internally is handled by this code in GetReadbackConfig():

    case kRGBA_8888_SkColorType:
      *format = GL_RGBA;
      if (can_swizzle) {
        // If GL_BGRA_EXT is advertised as the readback format through
        // GL_IMPLEMENTATION_COLOR_READ_FORMAT then assume it is preferred by
        // the implementation for performance.
        GetAdditionalFormat(*format, *type, &new_format, &new_type);
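For context, the "additional format" query mentioned above boils down to asking GL which format/type it prefers for glReadPixels on the currently bound framebuffer. A minimal sketch, assuming a GLES2Interface* gl_ as in gl_helper.cc (error handling omitted; this is not the actual GetAdditionalFormat() code):

    // Ask the driver for its preferred readback format/type. If it advertises
    // GL_BGRA_EXT, reading back as BGRA avoids the slow conversion path on
    // some Intel/Mesa drivers.
    GLint preferred_format = 0;
    GLint preferred_type = 0;
    gl_->GetIntegerv(GL_IMPLEMENTATION_COLOR_READ_FORMAT, &preferred_format);
    gl_->GetIntegerv(GL_IMPLEMENTATION_COLOR_READ_TYPE, &preferred_type);
    bool prefer_bgra = (preferred_format == GL_BGRA_EXT &&
                        preferred_type == GL_UNSIGNED_BYTE);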
On 2014/08/22 20:54:48, sievers wrote:
> Thanks hubbe+piman, that makes sense.
> In that case we should still use kRGBA_8888_SkColorType in line 521 to get the
> correct answer for which readback format to use for the texture that we created
> with RGBA. The driver overriding the format internally is handled by this code
> in GetReadbackConfig().

Otherwise we might end up swizzling during readback on certain configs that store RGBA as RGBA, but still support BGRA also.
On Fri, Aug 22, 2014 at 1:56 PM, <sievers@chromium.org> wrote:
> In that case we should still use kRGBA_8888_SkColorType in line 521 to get the
> correct answer for which readback format to use for the texture that we created
> with RGBA.
> Otherwise we might end up swizzling during readback on certain configs that
> store RGBA as RGBA, but still support BGRA also.

Right, the skia ordering should have no impact on this code.
Ok, this makes sense for the case when out_color_type is A8. Now consider the case when out_color_type is BGRA. Note that the format we use to create the texture in ScaleTexture() is either RGBA or RGB (never BGRA). So we have a mismatch between the format we pass to GetReadbackConfig() and the format we use to create the texture we are going to be reading from. If I understood @sievers correctly, this might cause a problem (an extra swizzle?) on some drivers. If so, should we pass the format to ScaleTexture() and use it when we call TexImage2D? Can we then stop passing down color_type?
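Roughly what I have in mind for the TexImage2D part (sketch only; |texture_format| is a
hypothetical new parameter, and this is not the actual ScaleTexture() code):

    // Hypothetical ScaleTexture() change: allocate the intermediate texture in
    // the format the caller will also pass to GetReadbackConfig(), instead of
    // hard-coding GL_RGBA/GL_RGB here.
    GLuint dst_texture = 0u;
    gl_->GenTextures(1, &dst_texture);
    {
      ScopedTextureBinder<GL_TEXTURE_2D> texture_binder(gl_, dst_texture);
      gl_->TexImage2D(GL_TEXTURE_2D,
                      0,
                      texture_format,  // e.g. GL_RGBA or GL_BGRA_EXT
                      dst_size.width(),
                      dst_size.height(),
                      0,
                      texture_format,
                      GL_UNSIGNED_BYTE,
                      NULL);
    }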
On 2014/08/24 21:39:31, mfomitchev_OOO_Aug16-24 wrote: > Ok, this makes sense for the case when out_color_type is A8. Now consider the > case when out_color_type is BGRA. Note that the format we use to create the > texture in ScaleTexture() is either RGBA or RGB (never BGRA). So we have a > mismatch between the format we pass to GetReadbackConfig() and the format we use > to create the texture we are going to be reading from. If I understood @sievers > correctly, this might cause a problem (an extra swizzle?) on some drivers. If > so, should we pass the format to ScaleTexture() and use it when we call > TexImage2D? Can we then stop passing down color_type? I think you are right. The existing code also seems mismatched. It also means that the check and possible early-out can be wrong where we think the readback format is not supported.
On 2014/08/25 17:48:22, sievers wrote:
> I think you are right. The existing code also seems mismatched. It also means
> that the check and possible early-out can do the wrong thing when we conclude
> the readback format is not supported.

Are you saying that BGRA may be reported as unsupported by GetReadbackConfig()
even though we can always swizzle and then read back as RGBA?
On 2014/08/25 17:58:22, mfomitchev_OOO_Aug16-24 wrote:
> Are you saying that BGRA may be reported as unsupported by
> GetReadbackConfig() even though we can always swizzle and then read back as
> RGBA?

Yes, but the code is even more confused: If you ask
CropScaleAndReadbackTexture() to give you back BGRA, it will still create an
RGBA texture (which is fine). It then asks GetReadbackConfig() whether it
supports BGRA (and the confusion begins in what that means):

a) If GL_EXT_read_format_bgra is supported, it will return
   |GLHelperReadbackSupport::SUPPORTED| and set format to |BGRA|. (Because of
   the early-out in SupportsFormat() for BGRA.)
b) If that extension is not supported, it will do the additional format check,
   where it creates a *BGRA* texture and queries
   GL_IMPLEMENTATION_COLOR_READ_FORMAT.
c) If a) and b) were not successful but |can_swizzle| was passed in as |true|,
   it will return |GLHelperReadbackSupport::SWIZZLE| and set the format to
   |RGBA|.
d) NOT_SUPPORTED otherwise.

I think a) is wrong because the extension only adds the format enumerant and
says that with the extension a query for COLOR_READ_FORMAT is now allowed to
return BGRA_EXT. I think b) would also strictly be doing the wrong thing in how
it's used for CropScaleAndReadbackTexture(), since it creates a BGRA texture,
not an RGBA one, when it does the check. c) happens to do the right thing since
we would then end up swizzling during scaling. I'd file a separate bug to sort
this out :)
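In rough pseudo-C++, the behavior described above for a BGRA request boils down
to something like this (a condensed sketch for discussion, not the actual
GetReadbackConfig()/SupportsFormat() code; the helper names and the
FormatSupport type name are made up, only the GLHelperReadbackSupport constants
come from the code):

    // Condensed sketch of the current behavior for a kBGRA_8888 readback
    // request. HaveReadFormatBGRAExtension() and
    // ImplementationColorReadFormatIsBGRA() stand in for the real checks.
    GLHelperReadbackSupport::FormatSupport GetBGRAConfig(bool can_swizzle,
                                                         GLenum* format) {
      if (HaveReadFormatBGRAExtension()) {
        // a) Early-out when GL_EXT_read_format_bgra is advertised.
        *format = GL_BGRA_EXT;
        return GLHelperReadbackSupport::SUPPORTED;
      }
      if (ImplementationColorReadFormatIsBGRA()) {
        // b) Probe GL_IMPLEMENTATION_COLOR_READ_FORMAT against a BGRA-allocated
        //    test texture.
        *format = GL_BGRA_EXT;
        return GLHelperReadbackSupport::SUPPORTED;
      }
      if (can_swizzle) {
        // c) Read back as RGBA; the swizzle then happens during scaling.
        *format = GL_RGBA;
        return GLHelperReadbackSupport::SWIZZLE;
      }
      // d) Otherwise the format is reported as unsupported.
      return GLHelperReadbackSupport::NOT_SUPPORTED;
    }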
Ok, sounds good re separate bug. Just a couple points:

> If you ask CropScaleAndReadbackTexture() to give you back BGRA, it will still
> create an RGBA texture (which is fine).

I don't see CropScaleAndReadbackTexture() creating a texture until after
calling GetReadbackConfig()...

> b) If that extension is not supported, it will do the additional format
> check, where it creates a *BGRA* texture and queries
> GL_IMPLEMENTATION_COLOR_READ_FORMAT.

As far as I can see, there is no texture created in GetReadbackConfig() when we
query for BGRA - it simply checks the format support table, which is assumed to
be pre-populated with the relevant info.

Not sure if these things matter, but wanted to point them out in case they do.
https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie...
File content/common/gpu/client/gl_helper_unittest.cc (right):

https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie...
content/common/gpu/client/gl_helper_unittest.cc:468: const float
kRGBtoGrayscaleColorWeights[3] = {0.213f, 0.715f, 0.072f};
Yes, I could construct the 3-element array in gl_helper_testing using the first
three elements of the 4-element array in gl_helper. (Rough sketch below.) I've
noticed that the other color weights (kRGBtoYColorWeights, kRGBtoUColorWeights,
kRGBtoVColorWeights) are not shared between gl_helper and gl_helper_unittest.
In fact, they are not even declared as consts in gl_helper_unittest - they are
just used as plain numbers in TestYUVReadback(). If we decide to expose
kRGBtoGrayscaleColorWeights in gl_helper, we should probably do so for the
other three weight arrays as well. Otherwise I can at least declare them as
consts in gl_helper_unittest.

https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie...
content/common/gpu/client/gl_helper_unittest.cc:778: // Swizzle pixels of the
scaled bitmap if the output needs to be BGRA
Ok, I've just updated TestScale to consistently use RGBA.

https://codereview.chromium.org/388953002/diff/100001/content/common/gpu/clie...
content/common/gpu/client/gl_helper_unittest.cc:1832: for (size_t q = 0; q <
arraysize(kQualities); q += 2) {
The index is also used to fetch the name from kQualityNames, so I'd have to
re-declare kQualityNames as well, which seems like overkill...
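Re the kRGBtoGrayscaleColorWeights comment above, the shared-constant option
would look roughly like this (sketch only; the exported name and the alpha
weight value are assumptions, not the actual patch):

    // Hypothetical: if gl_helper exposed its 4-element grayscale weights
    // (made-up name; the 0.0f alpha weight is assumed here):
    const GLfloat kGLHelperGrayscaleWeights[4] = {0.213f, 0.715f, 0.072f, 0.0f};

    // gl_helper_unittest.cc could then derive its 3-element copy from it:
    const float kRGBtoGrayscaleColorWeights[3] = {
        kGLHelperGrayscaleWeights[0],
        kGLHelperGrayscaleWeights[1],
        kGLHelperGrayscaleWeights[2]};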
On 2014/08/25 19:16:14, mfomitchev_OOO_Aug16-24 wrote:
> Ok, sounds good re separate bug. Just a couple points:
>
> > If you ask CropScaleAndReadbackTexture() to give you back BGRA, it will
> > still create an RGBA texture (which is fine).
>
> I don't see CropScaleAndReadbackTexture() creating a texture until after
> calling GetReadbackConfig()...

Yes, I mean the format of the texture we read back from is always RGBA.

> > b) If that extension is not supported, it will do the additional format
> > check, where it creates a *BGRA* texture and queries
> > GL_IMPLEMENTATION_COLOR_READ_FORMAT.
>
> As far as I can see, there is no texture created in GetReadbackConfig() when
> we query for BGRA - it simply checks the format support table, which is
> assumed to be pre-populated with the relevant info.

But when the table gets initialized it might do this GL test:
InitializeReadbackSupport() calls CheckForReadbackSupport() calls
SupportsFormat(), which calls GetAdditionalFormat().

> Not sure if these things matter, but wanted to point them out in case they do.
lgtm

https://codereview.chromium.org/388953002/diff/160001/content/common/gpu/clie...
File content/common/gpu/client/gl_helper_unittest.cc (right):

https://codereview.chromium.org/388953002/diff/160001/content/common/gpu/clie...
content/common/gpu/client/gl_helper_unittest.cc:770: 2,
If there was no scaling, should the tolerance be 0?
https://codereview.chromium.org/388953002/diff/160001/content/common/gpu/clie...
File content/common/gpu/client/gl_helper_unittest.cc (right):

https://codereview.chromium.org/388953002/diff/160001/content/common/gpu/clie...
content/common/gpu/client/gl_helper_unittest.cc:770: 2,
On 2014/08/25 19:34:52, sievers wrote:
> If there was no scaling, should the tolerance be 0?

Done.
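For reference, the tolerance selection in patch set 10 amounts to roughly the
following (sketch only; the variable names are made up, and "no transform" is
assumed here to mean the flip/swizzle options are off):

    // Zero error tolerance when the test does no scaling and no transform,
    // since the readback should then be bit-exact; keep a small tolerance for
    // the scaling/swizzling paths, which may round.
    int error_tolerance = (scale_factor == 1 && !flip && !swizzle) ? 0 : 2;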
The CQ bit was checked by mfomitchev@chromium.org
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/mfomitchev@chromium.org/388953002/200001
FYI, CQ is re-trying this CL (attempt #1). The failing builders are: win_chromium_rel_swarming on tryserver.chromium.win (http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_rel_...)
Message was sent while issue was closed.
Committed patchset #11 (200001) as e6f08f3ce59f93e6c0d479d7d36ec25f2c52a3d8
Message was sent while issue was closed.
Patchset 11 (id:??) landed as
https://crrev.com/a73ebd2e88a5a0e310d9741d56129827195afa27
Cr-Commit-Position: refs/heads/master@{#291768}