Created: 6 years ago by christiank
Modified: 5 years, 11 months ago
Reviewers: reveman, jbauman
CC: chromium-reviews, jam, sievers+watch_chromium.org, jbauman+watch_chromium.org, darin-cc_chromium.org, kalyank, piman+watch_chromium.org, cc-bugs_chromium.org, danakj+watch_chromium.org, auygun, peterp
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description

Update GPU memory buffers to use StrideInBytes internally.
Tile texture compression will add support for GPU memory buffer formats
that use less than a byte per pixel.
This CL prepares GPU memory buffers for this by introducing a new function,
StrideInBytes, which replaces bytes-per-pixel arithmetic when computing the
number of bytes needed to store a given number of pixels.
BUG=434699
Committed: https://crrev.com/53e678c7962e12efe4bca3b58c9efbf755b86798
Cr-Commit-Position: refs/heads/master@{#311862}
Committed: https://crrev.com/a78ab1942c818ade187c060957f7228a1e694ec9
Cr-Commit-Position: refs/heads/master@{#312595}
Patch Set 1 #
Total comments: 7
Patch Set 2 : Use StrideInBytes #
Total comments: 8
Patch Set 3 : Use CheckedNumeric #
Total comments: 5
Patch Set 4 : Addressed nits #
Patch Set 5 : Move code that must run out of DCHECK #
Total comments: 1
Patch Set 6 : Addressed nit #
Created: 5 years, 11 months ago
Messages
Total messages: 37 (9 generated)
christiank@opera.com changed reviewers: + jbauman@chromium.org, reveman@chromium.org
This is the first CL for the tile texture compression feature. It prepares the GPU memory buffers for formats that use less than a byte per pixel.
https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
File content/common/gpu/client/gpu_memory_buffer_impl.h (right):

https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
content/common/gpu/client/gpu_memory_buffer_impl.h:44: gfx::GpuMemoryBuffer::Format format);
Hm, this doesn't work with stride != width * bytes_per_pixel. I assume that stride is never less than 1 byte for compressed formats, as that would require more significant changes to the GpuMemoryBuffer API, correct? Could we introduce a BytesPerRow or StrideInBytes function instead of this?

nit: gfx::GpuMemoryBuffer:: prefix unnecessary

https://codereview.chromium.org/806653006/diff/1/ui/gl/gl_image_shared_memory.cc
File ui/gl/gl_image_shared_memory.cc (right):

https://codereview.chromium.org/806653006/diff/1/ui/gl/gl_image_shared_memory...
ui/gl/gl_image_shared_memory.cc:20: DCHECK_EQ(size.GetArea() * GLImageMemory::BitsPerPixel(format) % 8, 0u);
Hm, so this will fail for a 1x1 buffer with bpp < 8. Is that what we want? Are not all buffer sizes supported with compressed formats? In that case, how are we protecting against a malicious renderer using an invalid size and causing undefined behavior on the browser or GPU process side?
auygun@opera.com changed reviewers: + auygun@opera.com
https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
File content/common/gpu/client/gpu_memory_buffer_impl.h (right):

https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
content/common/gpu/client/gpu_memory_buffer_impl.h:44: gfx::GpuMemoryBuffer::Format format);
On 2014/12/15 17:33:34, reveman wrote:
> Hm, this doesn't work with stride != width * bytes_per_pixel. I assume that
> stride is never less than 1 byte for compressed formats, as that would
> require more significant changes to the GpuMemoryBuffer API, correct? Could
> we introduce a BytesPerRow or StrideInBytes function instead of this?
>
> nit: gfx::GpuMemoryBuffer:: prefix unnecessary

You're right. This code is based on the assumption that width and height are evenly divisible by 4 (texture compression operates on 4x4 pixel blocks), which happens to work with the sizes we're using.

The stride can never be less than 1 byte. However, for some image sizes it does not come out to a whole number of bytes. Consider an image with height 5 and width 4. It will be compressed as two 4x4 blocks, where the unused pixels in the last block are ignored. The final compressed size will be 2 * 8 = 16 bytes. Calculating the stride from this would yield 16 / 5 = 3.2.

Since compression works on 4x4 blocks, it could make more sense to think of a row as 4 pixels high. In that case the above image would be interpreted as two rows with a stride of 8 bytes. I am not sure if that fits within the current model, though.

I fear that supporting images with a width and height that are not evenly divisible by 4 might require more significant changes after all.

What do you think of all this? Is it feasible?

https://codereview.chromium.org/806653006/diff/1/ui/gl/gl_image_shared_memory.cc
File ui/gl/gl_image_shared_memory.cc (right):

https://codereview.chromium.org/806653006/diff/1/ui/gl/gl_image_shared_memory...
ui/gl/gl_image_shared_memory.cc:20: DCHECK_EQ(size.GetArea() * GLImageMemory::BitsPerPixel(format) % 8, 0u);
On 2014/12/15 17:33:34, reveman wrote:
> Hm, so this will fail for a 1x1 buffer with bpp < 8. Is that what we want?
> Are not all buffer sizes supported with compressed formats? In that case,
> how are we protecting against a malicious renderer using an invalid size and
> causing undefined behavior on the browser or GPU process side?

Currently we don't support images with a width and height not evenly divisible by 4, so yes, under that premise we expect this to fail.

We try to avoid this by never compressing images that don't meet the criteria. We currently have checks for this in the tile manager. However, that doesn't protect us against a malicious renderer. Maybe a CHECK is more appropriate here?
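To make the 4x4 block math above concrete: a format like ETC1 stores each 4x4 pixel block in 8 bytes, and a partial block at the right or bottom edge still occupies a full block. A minimal sketch of that size calculation (the helper name is hypothetical, not part of this CL):

#include <stddef.h>

// Size in bytes of an image compressed as 4x4 pixel blocks of 8 bytes each
// (ETC1-style). Dimensions that are not multiples of 4 round up to a whole
// block, so a 4x5 image occupies two blocks = 16 bytes, which gives the
// non-whole per-row "stride" of 16 / 5 = 3.2 bytes discussed above.
size_t CompressedSizeInBytes(size_t width, size_t height) {
  const size_t kBlockDimension = 4;
  const size_t kBytesPerBlock = 8;
  size_t blocks_wide = (width + kBlockDimension - 1) / kBlockDimension;
  size_t blocks_high = (height + kBlockDimension - 1) / kBlockDimension;
  return blocks_wide * blocks_high * kBytesPerBlock;
}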
https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
File content/common/gpu/client/gpu_memory_buffer_impl.h (right):

https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
content/common/gpu/client/gpu_memory_buffer_impl.h:44: gfx::GpuMemoryBuffer::Format format);
On 2014/12/16 12:02:52, christiank wrote:
> On 2014/12/15 17:33:34, reveman wrote:
> > Hm, this doesn't work with stride != width * bytes_per_pixel. I assume that
> > stride is never less than 1 byte for compressed formats, as that would
> > require more significant changes to the GpuMemoryBuffer API, correct? Could
> > we introduce a BytesPerRow or StrideInBytes function instead of this?
> >
> > nit: gfx::GpuMemoryBuffer:: prefix unnecessary
>
> You're right. This code is based on the assumption that width and height are
> evenly divisible by 4 (texture compression operates on 4x4 pixel blocks),
> which happens to work with the sizes we're using.
>
> The stride can never be less than 1 byte. However, for some image sizes it
> does not come out to a whole number of bytes. Consider an image with height
> 5 and width 4. It will be compressed as two 4x4 blocks, where the unused
> pixels in the last block are ignored. The final compressed size will be
> 2 * 8 = 16 bytes. Calculating the stride from this would yield 16 / 5 = 3.2.
>
> Since compression works on 4x4 blocks, it could make more sense to think of
> a row as 4 pixels high. In that case the above image would be interpreted as
> two rows with a stride of 8 bytes. I am not sure if that fits within the
> current model, though.
>
> I fear that supporting images with a width and height that are not evenly
> divisible by 4 might require more significant changes after all.
>
> What do you think of all this? Is it feasible?

Thanks for explaining.

Not supporting some dimensions with compressed formats sgtm, such that GpuMemoryBuffer::GetStride can always return a value that makes sense. We just need to make sure we correctly reject any attempts to allocate buffers with invalid dimensions in the browser process.

So how about we change this function to StrideInBytes with a "switch (format)" statement and appropriate DCHECKs? Btw, it looks like a BitsPerPixel function is not really necessary and StrideInBytes alone is sufficient.

https://codereview.chromium.org/806653006/diff/1/ui/gl/gl_image_shared_memory.cc
File ui/gl/gl_image_shared_memory.cc (right):

https://codereview.chromium.org/806653006/diff/1/ui/gl/gl_image_shared_memory...
ui/gl/gl_image_shared_memory.cc:20: DCHECK_EQ(size.GetArea() * GLImageMemory::BitsPerPixel(format) % 8, 0u);
On 2014/12/16 12:02:52, christiank wrote:
> On 2014/12/15 17:33:34, reveman wrote:
> > Hm, so this will fail for a 1x1 buffer with bpp < 8. Is that what we want?
> > Are not all buffer sizes supported with compressed formats? In that case,
> > how are we protecting against a malicious renderer using an invalid size
> > and causing undefined behavior on the browser or GPU process side?
>
> Currently we don't support images with a width and height not evenly
> divisible by 4, so yes, under that premise we expect this to fail.
>
> We try to avoid this by never compressing images that don't meet the
> criteria. We currently have checks for this in the tile manager. However,
> that doesn't protect us against a malicious renderer. Maybe a CHECK is more
> appropriate here?

A renderer should not be able to cause the GPU service to crash.

I think if you replace GLImageMemory::BitsPerPixel with GLImageMemory::StrideInBytes the problem will just go away, as Map() will fail if the size/format combination is invalid.
Another question regarding StrideInBytes: following the pattern of the old BytesPerPixel, it will have to be re-implemented in a few places. If I were to move the implementation from GpuMemoryBufferImpl to GpuMemoryBuffer, a single implementation could be shared more efficiently. However, that might not be in the spirit of keeping GpuMemoryBuffer an interface only. What do you think?

https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
File content/common/gpu/client/gpu_memory_buffer_impl.h (right):

https://codereview.chromium.org/806653006/diff/1/content/common/gpu/client/gp...
content/common/gpu/client/gpu_memory_buffer_impl.h:44: gfx::GpuMemoryBuffer::Format format);
On 2014/12/16 16:40:12, reveman wrote:
> On 2014/12/16 12:02:52, christiank wrote:
> > [4x4 block discussion quoted in full above]
>
> Thanks for explaining.
>
> Not supporting some dimensions with compressed formats sgtm, such that
> GpuMemoryBuffer::GetStride can always return a value that makes sense. We
> just need to make sure we correctly reject any attempts to allocate buffers
> with invalid dimensions in the browser process.
>
> So how about we change this function to StrideInBytes with a
> "switch (format)" statement and appropriate DCHECKs? Btw, it looks like a
> BitsPerPixel function is not really necessary and StrideInBytes alone is
> sufficient.

Sounds good to me. I have now replaced PixelsToBytes and BitsPerPixel with a new function StrideInBytes. I guess the format-specific checks will come in the follow-up review when I add support for compressed formats.
I'd like to keep implementation details such as StrideInBytes out of the generic gfx::GpuMemoryBuffer interface for now.

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
File content/common/gpu/client/gpu_memory_buffer_impl.cc (right):

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl.cc:79: size_t GpuMemoryBufferImpl::StrideInBytes(size_t width, Format format) {
I think this needs to be using CheckedNumeric. See my other comments.

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
File content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc (right):

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc:53: base::CheckedNumeric<int> buffer_size = StrideInBytes(size.width(), format);
Can a malicious renderer make this overflow? I.e., what if size.width() is INT_MAX?

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/gpu_m...
File content/common/gpu/gpu_memory_buffer_factory_io_surface.cc (right):

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/gpu_m...
content/common/gpu/gpu_memory_buffer_factory_io_surface.cc:108: width_in_bytes / size.width());
nit: Please keep using BytesPerPixel here instead. Deriving bpp from the stride feels backwards and fragile. I think it's fine that some implementations don't use StrideInBytes. Maybe just DCHECK_EQ(BytesPerPixel(format) * width, StrideInBytes(width, format)) if we're using StrideInBytes in GpuMemoryBufferImplIOSurface.

https://codereview.chromium.org/806653006/diff/20001/ui/gl/gl_image_memory.cc
File ui/gl/gl_image_memory.cc (right):

https://codereview.chromium.org/806653006/diff/20001/ui/gl/gl_image_memory.cc...
ui/gl/gl_image_memory.cc:108: return width * 4;
Can this overflow? Should it be using CheckedNumeric just like SizeInBytes in gl_image_shared_memory.cc?
https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
File content/common/gpu/client/gpu_memory_buffer_impl.cc (right):

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl.cc:79: size_t GpuMemoryBufferImpl::StrideInBytes(size_t width, Format format) {
On 2014/12/18 18:49:56, reveman wrote:
> I think this needs to be using CheckedNumeric. See my other comments.

Agreed, it's now updated to use CheckedNumeric.

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
File content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc (right):

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc:53: base::CheckedNumeric<int> buffer_size = StrideInBytes(size.width(), format);
On 2014/12/18 18:49:56, reveman wrote:
> Can a malicious renderer make this overflow? I.e., what if size.width() is
> INT_MAX?

Should be fixed now that StrideInBytes uses CheckedNumeric internally.

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/gpu_m...
File content/common/gpu/gpu_memory_buffer_factory_io_surface.cc (right):

https://codereview.chromium.org/806653006/diff/20001/content/common/gpu/gpu_m...
content/common/gpu/gpu_memory_buffer_factory_io_surface.cc:108: width_in_bytes / size.width());
On 2014/12/18 18:49:57, reveman wrote:
> nit: Please keep using BytesPerPixel here instead. Deriving bpp from the
> stride feels backwards and fragile. I think it's fine that some
> implementations don't use StrideInBytes. Maybe just
> DCHECK_EQ(BytesPerPixel(format) * width, StrideInBytes(width, format)) if
> we're using StrideInBytes in GpuMemoryBufferImplIOSurface.

Alright, I missed that this file actually had its own BytesPerPixel function.

https://codereview.chromium.org/806653006/diff/20001/ui/gl/gl_image_memory.cc
File ui/gl/gl_image_memory.cc (right):

https://codereview.chromium.org/806653006/diff/20001/ui/gl/gl_image_memory.cc...
ui/gl/gl_image_memory.cc:108: return width * 4;
On 2014/12/18 18:49:57, reveman wrote:
> Can this overflow? Should it be using CheckedNumeric just like SizeInBytes
> in gl_image_shared_memory.cc?

Yes, I think it can. It's now updated to use CheckedNumeric.
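For readers following along, the overflow-safe pattern being converged on here looks roughly like the following sketch using base::CheckedNumeric from base/numerics/safe_math.h. The format list is an illustrative subset, and the bool-plus-out-param signature is inferred from the snippets quoted later in this thread, not copied from the landed patch:

#include <stddef.h>

#include "base/numerics/safe_math.h"

// Illustrative stand-in for gfx::GpuMemoryBuffer::Format; the real code
// switches over the full format enum.
enum Format { RGBA_8888, BGRA_8888 };

// Returns false instead of wrapping around when a hostile width
// (e.g. INT_MAX from a compromised renderer) would overflow size_t.
bool StrideInBytes(size_t width, Format format, size_t* stride_in_bytes) {
  base::CheckedNumeric<size_t> stride = width;
  switch (format) {
    case RGBA_8888:
    case BGRA_8888:
      stride *= 4;  // 4 bytes per pixel for these formats.
      break;
  }
  if (!stride.IsValid())
    return false;
  *stride_in_bytes = stride.ValueOrDie();
  return true;
}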
lgtm with nits

https://codereview.chromium.org/806653006/diff/40001/content/common/gpu/clien...
File content/common/gpu/client/gpu_memory_buffer_impl.cc (right):

https://codereview.chromium.org/806653006/diff/40001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl.cc:83: base::CheckedNumeric<size_t> w = width;
nit: maybe s/w/s/ for stride

https://codereview.chromium.org/806653006/diff/40001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl.cc:90: default:
nit: please avoid having a default case, as that allows the compiler to help ensure we update this code after adding new formats. It's still nice to have the NOTREACHED outside the switch statement in case format is somehow messed up at runtime. Maybe just keep this as before for now and put the IsValid check inside the switch statement.

https://codereview.chromium.org/806653006/diff/40001/content/common/gpu/clien...
File content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc (right):

https://codereview.chromium.org/806653006/diff/40001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc:141: CHECK(StrideInBytes(size_.width(), format_, &stride_in_bytes));
nit: why CHECK and not DCHECK? I think a DCHECK is preferred here.

https://codereview.chromium.org/806653006/diff/40001/ui/gl/gl_image_memory.cc
File ui/gl/gl_image_memory.cc (right):

https://codereview.chromium.org/806653006/diff/40001/ui/gl/gl_image_memory.cc...
ui/gl/gl_image_memory.cc:106: base::CheckedNumeric<size_t> w = width;
nit: s/w/s/ for stride?

https://codereview.chromium.org/806653006/diff/40001/ui/gl/gl_image_memory.cc...
ui/gl/gl_image_memory.cc:121: return true;
nit: would be nice to have a NOTREACHED check here in case format is an invalid value at runtime. Maybe move this IsValid code up into the switch statement?
Can you also update the description to better match the latest version of this patch? Thanks!
christiank@opera.com changed reviewers: - auygun@opera.com
The CQ bit was checked by christiank@opera.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/806653006/60001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: ios_dbg_simulator on tryserver.chromium.mac (http://build.chromium.org/p/tryserver.chromium.mac/builders/ios_dbg_simulator...)
jbauman, would you mind having a look as well?
lgtm (though I'm not sure how meaningful stride is for compressed formats)
The CQ bit was checked by christiank@opera.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/806653006/60001
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: chromium_presubmit on tryserver.chromium.linux (http://build.chromium.org/p/tryserver.chromium.linux/builders/chromium_presub...) ios_dbg_simulator on tryserver.chromium.mac (http://build.chromium.org/p/tryserver.chromium.mac/builders/ios_dbg_simulator...)
The CQ bit was checked by christiank@opera.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/806653006/60001
Message was sent while issue was closed.
Committed patchset #4 (id:60001)
Message was sent while issue was closed.
Patchset 4 (id:??) landed as https://crrev.com/53e678c7962e12efe4bca3b58c9efbf755b86798 Cr-Commit-Position: refs/heads/master@{#311862}
Message was sent while issue was closed.
A revert of this CL (patchset #4 id:60001) has been created in https://codereview.chromium.org/852283003/ by ricow@google.com. The reason for reverting is: This seems to be causing a lot of redness, e.g. http://build.chromium.org/p/chromium.win/buildstatus?builder=Win7%20Tests%20%....
Message was sent while issue was closed.
A revert of this CL (patchset #4 id:60001) has been created in https://codereview.chromium.org/858583003/ by maniscalco@chromium.org. The reason for reverting is: Reverting because this likely broke XP Tests: https://code.google.com/p/chromium/issues/detail?id=449576 .
Message was sent while issue was closed.
I have now fixed the failing test. How do I go about trying to commit it once again?
Message was sent while issue was closed.
On Mon, Jan 19, 2015 at 11:28 PM, <christiank@opera.com> wrote:
> I have now fixed the failing test. How do I go about trying to commit it
> once again?

Unclose this CL, upload the change as a patchset here, have a reviewer look at it, then CQ again.
On 2015/01/20 21:34:51, chromium-reviews wrote:
> On Mon, Jan 19, 2015 at 11:28 PM, <christiank@opera.com> wrote:
> > I have now fixed the failing test. How do I go about trying to commit it
> > once again?
>
> Unclose this CL, upload the change as a patchset here, have a reviewer look
> at it, then CQ again.

Thanks! I had already uploaded the fix (patch set #5). Reveman, would you mind having a quick look?
lgtm with nit

https://codereview.chromium.org/806653006/diff/80001/content/common/gpu/clien...
File content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc (right):

https://codereview.chromium.org/806653006/diff/80001/content/common/gpu/clien...
content/common/gpu/client/gpu_memory_buffer_impl_shared_memory.cc:142: NOTREACHED();
nit: I would prefer:

bool valid_stride = StrideInBytes(...
DCHECK(valid_stride);

as it looks a bit weird with an if statement that could expand to "if (x);" in a release build...
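Spelled out, the shape this nit asks for is roughly the following fragment (member and helper names follow the snippets quoted earlier in the thread; a sketch, not the landed diff):

// Compute the stride unconditionally so the work is never compiled away,
// then assert on the result; DCHECK(valid_stride) disappears from release
// builds while the assignment above it remains.
size_t stride_in_bytes = 0;
bool valid_stride = StrideInBytes(size_.width(), format_, &stride_in_bytes);
DCHECK(valid_stride);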
The CQ bit was checked by christiank@opera.com
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/patch-status/806653006/100001
Message was sent while issue was closed.
Committed patchset #6 (id:100001)
Message was sent while issue was closed.
Patchset 6 (id:??) landed as https://crrev.com/a78ab1942c818ade187c060957f7228a1e694ec9 Cr-Commit-Position: refs/heads/master@{#312595}