Created: 7 years, 4 months ago by sheu
Modified: 7 years, 4 months ago
CC: chromium-reviews, nkostylev+watch_chromium.org, jam, apatrick_chromium, joi+watch-content_chromium.org, feature-media-reviews_chromium.org, darin-cc_chromium.org, oshima+watch_chromium.org, stevenjb+watch_chromium.org, davemoore+watch_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@git-svn
Visibility: Public
Description: Add media::VideoEncodeAccelerator with WebRTC integration
* Adds media::VideoEncodeAccelerator class.
* Adds GpuVideoEncodeAccelerator{,Host} classes and the appropriate IPC.
* Integrates into WebRTC stack with RTCVideoEncoderFactory/RTCVideoEncoder.
* Renames media::GpuVideoDecodeFactories -> media::GpuVideoAcceleratorFactories
and generalizes it for use by the encode accelerator implementations as well.
BUG=260210
BUG=170345
TEST=local build, run on CrOS snow; local build, unittests on desktop Linux
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=217276
Patch Set 1 : 4cd5068c WIP - almost done, just comments (and debugging prints) left #
Total comments: 40
Patch Set 2 : bb555936 Comment updates. #
Patch Set 3 : 7fd9dbd5 More debugging statements, some fixes #
Total comments: 109
Patch Set 4 : d8fba33b Address comments, minus RTCVideoEncoder and media::VEA #
Total comments: 7
Patch Set 5 : 2247fd82 Rebase. #
Total comments: 16
Patch Set 6 : 9e8f21a0 Comments addressed. #
Total comments: 117
Patch Set 7 : fcfd089c More comments addressed. #
Total comments: 38
Patch Set 8 : d9b0059b Comments, fixes from debugging. Works now. #
Total comments: 7
Patch Set 9 : b4cd612a Comments, added framerate to RequestEncodingParametersChange #
Total comments: 11
Patch Set 10 : 1a78d13a Rebase. #
Patch Set 11 : 7d63add6 Nits. #
Patch Set 12 : 7dec70f6 Whoops, one last linker thing I missed #
Total comments: 27
Patch Set 13 : 1ad5d4b8 piman@ comments. #
Total comments: 3
Patch Set 14 : 6dc223a7 RTCVideoEncoder::EncodeOneFrame reentrancy. #
Patch Set 15 : 6243184e VEA reentrancy. #
Total comments: 8
Patch Set 16 : 5bb3eab0 IPC bits, mostly. #
Patch Set 17 : 9830db80 Missing changes from last patchset #
Total comments: 5
Patch Set 18 : d37945e8 Rebase. #
Patch Set 19 : 0c3b8825 unsigned-ness #
Patch Set 20 : d3982027 CQ nits. #

Messages
Total messages: 70 (0 generated)
Some initial comments on VEA API. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:34: } max_framerate; I'm not sure about this. Is this because of something specific in our HW or required by webrtc? Max framerate may depend on selected codec, profile, resolution, bitrate, etc... https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:55: // Callback to tell client what size of buffers to provide for input and s/tell client/tell the client/ https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:56: // output. The VEA disclaims use or ownership of all previously provided So this means that we can have one set of input/output buffers at a time and we switch between sets via RequireBitstreamBuffers. Who is responsible for freeing them? I assume after RequireBitstreamBuffers is called, the client is expected to free the old set? https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:62: // |input_dimensions| is the logical dimensions of the input frames to s/is/are perhaps? I understand the dimensions part, but I'd say explicitly it's still in pixels. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:65: // which case the input buffer should be padded to |nput_dimensions|. s/nput_/input_/ https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:66: // |output_size| is the required size of output buffers for this encoder. in bytes https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... 
media/video/video_encode_accelerator.h:79: size_t size, s/size/payload_size ? https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:102: int32 initial_bitrate) = 0; Perhaps we could drop initial_bitrate and simply call RequestEncodingParameterChange() before the first Encode() ? https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:110: virtual void Encode(const BitstreamBuffer& buffer, bool force_keyframe) = 0; Btw, why are we using BitstreamBuffer for inputs? For SHM? https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:118: // Request a change in the encoding parameters. This is only a request, s/in/to/ ? https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. If the change is possible, is there a requirement when will it happen (on next Encode()d frame, after X frames)? When can we try to call this again? I think we may need a notification whether the request could be fulfilled or not, otherwise the client may wait forever for it, may not know when it's ok to try a different change, etc. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:121: // |bitrate| is the requested new bitrate. in kbps, in bps?
Updated; just comment bits. posciak@ PTAL https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:34: } max_framerate; Right, this struct is used by the VEA to export a supported configuration. We'd get a list of these. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:55: // Callback to tell client what size of buffers to provide for input and On 2013/07/26 02:26:43, posciak wrote: > s/tell client/tell the client/ Done. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:56: // output. The VEA disclaims use or ownership of all previously provided On 2013/07/26 02:26:43, posciak wrote: > So this means that we can have one set of input/output buffers at a time and we > switch between sets via RequireBitstreamBuffers. Who is responsible for freeing > them? I assume after RequireBitstreamBuffers is called, the client is expected > to free the old set? Right, that's the "disclaims use or _ownership_" part of things. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:62: // |input_dimensions| is the logical dimensions of the input frames to On 2013/07/26 02:26:43, posciak wrote: > s/is/are perhaps? > > I understand the dimensions part, but I'd say explicitly it's still in pixels. Done. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:65: // which case the input buffer should be padded to |nput_dimensions|. On 2013/07/26 02:26:43, posciak wrote: > s/nput_/input_/ Done. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... 
media/video/video_encode_accelerator.h:66: // |output_size| is the required size of output buffers for this encoder. On 2013/07/26 02:26:43, posciak wrote: > in bytes Done. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:79: size_t size, On 2013/07/26 02:26:43, posciak wrote: > s/size/payload_size ? Done. Now to run through all the rest of the source files and rename :-P https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:102: int32 initial_bitrate) = 0; On 2013/07/26 02:26:43, posciak wrote: > Perhaps we could drop initial_bitrate and simply call > RequestEncodingParameterChange() before the first Encode() ? Thought about this. fischman@'s initial VEA spec also had bitrate in the Initialize() call, which I found kind of odd, but in the end I reasoned that encoders that don't support midstream bitrate changing should be able to just stub that call out. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:110: virtual void Encode(const BitstreamBuffer& buffer, bool force_keyframe) = 0; On 2013/07/26 02:26:43, posciak wrote: > Btw, why are we using BitstreamBuffer for inputs? For SHM? Yep. They also hold a size and ID nicely. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:118: // Request a change in the encoding parameters. This is only a request, On 2013/07/26 02:26:43, posciak wrote: > s/in/to/ ? Not a big deal, but sure :-P https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. On 2013/07/26 02:26:43, posciak wrote: > If the change is possible, is there a requirement when will it happen (on next > Encode()d frame, after X frames)? When can we try to call this again? 
I think we > may need a notification whether the request could be fulfilled or not, otherwise > the client may wait forever for it, may not know when it's ok to try a different > change, etc. I was hoping for this to be a fire-and-forget thing for the client. A more advisory call, I guess. There's a problem with the current MFC encoding parameters change bit where we can't actually specify exactly which frame a parameters change starts to take effect from, since all the V4L2 API allows us to do is to queue up buffers, and the parameter change ioctl on the userspace process thread is asynchronous with respect to the IRQ handler thread that actually schedules the buffer to the hardware. So I can't make a guarantee on when it has to happen. The above could be fixed in the driver (probably by attaching the encode parameters per-buffer), but that's something to talk to SLSI about, while they actually implement the parameter changing. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:121: // |bitrate| is the requested new bitrate. On 2013/07/26 02:26:43, posciak wrote: > in kbps, in bps? Done.
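The buffer-ownership contract hashed out above — once RequireBitstreamBuffers() is issued, the VEA "disclaims use or ownership" of all previously provided buffers and the client frees the old set and provides a new one — can be sketched with a toy client. All names here are illustrative, not the actual Chromium interface:

```cpp
// Toy sketch of the buffer lifecycle discussed above: whenever the VEA
// issues RequireBitstreamBuffers, the client drops its previous set and
// allocates a fresh one. Names are illustrative, not the real API.
#include <cassert>
#include <cstddef>
#include <vector>

struct ToyBitstreamBuffer {
  int id;
  size_t size;  // allocation size in bytes, per the requested output size
};

class ToyClient {
 public:
  // Mirrors the discussed Client::RequireBitstreamBuffers(): the old set is
  // freed (the VEA no longer uses or owns it), and |count| new buffers of
  // |output_buffer_size| bytes each are allocated.
  void RequireBitstreamBuffers(int count, size_t output_buffer_size) {
    buffers_.clear();  // old set is no longer referenced by the VEA
    for (int i = 0; i < count; ++i)
      buffers_.push_back(ToyBitstreamBuffer{next_id_++, output_buffer_size});
  }

  const std::vector<ToyBitstreamBuffer>& buffers() const { return buffers_; }

 private:
  int next_id_ = 0;
  std::vector<ToyBitstreamBuffer> buffers_;
};
```

Note that buffer IDs keep incrementing across sets, so a stale ID from a dismissed set can never be confused with a live one.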
fischman@: I believe this is what you've been wanting. (i.e. making EVS die)
https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:34: } max_framerate; On 2013/07/26 19:09:07, sheu wrote: > Right, this struct is used by the VEA to export a supported configuration. We'd > get a list of these. Ok, but how does max_framerate relate to max_resolution here? This structure should provide a list of resolutions with supported framerates for each resolution (or at least one max_framerate for each resolution). https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:102: int32 initial_bitrate) = 0; On 2013/07/26 19:09:07, sheu wrote: > On 2013/07/26 02:26:43, posciak wrote: > > Perhaps we could drop initial_bitrate and simply call > > RequestEncodingParameterChange() before the first Encode() ? > > Thought about this. fischman@'s initial VEA spec also had bitrate in the > Initialize() call, which I found kind of odd, but in the end I reasoned that > encoders that don't support midstream bitrate changing should be able to just > stub that call out. Not following you. I agree bitrate change or any other request to change parameters may not be supported and it's ok not to do anything, but how does that relate to having initial_bitrate here? https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:110: virtual void Encode(const BitstreamBuffer& buffer, bool force_keyframe) = 0; On 2013/07/26 19:09:07, sheu wrote: > On 2013/07/26 02:26:43, posciak wrote: > > Btw, why are we using BitstreamBuffer for inputs? For SHM? > > Yep. They also hold a size and ID nicely. Well, it's going to be less useful when we start using dmabufs or something. Can we use something else that also doesn't imply they hold bitstream? VideoFrame? 
Weren't you using it before? It also has SHM. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. On 2013/07/26 19:09:07, sheu wrote: > On 2013/07/26 02:26:43, posciak wrote: > > If the change is possible, is there a requirement when will it happen (on next > > Encode()d frame, after X frames)? When can we try to call this again? I think > we > > may need a notification whether the request could be fulfilled or not, > otherwise > > the client may wait forever for it, may not know when it's ok to try a > different > > change, etc. > > I was hoping this to be fire-and-forget thing for the client. A more advisory > call, I guess. > > There's a problem with the current MFC encoding parameters change bit where we > can't actually specify exactly which frame a parameters change starts to take > effect from, since all the V4L2 API allows us to do is to queue up buffers, and > the parameter change ioctl on the userspace process thread is asynchronous with > respect to the IRQ handler thread that actually schedules the buffer to the > hardware. So I'm can't make a guarantee on when it has to happen. > > The above could be fixed in the driver (probably by attaching the encode > parameters per-buffer), but that's something to talk to SLSI about, while they > actually implement the parameter changing. Ok, I guess it's ok not to have it frame-precise atm, but I'd still be advocating for at least a success/failure callback, even if we can't exactly tell when, for the other reasons stated above.
https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:34: } max_framerate; On 2013/07/27 08:35:48, posciak wrote: > On 2013/07/26 19:09:07, sheu wrote: > > Right, this struct is used by the VEA to export a supported configuration. > We'd > > get a list of these. > > Ok, but how does max_framerate relate to max_resolution here? This structure > should provide a list of resolutions with supported framerates for each > resolution (or at least one max_framerate for each resolution). I suppose we could have a vector of supported max resolutions/framerates, but until we have hardware and a usecase for different limits per resolution I'd just like to keep it this way? (Basically I don't want to make SupportedProfile too complicated... yet.) https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:102: int32 initial_bitrate) = 0; On 2013/07/27 08:35:48, posciak wrote: > On 2013/07/26 19:09:07, sheu wrote: > > On 2013/07/26 02:26:43, posciak wrote: > > > Perhaps we could drop initial_bitrate and simply call > > > RequestEncodingParameterChange() before the first Encode() ? > > > > Thought about this. fischman@'s initial VEA spec also had bitrate in the > > Initialize() call, which I found kind of odd, but in the end I reasoned that > > encoders that don't support midstream bitrate changing should be able to just > > stub that call out. > > Not following you. I agree bitrate change or any other request to change > parameters may not be supported and it's ok not to do anything, but how does > that relate to having initial_bitrate here? You need an initial bitrate to start encoding; can't just let the HW guess what you want. I'd consider it an initial parameter just like input_format or input_resolution.
https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:110: virtual void Encode(const BitstreamBuffer& buffer, bool force_keyframe) = 0; On 2013/07/27 08:35:48, posciak wrote: > Well, it's going to be less useful when we start using dmabufs or something. Can > we use something else that also doesn't imply they hold bitstream? VideoFrame? > Weren't you using it before? It also has SHM. DMABUFs are also FDs, which can be dup()ed and IPCed just like the typical anonymous shared memory backing we have for current SharedMemory buffers. So I don't see a need for changing this when DMABUFs weigh in, which is the reason I chose to go this route. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. On 2013/07/27 08:35:48, posciak wrote: > Ok, I guess it's ok not to have it frame-precise atm, but I'd still be > advocating for at least a success/failure callback, even if we can't exactly > tell when, for the other reasons stated above. I'll let fischman@ weigh in on this one, since the VEA design doc was fire-and-forget as well.
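The numerator/denominator framerate pair being debated above invites exactly the division bug flagged elsewhere in the review ("DCHECK_EQ(denominator, 1); or handle the division"). A minimal sketch of handling the division; the struct and field names are illustrative stand-ins for SupportedProfile, not the real definition:

```cpp
// Sketch of converting the rational max_framerate discussed above into an
// integer fps, handling the division rather than assuming a denominator of
// 1. Names are illustrative, not the actual SupportedProfile.
#include <cassert>

struct ToySupportedProfile {
  struct {
    int numerator;
    int denominator;
  } max_framerate;
};

// Returns the (truncated) maximum frames-per-second for |profile|.
int MaxFps(const ToySupportedProfile& profile) {
  assert(profile.max_framerate.denominator > 0);
  return profile.max_framerate.numerator / profile.max_framerate.denominator;
}
```

For example, a 30000/1001 profile (NTSC-style 29.97 fps) truncates to 29 rather than silently reporting 30000.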
https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. On 2013/07/29 18:24:52, sheu wrote: > On 2013/07/27 08:35:48, posciak wrote: > > Ok, I guess it's ok not to have it frame-precise atm, but I'd still be > > advocating for at least a success/failure callback, even if we can't exactly > > tell when, for the other reasons stated above. > > I'll let fischman@ weigh in on this one, since the VEA design doc was > fire-and-forget as well. I don't see the value in having the callback indicate success/failure today. The client can detect success/failure by observing the encoded bitrate (since it has information of timestamps as well as emitted bitstream buffers and their sizes). https://chromiumcodereview.appspot.com/20632002/diff/26001/content/browser/re... File content/browser/renderer_host/render_process_host_impl.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/browser/re... content/browser/renderer_host/render_process_host_impl.cc:874: switches::kDisableAcceleratedVideoEncode, Not sure what the point of this switch is considering there's no accelerated encode _other_ than for webrtc. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/client/gpu_channel_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_channel_host.cc:188: scoped_ptr<media::VideoEncodeAccelerator> GpuChannelHost::CreateVideoEncoder( I'm surprised by the asymmetry between CVD() and CED() (and the reflection of that in ownership semantics, IPC message differences, and so on). Are you planning to make the decode side look like the encode side, or do you have a reason for making them different? 
https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:10: #include "content/common/gpu/media/gpu_video_encode_accelerator.h" This seems unkosher. Why is renderer-side code including GPU-side code? I think you need APIs & IPCs for the supportprofiles business, and I think you do not need this #include. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:22: channel_->RemoveRoute(route_id_); Can the AddRoute be done in the ctor for symmetry? https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:28: return GpuVideoEncodeAccelerator::GetSupportedProfiles(); see above https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:86: return; NotifyError? https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:104: "buffer.id()=" << buffer.id(); NotifyError https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:127: if (channel_) braces on multiline if bodies https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/client/gpu_video_encode_accelerator_host.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.h:41: // GpuVideoEncodeAcceleratorHost::GetSupportedProfiles(). s/GpuVideoEncodeAcceleratorHost/GpuVideoEncodeAccelerator/ ? 
https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.h:70: // IPC handlers s/$/ proxying media::VideoEncodeAccelerator::Client./ https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:19: encoder_.release()->Destroy(); will crash if CreateEncoder() didn't succeed. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:46: DLOG(ERROR) << "GpuVideoEncodeAccelerator::NotifyInitializeDone(): " Here and (lots of) elsewhere, Send() does its own DLOGging. Doing a DLOG(ERROR) after every if (!Send) seems silly. (Send()'s DLOG could emit the message's type id to avoid losing information) https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:108: delete message; NotifyError? https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:126: << "x" << input_resolution.height() here and elsewhere you could use gfx::Size::ToString() to good effect. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/media/gpu_video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:47: // Static query for supported profiles. This query calls the appropriate comment should be attached to function below, not above. 
(though per elsewhere, I suspect this needs to become part of the API or be dropped) https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:55: scoped_ptr<media::VideoEncodeAccelerator> CreateEncoder(); This is a member function with a single call-site so I'd make the setting of the member the last thing it did and avoid having to Pass() and type out a few scoped_ptr<> incantations. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:59: // IPC handlers. s/\./ proxying media::VideoEncodeAccelerator./ https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:73: // The GpuChannel owns this GpuVideoEncodeAccelerator and will outlive this. s/this\./|this|./ https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:77: // Owned pointer to the underlying VideoEncodeAccelerator. comment doesn't add value but meh. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/content_co... File content/content_common.gypi (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/content_co... content/content_common.gypi:488: 'common/gpu/media/exynos_video_encode_accelerator.h', EVEA is not a part of this CL. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/media_stream_dependency_factory.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/media_stream_dependency_factory.cc:527: { Yeah, no. EVS (and the code-block this contorts around) needs to die before this can land. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... 
File content/renderer/media/renderer_gpu_video_accelerator_factories.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/renderer_gpu_video_accelerator_factories.cc:31: if (context) { how can this fail? if it's necessary, please reverse the test and early-return to de-indent the rest of the function. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/renderer_gpu_video_accelerator_factories.cc:85: AsyncCreateVideoDecodeAccelerator, Ugh, is this what clang-format makes?? If yes, IMO worth a go/clang-format-bug https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder.cc:32: Impl(const base::WeakPtr<RTCVideoEncoder>& weak_encoder); single-arg ctors need to be marked explicit. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder.h:37: // the thread that the RTCVideoEncoder constructor is called on. This description made me expect a http://c2.com/cgi/wiki?PimplIdiom (and thus confused over the fact that this class has private methods & members, which it wouldn't if you were using pimpl) but that's not what you're doing at all. Instead Impl serves the purpose of lifecycle extension, apparently because you're worried about webrtc deleting this directly. But I don't believe it does; I believe it goes through the factory's DeleteEncoder method to delete this, so you can introduce the needed lifecycle extension there. 
I think that would be a far preferable choice to the current approach because ATM RVE implies it owns RVE::Impl (via a scoped_ptr) and webrtc implies it owns RVE (via the factory's deleteencoder which is a pass-through), but in reality, it is ~Impl that deletes its "owning" RVE, and deleteencoder only schedules a future ~Impl to run. This covert ownership inversion is well worth avoiding IMO. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder.h:68: // VEA::RequireBitstreamBuffers(). Where are these numbers from? What is the effect of changing them? https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder.h:73: // WebRTC::VideoEncoder methods. no they're not ;) https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder.h:83: base::ThreadChecker thread_checker_; Not reviewing the rest of this file and its .cc since I expect them to change dramatically as a result of my comment at l.37. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder_factory.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:18: cricket::WebRtcVideoEncoderFactory::VideoCodec GVEAToWebRTCCodec( s/GVEA/VEA/ https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:27: name = std::string("VP8"); here and elsewhere, the explicit std::string() is unnecessary. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:38: fps = profile.max_framerate.numerator; DCHECK_EQ(denominator, 1); or handle the division. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m...
content/renderer/media/rtc_video_encoder_factory.cc:88: // by different decoders. Since we aren't running on the child thread and s/decoders/encoders/ https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:95: // No-op: our codec list is populated on installation. Yeah, I doubt this is going to fly... https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder_factory.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.h:27: RTCVideoEncoderFactory( explicit https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... File content/renderer/render_thread_impl.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... content/renderer/render_thread_impl.cc:917: gpu_channel_host, factories_loop, context3d); What's the use of allocating a RendererGpuVideoAcceleratorFactories with a null context? https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... File content/renderer/render_thread_impl.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... content/renderer/render_thread_impl.h:258: // Gets gpu factories, which will run on |factories_loop|. I think the NULL return value should be restored (see comments on .cc). https://chromiumcodereview.appspot.com/20632002/diff/26001/media/filters/gpu_... File media/filters/gpu_video_accelerator_factories.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/filters/gpu_... media/filters/gpu_video_accelerator_factories.h:67: virtual scoped_refptr<GpuVideoAcceleratorFactories> Clone() = 0; In the CL that added this to the subclass (without adding it here) there was chatter about that decision. Why do this here? 
https://chromiumcodereview.appspot.com/20632002/diff/26001/media/filters/mock... File media/filters/mock_gpu_video_accelerator_factories.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/filters/mock... media/filters/mock_gpu_video_accelerator_factories.h:32: VideoEncodeAccelerator*(VideoEncodeAccelerator::Client*)); doco the reason for these methods (which I assume is scoped_ptr<>'s unfriendliness to MOCKs?) https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:27: // Specification of an encoding profile supported by an encoder. Did you mean to add a query/response API pair here and corresponding IPC messages? https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:43: // A failure occurred at the browser layer or one of its dependencies. s/browser layer/GPU process/ https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:45: // failures, GPU library failures, browser programming errors, and so on. s/browser/GPU process/ https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:57: // buffers once this callback is made. This seems unfortunate in that it makes it higher-latency-than-necessary to increase the number of buffers in use. Why not have a VEA::Client::DismissBitstreamBuffer call, instead? https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:61: // need to hold onto some subset of inputs as reference pictures. The method name deals in bitstreambuffers but here you say frames. 
I believe this comment is misguided and the param really means "number of bitstream buffers to provide to the VEA". https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:65: // Initialize(), in which case the input buffer should be padded to s/input buffer/input to VEA::Encode()/ https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:67: // |output_size| is the required size of output buffers for this encoder, s/output_size/buffer_size/ https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:74: virtual void NotifyInputDone(int32 bitstream_buffer_id) = 0; s/Input/Encode/ (and in the comment) https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:77: // ownership of the specified buffer once this callback is made. "disclaims use and ownership" is blort. What is true is that the VEA promises not to use the BB from the time this method is called until VEA::UOBB is called again. I think the comment could do a better job of communicating this. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:79: // |size| is the byte size of the used portion of the buffer. missing "payload_" from param name https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:82: bool key_frame) = 0; doco param https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:87: protected: Why protected the dtor? https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:97: // |input_resolution| is the resolution of the input stream. 
It is unfortunate that this needs to be specified at the head of the stream for variable-resolution sources (such as tab capture when the tab is resized, someday soon). Can this become dynamic? https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:110: // visible size/width as specified in Initialize()). This should be a VideoFrame, not a BitstreamBuffer. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:110: // visible size/width as specified in Initialize()). Please make sure (DCHECK) the requirements on visible & physical size/width are obeyed. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:126: // Destroys the encoder: all pending inputs are dropped immediately and the s/inputs/inputs and outputs/
https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:34: } max_framerate; On 2013/07/29 18:24:52, sheu wrote: > On 2013/07/27 08:35:48, posciak wrote: > > On 2013/07/26 19:09:07, sheu wrote: > > > Right, this struct is used by the VEA to export a supported configuration. > > We'd > > > get a list of these. > > > > Ok, but how does max_framerate relate to max_resolution here? This structure > > should provide a list of resolutions with supported framerates for each > > resolution (or at least one max_framerate for each resolution). > > I suppose we could have a vector of supported max resolutions/framerates, but > until we have hardware and a usecase for different limits per resolution I'd > just like to keep it this way? > > (Basically I don't want to make SupportedProfile too complicated... yet.) How will this be used now then? Is there anything using it now? There are no calls in the API that return it that I can see... https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:102: int32 initial_bitrate) = 0; On 2013/07/29 18:24:52, sheu wrote: > On 2013/07/27 08:35:48, posciak wrote: > > On 2013/07/26 19:09:07, sheu wrote: > > > On 2013/07/26 02:26:43, posciak wrote: > > > > Perhaps we could drop initial_bitrate and simply call > > > > RequestEncodingParameterChange() before the first Encode() ? > > > > > > Thought about this. fischman@'s initial VEA spec also had bitrate in the > > > Initialize() call, which I found kind of odd, but in the end I reasoned that > > > encoders that don't support midstream bitrate changing should be able to > just > > > stub that call out. > > > > Not following you. 
I agree bitrate change or any other request to change > > parameters may not be supported and it's ok not to do anything, but how does > > that relate to having initial_bitrate here? > > You need an initial bitrate to start encoding; can't just let the HW guess what > you want. I'd consider it an initial parameter just like input_Format or > input_resolution. Or you could let encoder choose a default. I don't see a problem with that and encoders must have something set as default. I feel it's still better than having this one-off here in Initialize() and also that this is too tied to current use case. And when we add more knobs, will we only allow setting bitrate initially? Or ask for them to be set via RequestEncodingParameterChange before first Encode? https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:110: virtual void Encode(const BitstreamBuffer& buffer, bool force_keyframe) = 0; On 2013/07/29 18:24:52, sheu wrote: > On 2013/07/27 08:35:48, posciak wrote: > > Well, it's going to be less useful when we start using dmabufs or something. > Can > > we use something else that also doesn't imply they hold bitstream? VideoFrame? > > Weren't you using it before? It also has SHM. > > DMABUFs are also FDs, which can be dup()ed and IPCed just like the typical > anonymous shared memory backing we have for current SharedMemory buffers. So I > don't see a need for changing this when DMABUFs weigh in, which is the reason I > chose to go this route. Well, I won't push too hard. I just find unpleasantly asymmetric for VDA to do BitstreamBuffer->VideoFrame while VEA doing BitstreamBuffer->BitstreamBuffer. Looks like Ami agrees with me on PS3. https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. 
On 2013/07/31 23:01:12, Ami Fischman wrote: > On 2013/07/29 18:24:52, sheu wrote: > > On 2013/07/27 08:35:48, posciak wrote: > > > Ok, I guess it's ok not to have it frame-precise atm, but I'd still be > > > advocating for at least a success/failure callback, even if we can't exactly > > > tell when, for the other reasons stated above. > > > > I'll let fischman@ weigh in on this one, since the VEA design doc was > > fire-and-forget as well. > > I don't see the value in having the callback indicate success/failure today. > The client can detect success/failure by observing the encoded bitrate (since it > has information of timestamps as well as emitted bitstream buffers and their > sizes). But the client can't tell when the change in bitrate will happen. How long is it supposed to wait for it to happen or not? Should it try a different value when it fails, because minimum bitrate for example depends on resolution and sometimes it cannot go lower? I still feel callbacks may become necessary for other knobs added later, but I'm ok with not adding them now just for bitrate.
https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:69: virtual void RequireBitstreamBuffers(int input_count, I think we are missing a Flush() or something similar after all. This API requires the client to provide at least input_count input buffers at all times, otherwise the encoder may not return output buffers at all until it does. But at the end of the stream we need to force return of the rest of the stream without providing any more input buffers. Unless I'm missing something?
https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:102: int32 initial_bitrate) = 0; On 2013/08/01 03:21:39, posciak wrote: > On 2013/07/29 18:24:52, sheu wrote: > > On 2013/07/27 08:35:48, posciak wrote: > > > On 2013/07/26 19:09:07, sheu wrote: > > > > On 2013/07/26 02:26:43, posciak wrote: > > > > > Perhaps we could drop initial_bitrate and simply call > > > > > RequestEncodingParameterChange() before the first Encode() ? > > > > > > > > Thought about this. fischman@'s initial VEA spec also had bitrate in the > > > > Initialize() call, which I found kind of odd, but in the end I reasoned > that > > > > encoders that don't support midstream bitrate changing should be able to > > just > > > > stub that call out. > > > > > > Not following you. I agree bitrate change or any other request to change > > > parameters may not be supported and it's ok not to do anything, but how does > > > that relate to having initial_bitrate here? > > > > You need an initial bitrate to start encoding; can't just let the HW guess > what > > you want. I'd consider it an initial parameter just like input_Format or > > input_resolution. > > Or you could let encoder choose a default. I don't see a problem with that and > encoders must have something set as default. I feel it's still better than > having this one-off here in Initialize() and also that this is too tied to > current use case. And when we add more knobs, will we only allow setting bitrate > initially? Or ask for them to be set via RequestEncodingParameterChange before > first Encode? FTR I don't care about this question (I can easily accept either way). https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. 
On 2013/08/01 03:21:39, posciak wrote: > On 2013/07/31 23:01:12, Ami Fischman wrote: > > On 2013/07/29 18:24:52, sheu wrote: > > > On 2013/07/27 08:35:48, posciak wrote: > > > > Ok, I guess it's ok not to have it frame-precise atm, but I'd still be > > > > advocating for at least a success/failure callback, even if we can't > exactly > > > > tell when, for the other reasons stated above. > > > > > > I'll let fischman@ weigh in on this one, since the VEA design doc was > > > fire-and-forget as well. > > > > I don't see the value in having the callback indicate success/failure today. > > The client can detect success/failure by observing the encoded bitrate (since > it > > has information of timestamps as well as emitted bitstream buffers and their > > sizes). > > But the client can't tell when the change in bitrate will happen. How long is it > supposed to wait for it to happen or not? Should it try a different value when > it fails, because minimum bitrate for example depends on resolution and > sometimes it cannot go lower? > I still feel callbacks may become necessary for other knobs added later, but I'm > ok with not adding them now just for bitrate. Generally I want to minimize the amount of conversation the VEA and its client have about capabilities and fallbacks (because I think that way lies madness and also because I believe we don't need anywhere near that much flexibility for our use-cases). I'm fine revisiting this when more knobs are added. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:69: virtual void RequireBitstreamBuffers(int input_count, On 2013/08/01 09:00:42, posciak wrote: > I think we are missing a Flush() or something similar after all. 
This API > requires the client to provide at least input_count input buffers at all times, > otherwise the encoder may not return output buffers at all until it does. But at > the end of the stream we need to force return of the rest of the stream without > providing any more input buffers. Unless I'm missing something? We dropped Flush() from go/vea (see resolved comment thread from "3:53 PM Mar 20") when we realized it was unnecessary for real-time encode purposes. If/when a high-latency encode use-case is introduced then we can add Flush/NotifyFlushDone. (if you still think we need Flush now can you describe the use-case in terms of a hangout?)
https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/2001/media/video/video_e... media/video/video_encode_accelerator.h:119: // fulfilled on a best-effort basis. On 2013/08/01 20:49:22, Ami Fischman wrote: > On 2013/08/01 03:21:39, posciak wrote: > > On 2013/07/31 23:01:12, Ami Fischman wrote: > > > On 2013/07/29 18:24:52, sheu wrote: > > > > On 2013/07/27 08:35:48, posciak wrote: > > > > > Ok, I guess it's ok not to have it frame-precise atm, but I'd still be > > > > > advocating for at least a success/failure callback, even if we can't > > exactly > > > > > tell when, for the other reasons stated above. > > > > > > > > I'll let fischman@ weigh in on this one, since the VEA design doc was > > > > fire-and-forget as well. > > > > > > I don't see the value in having the callback indicate success/failure today. > > > The client can detect success/failure by observing the encoded bitrate > (since > > it > > > has information of timestamps as well as emitted bitstream buffers and their > > > sizes). > > > > But the client can't tell when the change in bitrate will happen. How long is > it > > supposed to wait for it to happen or not? Should it try a different value when > > it fails, because minimum bitrate for example depends on resolution and > > sometimes it cannot go lower? > > I still feel callbacks may become necessary for other knobs added later, but > I'm > > ok with not adding them now just for bitrate. > > Generally I want to minimize the amount of conversation the VEA and its client > have about capabilities and fallbacks (because I think that way lies madness and > also because I believe we don't need anywhere near that much flexibility for our > use-cases). > > I'm fine revisiting this when more knobs are added. Ok. I give up, you are right, too much detail for as of yet nonexisting requirements. 
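The thread above settles on fire-and-forget parameter changes: the client detects whether a RequestEncodingParametersChange() took effect by observing the emitted bitstream (buffer sizes plus timestamps). A minimal, hypothetical sketch of that observation — the helper name and shape are illustrative, not part of the real VEA API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: given the payload sizes of the bitstream buffers
// emitted over a window of |duration_ms| milliseconds, compute the bitrate
// the encoder actually produced, in bits/sec.
int64_t ObservedBitrateBps(const std::vector<size_t>& payload_bytes,
                           int64_t duration_ms) {
  if (duration_ms <= 0)
    return 0;
  int64_t total_bits = 0;
  for (size_t bytes : payload_bytes)
    total_bits += static_cast<int64_t>(bytes) * 8;
  return total_bits * 1000 / duration_ms;
}
```

A client that requested a lower bitrate would compare this observed value against its target over the next window or two, rather than waiting for a success/failure callback.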
https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:69: virtual void RequireBitstreamBuffers(int input_count, On 2013/08/01 20:49:22, Ami Fischman wrote: > On 2013/08/01 09:00:42, posciak wrote: > > I think we are missing a Flush() or something similar after all. This API > > requires the client to provide at least input_count input buffers at all > times, > > otherwise the encoder may not return output buffers at all until it does. But > at > > the end of the stream we need to force return of the rest of the stream > without > > providing any more input buffers. Unless I'm missing something? > > We dropped Flush() from go/vea (see resolved comment thread from "3:53 PM Mar > 20") when we realized it was unnecessary for real-time encode purposes. > If/when a high-latency encode use-case is introduced then we can add > Flush/NotifyFlushDone. > > (if you still think we need Flush now can you describe the use-case in terms of > a hangout?) In terms of a hangout, no. This came up while writing/planning testing/validation for encoders. Of course we could queue another set of input_count of empty frames to get more or less the same effect (although we don't really specify if there is a 1:1 relationship between input frames and output bitstream buffers either). But ok, ok, I know I'm (slightly) overthinking it. I guess this is not the place for the discussion on how to test the encoder at the end of the stream. I withdraw :)
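The contract being debated above (RequireBitstreamBuffers with no Flush) can be illustrated with a toy model: the encoder can only deliver output into buffers the client has handed it, so with no free buffers a frame simply pends. All names here are stand-ins for illustration, not the real media::VideoEncodeAccelerator API.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

// Toy stand-in for media::BitstreamBuffer: just an id.
struct FakeBitstreamBuffer {
  int32_t id;
};

// Toy model of the output-buffer contract under discussion.
class FakeEncoder {
 public:
  void UseOutputBitstreamBuffer(const FakeBitstreamBuffer& buffer) {
    free_buffers_.push_back(buffer);
    // Drain any frames that were waiting for an output buffer.
    while (pending_frames_ > 0 && !free_buffers_.empty()) {
      --pending_frames_;
      EmitOne();
    }
  }

  // Returns true if the frame produced output immediately.
  bool Encode() {
    if (free_buffers_.empty()) {
      ++pending_frames_;  // No Flush(): this frame waits indefinitely.
      return false;
    }
    EmitOne();
    return true;
  }

  int pending_frames() const { return pending_frames_; }
  const std::vector<int32_t>& ready_buffer_ids() const { return ready_; }

 private:
  void EmitOne() {
    // Stands in for the Client::BitstreamBufferReady() callback.
    ready_.push_back(free_buffers_.front().id);
    free_buffers_.pop_front();
  }

  std::deque<FakeBitstreamBuffer> free_buffers_;
  std::vector<int32_t> ready_;
  int pending_frames_ = 0;
};
```

This is the end-of-stream situation posciak raises: once the client stops feeding buffers, pending frames never drain, which is fine for real-time use (a hangout never "ends" mid-call from the encoder's perspective) but awkward for stream-exhaustive testing.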
https://codereview.chromium.org/20632002/diff/26001/content/renderer/media/me... File content/renderer/media/media_stream_dependency_factory.cc (right): https://codereview.chromium.org/20632002/diff/26001/content/renderer/media/me... content/renderer/media/media_stream_dependency_factory.cc:527: { On 2013/07/31 23:01:12, Ami Fischman wrote: > Yeah, no. > EVS (and the code-block this contorts around) needs to die before this can land. Yeah it's probably better if I back EVS stuff out first (see https://codereview.chromium.org/21421002/). John - let me know if you agree.
Updated. I haven't touched RTCVideoEncoder or media::VideoEncodeAccelerator yet. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/browser/re... File content/browser/renderer_host/render_process_host_impl.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/browser/re... content/browser/renderer_host/render_process_host_impl.cc:874: switches::kDisableAcceleratedVideoEncode, On 2013/07/31 23:01:12, Ami Fischman wrote: > Not sure what the point of this switch is considering there's no accelerated > encode _other_ than for webrtc. Future-proofing? I'll remove it for now. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/client/gpu_channel_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_channel_host.cc:188: scoped_ptr<media::VideoEncodeAccelerator> GpuChannelHost::CreateVideoEncoder( On 2013/07/31 23:01:12, Ami Fischman wrote: > I'm surprised by the asymmetry between CVD() and CED() (and the reflection of > that in ownership semantics, IPC message differences, and so on). Are you > planning to make the decode side look like the encode side, or do you have a > reason for making them different? Decode is tied to the CommandBufferImplProxy, since it's tied to a specific graphics context (since it uses GL). For encode, we're tied to the GpuChannel since there is no graphics context dependency. (That might change if we add TextureVideoFrame support later.) https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:10: #include "content/common/gpu/media/gpu_video_encode_accelerator.h" On 2013/07/31 23:01:12, Ami Fischman wrote: > This seems unkosher.
Why is renderer-side code including GPU-side code? > I think you need APIs & IPCs for the supportprofiles business, and I think you > do not need this #include. Yes this is slightly icky. My thought is that since we compile-time know the set of VDAs we're compiled with, we might as well just route it over to the host statically. Doing this over IPC, we should just query it once somewhere. I was thinking GpuInfo, but I'm not sure where I should be doing that. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:22: channel_->RemoveRoute(route_id_); On 2013/07/31 23:01:12, Ami Fischman wrote: > Can the AddRoute be done in the ctor for symmetry? Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:86: return; On 2013/07/31 23:01:12, Ami Fischman wrote: > NotifyError? Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:104: "buffer.id()=" << buffer.id(); On 2013/07/31 23:01:12, Ami Fischman wrote: > NotifyError Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:127: if (channel_) On 2013/07/31 23:01:12, Ami Fischman wrote: > braces on multiline if bodies Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/client/gpu_video_encode_accelerator_host.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.h:41: // GpuVideoEncodeAcceleratorHost::GetSupportedProfiles(). On 2013/07/31 23:01:12, Ami Fischman wrote: > s/GpuVideoEncodeAcceleratorHost/GpuVideoEncodeAccelerator/ ? Done. 
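For reference, the SupportedProfile shape debated earlier in this review (one max resolution and one max framerate per codec profile, kept deliberately simple) could be rendered roughly as below. This is a sketch of the structure under review, with illustrative field and function names, not the final media:: definition.

```cpp
#include <cstdint>

// Sketch of the SupportedProfile struct discussed above: one max
// resolution and one max framerate per codec profile.
struct SupportedProfile {
  int32_t profile;  // Codec profile identifier.
  struct {
    int32_t width;
    int32_t height;
  } max_resolution;
  struct {
    uint32_t numerator;
    uint32_t denominator;
  } max_framerate;
};

// The factory-side conversion mentioned in the rtc_video_encoder_factory.cc
// comments divides numerator by denominator (denominator is usually 1).
uint32_t MaxFps(const SupportedProfile& profile) {
  return profile.max_framerate.denominator
             ? profile.max_framerate.numerator /
                   profile.max_framerate.denominator
             : 0;
}
```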
https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/client/gpu_video_encode_accelerator_host.h:70: // IPC handlers On 2013/07/31 23:01:12, Ami Fischman wrote: > s/$/ proxying media::VideoEncodeAccelerator::Client./ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:19: encoder_.release()->Destroy(); On 2013/07/31 23:01:12, Ami Fischman wrote: > will crash if CreateEncoder() didn't succeed. Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:46: DLOG(ERROR) << "GpuVideoEncodeAccelerator::NotifyInitializeDone(): " On 2013/07/31 23:01:12, Ami Fischman wrote: > Here and (lots of) elsewhere, Send() does its own DLOGging. Doing a DLOG(ERROR) > after every if (!Send) seems silly. > (Send()'s DLOG could emit the message's type id to avoid losing information) Sounds good. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:108: delete message; On 2013/07/31 23:01:12, Ami Fischman wrote: > NotifyError? <zen> what is the sound of a NULL IPC channel sending? </zen> https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.cc:126: << "x" << input_resolution.height() On 2013/07/31 23:01:12, Ami Fischman wrote: > here and elsewhere you could use gfx::Size::ToString() to good effect. Did not know that existed. Sweet! https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... 
File content/common/gpu/media/gpu_video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:47: // Static query for supported profiles. This query calls the appropriate On 2013/07/31 23:01:12, Ami Fischman wrote: > comment should be attached to function below, not above. > (though per elsewhere, I suspect this needs to become part of the API or be > dropped) Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:55: scoped_ptr<media::VideoEncodeAccelerator> CreateEncoder(); On 2013/07/31 23:01:12, Ami Fischman wrote: > This is a member function with a single call-site so I'd make the setting of the > member the last thing it did and avoid having to Pass() and type out a few > scoped_ptr<> incantations. Used to be a static, not anymore. Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:59: // IPC handlers. On 2013/07/31 23:01:12, Ami Fischman wrote: > s/\./ proxying media::VideoEncodeAccelerator./ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/common/gpu... content/common/gpu/media/gpu_video_encode_accelerator.h:73: // The GpuChannel owns this GpuVideoEncodeAccelerator and will outlive this. On 2013/07/31 23:01:12, Ami Fischman wrote: > s/this\./|this|./ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/content_co... File content/content_common.gypi (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/content_co... content/content_common.gypi:488: 'common/gpu/media/exynos_video_encode_accelerator.h', On 2013/07/31 23:01:12, Ami Fischman wrote: > EVEA is not a part of this CL. Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... 
File content/renderer/media/media_stream_dependency_factory.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/media_stream_dependency_factory.cc:527: { On 2013/07/31 23:01:12, Ami Fischman wrote: > Yeah, no. > EVS (and the code-block this contorts around) needs to die before this can land. Roger. I'll land the revert before this one, and the merge conflict should remind me to uncontort this. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/renderer_gpu_video_accelerator_factories.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/renderer_gpu_video_accelerator_factories.cc:31: if (context) { On 2013/07/31 23:01:12, Ami Fischman wrote: > how can this fail? > if it's necessary, please reverse the test and early-return to de-indent the > rest of the function. If we disable HW-accelerated decode, we won't get a valid context. But we would still like to support HW-accelerated encode. Test reversed. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/renderer_gpu_video_accelerator_factories.cc:85: AsyncCreateVideoDecodeAccelerator, On 2013/07/31 23:01:12, Ami Fischman wrote: > Ugh, is this what clang-format makes?? If yes, IMO worth a go/clang-format-bug It's just because the scoped function name is so long. The formatting _does_ sort of make sense, for a computer with no innate sense of aesthetics. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder.cc:32: Impl(const base::WeakPtr<RTCVideoEncoder>& weak_encoder); On 2013/07/31 23:01:12, Ami Fischman wrote: > single-arg ctors need to be marked explicit. Done. 
https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder.h:37: // the thread that the RTCVideoEncoder constructor is called on. On 2013/07/31 23:01:12, Ami Fischman wrote: > This description made me expect a http://c2.com/cgi/wiki?PimplIdiom (and thus > confused over the fact that this class has private methods & members, which it > wouldn't if you were using pimpl) but that's not what you're doing at all. > Instead Impl serves the purpose of lifecycle extension, apparently because > you're worried about webrtc deleting this directly. But I don't believe it > does; I believe it goes through the factory's DeleteEncoder method to delete > this, so you can introduce the needed lifecycle extension there. I think that > would be a far preferable choice to the current approach because ATM it RVE > implies it ownes RVE::Impl (via a scoped_ptr) and webrtc implies it owns RVE > (via the factory's deleteencoder which is a pass-through), but in reality, it is > ~Impl that deletes its "owning" RVE, and deleteencoder only schedules a future > ~Impl to run. This covert ownership inversion is well worth avoiding IMO. "covert ownership inversion" would be a good underground punk-rock band name, BTW. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder_factory.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:18: cricket::WebRtcVideoEncoderFactory::VideoCodec GVEAToWebRTCCodec( On 2013/07/31 23:01:12, Ami Fischman wrote: > s/GVEA/VEA/ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... 
content/renderer/media/rtc_video_encoder_factory.cc:27: name = std::string("VP8"); On 2013/07/31 23:01:12, Ami Fischman wrote: > here and elsewhere, the explicit std::string() is unnecessary. Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:38: fps = profile.max_framerate.numerator; On 2013/07/31 23:01:12, Ami Fischman wrote: > DCHECK_EQ(denominator, 1); > or handle the division. The DCHECK is above, maybe not where you'd expect. I moved it. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:88: // by different decoders. Since we aren't running on the child thread and On 2013/07/31 23:01:12, Ami Fischman wrote: > s/decoders/encoders/ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.cc:95: // No-op: our codec list is populated on installation. On 2013/07/31 23:01:12, Ami Fischman wrote: > Yeah, I doubt this is going to fly... It does quite nicely. According to the interface the Observer is only notified if the codec list is not populated at creation time. Since we do that statically, we don't need to notify observers. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/rtc_video_encoder_factory.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/rtc_video_encoder_factory.h:27: RTCVideoEncoderFactory( On 2013/07/31 23:01:12, Ami Fischman wrote: > explicit Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... File content/renderer/render_thread_impl.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... 
content/renderer/render_thread_impl.cc:917: gpu_channel_host, factories_loop, context3d); On 2013/07/31 23:01:12, Ami Fischman wrote: > What's the use of allocating a RendererGpuVideoAcceleratorFactories with a null > context? HW-accelerated decoding requires a context, since it integrates with GL. HW-accelerated encoding does not. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... File content/renderer/render_thread_impl.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/r... content/renderer/render_thread_impl.h:258: // Gets gpu factories, which will run on |factories_loop|. On 2013/07/31 23:01:12, Ami Fischman wrote: > I think the NULL return value should be restored (see comments on .cc). If VDA is disabled, we still might want VEA (which is independent of any context). See .cc https://chromiumcodereview.appspot.com/20632002/diff/26001/media/filters/mock... File media/filters/mock_gpu_video_accelerator_factories.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/filters/mock... media/filters/mock_gpu_video_accelerator_factories.h:32: VideoEncodeAccelerator*(VideoEncodeAccelerator::Client*)); On 2013/07/31 23:01:12, Ami Fischman wrote: > doco the reason for these methods (which I assume is scoped_ptr<>'s > unfriendliness to MOCKs?) Got that right.
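The lifecycle-extension alternative fischman@ proposes for RTCVideoEncoder::Impl (have the factory's DeleteEncoder delete the encoder immediately and post destruction of Impl to the encoder thread, instead of ~Impl covertly deleting its "owner") can be sketched with a toy message loop. All names here are illustrative, not the real RTCVideoEncoder classes.

```cpp
#include <deque>
#include <functional>
#include <memory>

// Toy single-thread "message loop" standing in for the encoder thread.
struct FakeLoop {
  std::deque<std::function<void()>> tasks;
  void Post(std::function<void()> task) { tasks.push_back(std::move(task)); }
  void RunAll() {
    while (!tasks.empty()) {
      std::function<void()> task = std::move(tasks.front());
      tasks.pop_front();
      task();
    }
  }
};

struct Impl {
  explicit Impl(bool* destroyed_flag) : destroyed_flag(destroyed_flag) {}
  ~Impl() { *destroyed_flag = true; }
  bool* destroyed_flag;
};

struct Encoder {
  std::shared_ptr<Impl> impl;
};

// DeleteEncoder() destroys the encoder now, but extends Impl's lifetime by
// posting its destruction to the encoder thread's loop, so ownership flows
// one way: factory -> encoder -> (deferred) impl.
void DeleteEncoder(std::unique_ptr<Encoder> encoder, FakeLoop* loop) {
  std::shared_ptr<Impl> impl = encoder->impl;
  encoder.reset();  // The encoder dies immediately.
  loop->Post([impl]() mutable { impl.reset(); });  // Impl dies later.
}
```

The point of the sketch is the ordering: after DeleteEncoder() returns, the encoder is gone but Impl is still alive until the encoder thread runs the posted task.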
Wow, the change-then-rebase landed as a mess. Sorry guys. You might as well diff #5 against #3.
https://codereview.chromium.org/20632002/diff/102002/content/renderer/media/r... File content/renderer/media/renderer_gpu_video_accelerator_factories.h (right): https://codereview.chromium.org/20632002/diff/102002/content/renderer/media/r... content/renderer/media/renderer_gpu_video_accelerator_factories.h:91: // AsyncCreateVideoEncodeAccelerator returns its ouptu tin the vea_ member. "outpu tin" -> "output in" https://codereview.chromium.org/20632002/diff/102002/content/renderer/media/r... File content/renderer/media/rtc_video_encoder.cc (right): https://codereview.chromium.org/20632002/diff/102002/content/renderer/media/r... content/renderer/media/rtc_video_encoder.cc:159: codecSettings.startBitrate * 1000); // startBitrate is in kbits/sec. (copied from the EVS review comment by palmer@) should check codecSettings.startBitrate to make sure we don't overflow 32-bit int when multiplied by 1000 https://codereview.chromium.org/20632002/diff/102002/content/renderer/media/r... content/renderer/media/rtc_video_encoder.cc:507: base::Bind(&RTCVideoEncoder::Impl::Destroy, impl_)); nit: the above 2 lines ("FROM_HERE, base::Bind(&RTCVideoEncoder::Impl::Destroy, impl));") can fit in one 80-col line. https://codereview.chromium.org/20632002/diff/102002/content/renderer/media/r... content/renderer/media/rtc_video_encoder.cc:581: base::Bind(&RTCVideoEncoder::Impl::Destroy, impl_)); nit: ditto https://codereview.chromium.org/20632002/diff/102002/media/media.gyp File media/media.gyp (right): https://codereview.chromium.org/20632002/diff/102002/media/media.gyp#newcode354 media/media.gyp:354: 'filters/gpu_video_accelerated_factories.h', typo: "accelerated_factories" is probably incorrect? should be "accelerator_factories". nit: list files in alphabetical order. gpu_video_accelerator_factories.* should go before gpu_video_decoder.*
Reviewed only PS#3->#4 diff and still waiting for RTCVideoEncoder* changes. https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... File content/renderer/media/renderer_gpu_video_accelerator_factories.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/content/renderer/m... content/renderer/media/renderer_gpu_video_accelerator_factories.cc:85: AsyncCreateVideoDecodeAccelerator, On 2013/08/02 01:27:49, sheu wrote: > On 2013/07/31 23:01:12, Ami Fischman wrote: > > Ugh, is this what clang-format makes?? If yes, IMO worth a > go/clang-format-bug > > It's just because the scoped function name is so long. The formatting _does_ > sort of make sense, for a computer with no innate sense of aesthetics. I disagree, possibly because I believe the clang-format folks can deliver miracles. Filed b/10147819. https://chromiumcodereview.appspot.com/20632002/diff/108001/content/common/gp... File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/108001/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:10: #include "content/common/gpu/media/gpu_video_encode_accelerator.h" I feel this is no more legit than simply hard-coding the set of profiles. This is more future-proofing that isn't buying you anything. Drop until it's done right? On 2013/08/02 01:27:49, sheu wrote: > On 2013/07/31 23:01:12, Ami Fischman wrote: > > This seems unkosher. Why is renderer-side code including GPU-side code? > > I think you need APIs & IPCs for the supportprofiles business, and I think you > > do not need this #include. > > Yes this is slightly icky. My thought is that since we compile-time know the > set of VDAs we're compiled with, we might as well just route it over to the host > statically. > > Doing this over IPC, we should just query it once somewhere. I was thinking > GpuInfo, but I'm not sure where I should be doing that. 
https://chromiumcodereview.appspot.com/20632002/diff/108001/content/renderer/... File content/renderer/media/rtc_video_encoder_factory.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/108001/content/renderer/... content/renderer/media/rtc_video_encoder_factory.cc:38: DCHECK_EQ(profile.max_framerate.denominator, 1U); ugh, no, you had it in a better place before. Not sure how I missed it. https://chromiumcodereview.appspot.com/20632002/diff/108001/content/renderer/... content/renderer/media/rtc_video_encoder_factory.cc:95: // No-op: our codec list is populated on installation. On 2013/08/02 01:27:49, sheu wrote: > On 2013/07/31 23:01:12, Ami Fischman wrote: > > Yeah, I doubt this is going to fly... > > It does quite nicely. > > According to the interface the Observer is only notified if the codec list is > not populated at creation time. Since we do that statically, we don't need to > notify observers. What I was referring to was that if a renderer<->GPU roundtrip really was necessary (to get supported codecs) then this would not be a viable approach. Since you can hard-code supported profiles in the short term, this is fine. https://chromiumcodereview.appspot.com/20632002/diff/108001/media/filters/gpu... File media/filters/gpu_video_accelerator_factories.h (right): https://chromiumcodereview.appspot.com/20632002/diff/108001/media/filters/gpu... media/filters/gpu_video_accelerator_factories.h:67: virtual scoped_refptr<GpuVideoAcceleratorFactories> Clone() = 0; You missed this comment? > In the CL that added this to the subclass (without adding it here) there was > chatter about that decision. Why do this here?
https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:34: // stays long enough to properly shut down the VEA. You seem to have missed my super-insight into why I think you don't need this trampolining at all. https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... File content/renderer/media/rtc_video_encoder_factory.h (right): https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder_factory.h:10: #include "base/threading/thread.h" ?? https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder_factory.h:22: // This classes creates RTCVideoEncoder instances (each wrapping a classes
Updated; this should cover everything. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:27: // Specification of an encoding profile supported by an encoder. On 2013/07/31 23:01:12, Ami Fischman wrote: > Did you mean to add a query/response API pair here and corresponding IPC > messages? Not at the moment; this is statically queried. (See the hullabaloo around GpuVEA/GpuVEAHost.) https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:43: // A failure occurred at the browser layer or one of its dependencies. On 2013/07/31 23:01:12, Ami Fischman wrote: > s/browser layer/GPU process/ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:45: // failures, GPU library failures, browser programming errors, and so on. On 2013/07/31 23:01:12, Ami Fischman wrote: > s/browser/GPU process/ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:57: // buffers once this callback is made. On 2013/07/31 23:01:12, Ami Fischman wrote: > This seems unfortunate in that it makes it higher-latency-than-necessary to > increase the number of buffers in use. > Why not have a VEA::Client::DismissBitstreamBuffer call, instead? VDA::Client::DismissBitstreamBuffer ended up being either ignored or just batch-used on all buffers in our VDA implementations. In fact, all the client needs to do is to provide at least this many outstanding buffers -- there's no restriction on the client providing more. So increasing the number of buffers in use is a non-issue. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... 
media/video/video_encode_accelerator.h:61: // need to hold onto some subset of inputs as reference pictures. On 2013/07/31 23:01:12, Ami Fischman wrote: > The method name deals in bitstreambuffers but here you say frames. > I believe this comment is misguided and the param really means "number of > bitstream buffers to provide to the VEA". Moving to media::VideoFrame. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:65: // Initialize(), in which case the input buffer should be padded to On 2013/07/31 23:01:12, Ami Fischman wrote: > s/input buffer/input to VEA::Encode()/ Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:67: // |output_size| is the required size of output buffers for this encoder, On 2013/07/31 23:01:12, Ami Fischman wrote: > s/output_size/buffer_size/ output_buffer_size_, then. I'd like to make it clear this is an output. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:74: virtual void NotifyInputDone(int32 bitstream_buffer_id) = 0; On 2013/07/31 23:01:12, Ami Fischman wrote: > s/Input/Encode/ (and in the comment) I renamed it to Input a while back. I wanted to avoid implying that this callback notifies that a frame has been encoded: it doesn't, it just informs the client that a particular input can be dispensed with, no guarantees on whether its frame contents have actually been encoded yet. Actually, since we're moving to media::VideoFrame, and they're all reference-counted wrappers around const memory -- I'll just drop this entry point. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:79: // |size| is the byte size of the used portion of the buffer. On 2013/07/31 23:01:13, Ami Fischman wrote: > missing "payload_" from param name Done. 
https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:82: bool key_frame) = 0; On 2013/07/31 23:01:13, Ami Fischman wrote: > doco param Done. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:87: protected: On 2013/07/31 23:01:13, Ami Fischman wrote: > Why protected the dtor? Because VDA::Client does the same? Not sure if this should be private or protected, but generally we shouldn't be deleting VEA::Client through any pointers held to a Client instance. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:97: // |input_resolution| is the resolution of the input stream. On 2013/07/31 23:01:13, Ami Fischman wrote: > It is unfortunate that this needs to be specified at the head of the stream for > variable-resolution sources (such as tab capture when the tab is resized, > someday soon). Can this become dynamic? Unfortunately, we'd best tear down and reinstantiate a VEA at this point. Exynos HW does not support resolution changes midstream. The webrtc::VideoEncoder interface also doesn't provide for resolution changing, so I think we're good here. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:110: // visible size/width as specified in Initialize()). Alright, retargeting for media::VideoFrame. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:126: // Destroys the encoder: all pending inputs are dropped immediately and the On 2013/07/31 23:01:13, Ami Fischman wrote: > s/inputs/inputs and outputs/ Done. https://chromiumcodereview.appspot.com/20632002/diff/108001/content/common/gp... 
File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/108001/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:10: #include "content/common/gpu/media/gpu_video_encode_accelerator.h" On 2013/08/02 18:42:40, Ami Fischman wrote: > I feel this is no more legit than simply hard-coding the set of profiles. > This is more future-proofing that isn't buying you anything. > Drop until it's done right? It's not any more legit, but I think it at least centralizes the non-legit-ness in one place. This is why I put GpuVEA::GetSupportedProfiles() and GpuVEA::CreateEncoder() next to each other in gpu_video_encode_accelerator.cc (and why I proxy over to that file in the first place): so when I add EVEA in the next changelist, the "#if defined(OS_CHROMEOS) && defined(ARCH_ARM_EL)" business stays in one place. https://chromiumcodereview.appspot.com/20632002/diff/108001/content/renderer/... File content/renderer/media/rtc_video_encoder_factory.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/108001/content/renderer/... content/renderer/media/rtc_video_encoder_factory.cc:38: DCHECK_EQ(profile.max_framerate.denominator, 1U); On 2013/08/02 18:42:40, Ami Fischman wrote: > ugh, no, you had it in a better place before. Not sure how I missed it. Ha. I actually like it where it is now better :-) https://chromiumcodereview.appspot.com/20632002/diff/108001/media/filters/gpu... File media/filters/gpu_video_accelerator_factories.h (right): https://chromiumcodereview.appspot.com/20632002/diff/108001/media/filters/gpu... media/filters/gpu_video_accelerator_factories.h:67: virtual scoped_refptr<GpuVideoAcceleratorFactories> Clone() = 0; On 2013/08/02 18:42:40, Ami Fischman wrote: > You missed this comment? > > > In the CL that added this to the subclass (without adding it here) there was > > chatter about that decision. Why do this here? Ah, I missed it. Reverted. 
https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... File content/renderer/media/renderer_gpu_video_accelerator_factories.h (right): https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/renderer_gpu_video_accelerator_factories.h:91: // AsyncCreateVideoEncodeAccelerator returns its ouptu tin the vea_ member. On 2013/08/02 17:29:36, hshi1 wrote: > "outpu tin" -> "output in" Done. https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:34: // stays long enough to properly shut down the VEA. On 2013/08/02 22:00:49, Ami Fischman wrote: > You seem to have missed my super-insight into why I think you don't need this > trampolining at all. I moved state around and so pimpl is making more sense here. As to why we trampoline at all -- So what we get in and out of WebRtcVideoEncoderFactory is a plain webrtc::VideoEncoder. We'd have to downcast this to whatever Chrome-specific class we have internally to do lifecycle management, which is a little icky. More importantly, we do a lot of the processing on the MediaMessageLoop, whereas we're getting called from webrtc on its worker thread. So this split lets us separate the webrtc-thread state from the media-thread state, which is easier to reason about. https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:159: codecSettings.startBitrate * 1000); // startBitrate is in kbits/sec. On 2013/08/02 17:29:36, hshi1 wrote: > (copied from the EVS review comment by palmer@) should check > codecSettings.startBitrate to make sure we don't overflow 32-bit int when > multiplied by 1000 Done. https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... 
content/renderer/media/rtc_video_encoder.cc:507: base::Bind(&RTCVideoEncoder::Impl::Destroy, impl_)); On 2013/08/02 17:29:36, hshi1 wrote: > nit: the above 2 lines ("FROM_HERE, base::Bind(&RTCVideoEncoder::Impl::Destroy, > impl));") can fit in one 80-col line. Done. https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:581: base::Bind(&RTCVideoEncoder::Impl::Destroy, impl_)); On 2013/08/02 17:29:36, hshi1 wrote: > nit: ditto Done. https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... File content/renderer/media/rtc_video_encoder_factory.h (right): https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder_factory.h:10: #include "base/threading/thread.h" On 2013/08/02 22:00:49, Ami Fischman wrote: > ?? Done. https://chromiumcodereview.appspot.com/20632002/diff/102002/content/renderer/... content/renderer/media/rtc_video_encoder_factory.h:22: // This classes creates RTCVideoEncoder instances (each wrapping a On 2013/08/02 22:00:49, Ami Fischman wrote: > classes Done. https://chromiumcodereview.appspot.com/20632002/diff/102002/media/media.gyp File media/media.gyp (right): https://chromiumcodereview.appspot.com/20632002/diff/102002/media/media.gyp#n... media/media.gyp:354: 'filters/gpu_video_accelerated_factories.h', On 2013/08/02 17:29:36, hshi1 wrote: > typo: "accelerated_factories" is probably incorrect? should be > "accelerator_factories". > > nit: list files in alphabetical order. gpu_video_accelerator_factories.* should > go before gpu_video_decoder.* Done.
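For illustration, the startBitrate overflow guard discussed above could look like the following minimal sketch. KbpsToBps and its parameter names are hypothetical placeholders, not the actual Chromium identifiers; the real check in the CL operates on codecSettings.startBitrate.

```cpp
#include <cassert>
#include <cstdint>

// Convert a bitrate from kbits/sec to bits/sec, returning false instead of
// performing a multiplication that would overflow (signed integer overflow
// is undefined behavior in C++).
bool KbpsToBps(int32_t kbps, int32_t* bps) {
  const int32_t kInt32Max = 0x7fffffff;
  // Check before multiplying: kbps * 1000 must fit in an int32_t.
  if (kbps < 0 || kbps > kInt32Max / 1000)
    return false;
  *bps = kbps * 1000;
  return true;
}
```

This mirrors the `codecSettings.startBitrate > kint32max / 1000` comparison suggested later in the review: dividing the limit rather than multiplying the input keeps the check itself overflow-free.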
https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:57: // buffers once this callback is made. On 2013/08/03 01:31:03, sheu wrote: > On 2013/07/31 23:01:12, Ami Fischman wrote: > > This seems unfortunate in that it makes it higher-latency-than-necessary to > > increase the number of buffers in use. > > Why not have a VEA::Client::DismissBitstreamBuffer call, instead? > > VDA::Client::DismissBitstreamBuffer ended up being either ignored or just > batch-used on all buffers in our VDA implementations. > Not ignored anymore, since we have support for MSE/EME. WebRTC/hangout might be switching resolutions/settings as well and possibly quite often? It might make sense then to request/dismiss some pictures in the working set of VideoFrames (I'm assuming after switch to VideoFrame there will be a constant set of pre-alloced/inited VideoFrames, but not BitstreamBuffers, like in VDA model and VEA::NotifyInputDone() would be equivalent to a Dismiss?). That said, this seems like something that could be added later. On the other hand, we can't have a direct parallel to VDA here with output BitstreamBufs, on config/knob change we might need to request larger output buffers to be provided to us from that point? > In fact, all the client needs to do is to provide at least this many outstanding > buffers -- there's no restriction on the client providing more. So increasing > the number of buffers in use is a non-issue.
@posciak: don't let sheu confuse you! VDA does not have a "VDA::Client::DismissBitstreamBuffer" because VDA's Provide/Assign/Dismiss cycle is in *PictureBuffers*. VEA, OTOH, needs a cycle of *BitstreamBuffers*. @sheu: I believe you need to allow for DismissBB to allow the encoder to deal with larger videoframes being fed to Encode(), or even as target_bitrate is raised for a constant resolution. Unless, of course, you always ask for the max-sized buffers the encoder might be able to write, which is not necessarily non-legit, but should be called out.
https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:66: } Whoops, I missed updating this to account for the media::VideoFrame interface now. FINV
On 2013/08/03 02:24:00, Ami Fischman wrote: > @posciak: don't let sheu confuse you! VDA does not have a > "VDA::Client::DismissBitstreamBuffer" because VDA's Provide/Assign/Dismiss > cycle is in *PictureBuffers*. VEA, OTOH, needs a cycle of > *BitstreamBuffers*. Wow... confused you say? VDA::Client::DismissBitstreamBuffer, really? :P Can we establish some nomenclature here? When we say BitstreamBuffers, it's only about output, right? And we want to call inputs VFs, right? We'll want a constant working set of *inputs* when they become VFs, but not outputs, correct? If not, ignore below, but I think yes, we need to set up the encoder with fds etc. beforehand. In VDA change requests originate only from the decoder, while in VEA are initiated by client and then encoder reacts with change requests: 1) change in required number/size of outputs (as a result of client requesting new knob setting, or (2)) 2) change of working set of inputs on resolution change (new resolution from capturer getting to the client), which may potentially trigger 1) as well C - client, E - encoder 1) New knob setting: C - RequestEncodingParameterChange -> E E - NewSizeOfOutputs -> C from now on C provides larger BitstreamBuffers 2) New resolution: C - ChangeResolution() -> E E - ProvideInputs() -> C (E - NewSizeOfOutputs -> C) (from now on C provides larger BitstreamBuffers) C - UseThisNewSetOfVideoFrames() -> E Bottom line, minimal set of required calls is (some of them already exist, i.e. make ): VEA::Client::NewSizeOfOutputs() VEA::Client::ProvideInputs() // also drops previous set VEA::UseThisNewSetOfVideoFrames() VEA::ChangeResolution() But, so as not to overdesign as I tend to do all too often, perhaps: Rename Client::RequireBitstreamBuffers -> Client::RequireBuffers and have it trigger a drop of VF set and provide of new VF set, as well as potentially set a new size of inputs to be given from now on. 
New call to VEA::UseVideoFrames() to set up the working set, removing VEA::UseOutputBitstreamBuffer(). And a call to somehow notify about a change in resolution (we can't do it on Encode, as it passes the frame already). > @sheu: I believe you need to allow for DismissBB to allow the encoder to > deal with larger videoframes being fed to Encode(), or even as > target_bitrate is raised for a constant resolution. Unless, of course, you > always ask for the max-sized buffers the encoder might be able to write, > which is not necessarily non-legit, but should be called out. My thoughts exactly.
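As a toy model of the renegotiation flow proposed above, the sketch below shows a client reacting to an encoder's new output-buffer-size requirement after a parameter change. ToyEncoder, its methods, and the size formula are all hypothetical stand-ins, not the real VEA API; the point is only the handshake: knob change, new required size, dismiss-and-reprovide.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical encoder that recomputes its required output buffer size when
// the client changes encoding parameters, and rejects undersized buffers.
class ToyEncoder {
 public:
  explicit ToyEncoder(size_t initial_size) : required_size_(initial_size) {}

  // Client knob change: a higher bitrate may require larger output buffers.
  // Returns the new required size (standing in for a "NewSizeOfOutputs"
  // callback to the client).
  size_t RequestEncodingParametersChange(int bitrate_kbps) {
    required_size_ = static_cast<size_t>(bitrate_kbps) * 2;  // toy formula
    return required_size_;
  }

  // Stand-in for UseOutputBitstreamBuffer: only accept buffers big enough
  // for the current settings; smaller ones must be dismissed and larger
  // ones provided in their place.
  bool UseOutputBuffer(size_t buffer_size) const {
    return buffer_size >= required_size_;
  }

 private:
  size_t required_size_;
};
```

Under this model a Dismiss-style call (or always allocating max-sized buffers, as Ami notes) is what lets previously provided, now-too-small buffers leave the working set.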
On 2013/08/03 03:00:40, posciak wrote: > On 2013/08/03 02:24:00, Ami Fischman wrote: > > @posciak: don't let sheu confuse you! VDA does not have a > > "VDA::Client::DismissBitstreamBuffer" because VDA's Provide/Assign/Dismiss > > cycle is in *PictureBuffers*. VEA, OTOH, needs a cycle of > > *BitstreamBuffers*. > > Wow... confused you say? VDA::Client::DismissBitstreamBuffer, really? :P Ok I'm stupid, I read it as "VDA does have" instead of "VDA does not have". I'll go hide now.
https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:57: // buffers once this callback is made. On 2013/08/03 01:57:47, posciak wrote: > On 2013/08/03 01:31:03, sheu wrote: > > On 2013/07/31 23:01:12, Ami Fischman wrote: > > > This seems unfortunate in that it makes it higher-latency-than-necessary to > > > increase the number of buffers in use. > > > Why not have a VEA::Client::DismissBitstreamBuffer call, instead? > > > > VDA::Client::DismissBitstreamBuffer ended up being either ignored or just > > batch-used on all buffers in our VDA implementations. > > > > Not ignored anymore, since we have support for MSE/EME. > WebRTC/hangout might be switching resolutions/settings as well and possibly > quite often? > > It might make sense then to request/dismiss some pictures in the working set of > VideoFrames (I'm assuming after switch to VideoFrame there will be a constant > set of pre-alloced/inited VideoFrames, but not BitstreamBuffers, like in VDA > model and VEA::NotifyInputDone() would be equivalent to a Dismiss?). That said, > this seems like something that could be added later. > > On the other hand, we can't have a direct parallel to VDA here with output > BitstreamBufs, on config/knob change we might need to request larger output > buffers to be provided to us from that point? > > > In fact, all the client needs to do is to provide at least this many > outstanding > > buffers -- there's no restriction on the client providing more. So increasing > > the number of buffers in use is a non-issue. Pawel I think this comment was part of your confusion about our earlier conversation. If you have a proposal/point here still can you restate it? https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... 
File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:89: "duplicate buffer handle for GPU process"; This is the message that someone would get if they tried to Encode a VideoFrame that's not backed by shm (b/c frame->shared_memory_handle() would be null). Is that what you want? https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:165: if (iter == frame_map_.end()) { l.164, 165, and 171 can be collapsed into: if (frame_map_.erase(frame_id) != 1) { or if (!frame_map_.erase(frame_id)) { if you're feeling idiomatic. (saves LOCs and an unnecessary find()) https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... File content/common/gpu/gpu_messages.h (right): https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/gpu_messages.h:745: IPC_MESSAGE_ROUTED1(AcceleratedVideoEncoderHostMsg_NotifyEncodeDone, If this remains outside the VEA::Client interface then it should be doco'd here. I believe this means something like "Done with |frame_id|, feel free to use it for other purposes." and not something like "The bitstreambuffer representing the picture from |frame_id| has been emitted using BBR." I suspect, though, that this should actually be part of the VEA::Client API, with a comment like the above present _there_, and then no change in this file. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:63: void GpuVideoEncodeAccelerator::NotifyInputDone(int32 bitstream_buffer_id) { This is dead code (no callers). 
Which makes me wonder how the pipeline isn't stalling as the GVEAHost should be holding on to VideoFrames not releasing them, which should make the VideoCaptureBufferPool run out of buffers pretty quickly waiting for them to be returned from the renderer. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:63: void GpuVideoEncodeAccelerator::NotifyInputDone(int32 bitstream_buffer_id) { If this undeads, it probably wants to be renamed to NotifyEncodeDone. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:63: void GpuVideoEncodeAccelerator::NotifyInputDone(int32 bitstream_buffer_id) { s/bitstream_buffer_id/frame_id/ here and below. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:134: scoped_ptr<base::SharedMemory> shm( Can stack-allocate instead of heap-allocate. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:170: } How does this fall-through not completely break your testing? https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:172: NOTREACHED(); It's kind of silly to have a single-case switch statement. You could replace it with detecting != I420 in OnInitialize and error out early, and then have a DCHECK before you assume I420 for the first time in this method. (gets you a -4 de-indent for the bulk of this method) https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... 
content/common/gpu/media/gpu_video_encode_accelerator.cc:190: DLOG(ERROR) << "GpuVideoEncodeAccelerator::OnUseOutputBitstreamBuffer(): " nit: DLOG will emit the file:line pair for this; hard-coding class/method names seems silly (it's brittle as you found out here, it takes up your readers' mind-share, and it occupies screen real-estate) https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:29: // be created on any thread, but should subsequently be posted to (and Destroy() Since CreateAndInitializeVEA is one of the methods of this class, this sentence is logically equivalent to: It can be created on any thread but all other methods of the class must be called on a single thread. which I think is a lot easier to grok. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:30: // called on) the thread that calls CreateAndInitializeVEA(). Notifications s/Notifications returned by the encoder/Callbacks to RTCVideoEncoder/ https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:36: // stays long enough to properly shut down the VEA. I would emph the thread-separation more. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:44: // to whichever thread makes this call. Comment should explain it's only legal to call this once. (MessageLoop: y u no have a NewSoon like you have a DeleteSoon??) https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... 
content/renderer/media/rtc_video_encoder.cc:46: const webrtc::VideoCodec& codecSettings, s/codecSettings/codec_settings/ but I think it's a mistake to bake a webrtc type in this class which is otherwise webrtc-agnostic. Instead I think you should simply pass the width/height (as a gfx::Size) and bitrate as explicit params, since VideoCodec includes a lot more fields that you never inspect. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:50: const scoped_refptr<RendererGpuVideoAcceleratorFactories>& gpu_factories); This is known at ctor time so why not pass it in then? https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:57: void Destroy(); doco https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:62: const gfx::Size& input_input_coded_size, input_input https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:76: kOutputBufferCount = 3, per comment on VEA.h this seems like it should be a param to RequireBitstreamBuffers. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:85: void EncodeFrameFinished(int index); Doco what |index| is an index into. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:96: // The message loop on which to post notifications. s/notifications/callbacks to weak_encoder_/ https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:97: const scoped_refptr<base::MessageLoopProxy> encoder_message_loop_proxy_; Put this right above or below weak_encoder_? https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... 
content/renderer/media/rtc_video_encoder.cc:101: // |async_retval_| when we initialization completes, encoding completes, or
s/we//

content/renderer/media/rtc_video_encoder.cc:106: // The underling VEA to perform encoding on.
s/underling/underlying/ unless you've got your oppression on.

content/renderer/media/rtc_video_encoder.cc:118: gfx::Size output_frame_dimensions_;
Isn't this input_frame_visible_size_?

content/renderer/media/rtc_video_encoder.cc:128: DISALLOW_COPY_AND_ASSIGN(Impl);
s/COPY_AND_ASSIGN/IMPLICIT_CONSTRUCTORS/?

content/renderer/media/rtc_video_encoder.cc:156: if (bitrate <= 0) {
Signed integer overflow is undefined behavior. Better to:
  if (codecSettings.startBitrate > kint32max / 1000) {

content/renderer/media/rtc_video_encoder.cc:222: async_waiter_ = NULL;
This stanza is repeated frequently. Extract to helper. (probably worthwhile to do with the setters, too; i.e.: RegisterWaiter(retval, waiter); SignalWaiter(retval);)

content/renderer/media/rtc_video_encoder.cc:245: NOTIFY_ERROR(media::VideoEncodeAccelerator::kPlatformFailureError);
I thought the point of using a macro for NOTIFY_ERROR is that you could get the DLOG(ERROR) at the callsite; given that, DLOG(ERROR)'ing right before NOTIFY_ERROR()'ing seems unnecessary.

content/renderer/media/rtc_video_encoder.cc:251: input_buffers_free_.push_back(i);
could do this at l.248, fwiw.
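[Editor's note] The overflow check suggested at :156 (test before multiplying, instead of multiplying and testing the possibly-overflowed result) can be sketched as follows. This is a minimal illustration, not the actual patch: the helper name ConvertBitrateKbpsToBps and the constant kInt32Max are hypothetical stand-ins for the patch's inline code and Chromium's kint32max.

```cpp
#include <cassert>
#include <cstdint>

const int32_t kInt32Max = INT32_MAX;  // stand-in for Chromium's kint32max

// Returns true and writes |*out_bps| iff |bitrate_kbps| * 1000 fits in int32_t.
bool ConvertBitrateKbpsToBps(uint32_t bitrate_kbps, int32_t* out_bps) {
  // Check against the limit first: multiplying and then testing the result
  // would rely on signed overflow, which is undefined behavior.
  if (bitrate_kbps > static_cast<uint32_t>(kInt32Max / 1000))
    return false;
  *out_bps = static_cast<int32_t>(bitrate_kbps) * 1000;
  return true;
}
```

On failure the caller would report WEBRTC_VIDEO_CODEC_ERR_PARAMETER rather than proceed with a wrapped-around bitrate.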
content/renderer/media/rtc_video_encoder.cc:268: i, output_buffers_[i]->handle(), output_buffers_[i]->mapped_size()));
Oh, wow, I didn't realize https://chromiumcodereview.appspot.com/12537014/diff/151002/base/shared_memory.h happened and now we can query the size from the shm. I wonder if BitstreamBuffer::size_ can now be excised.

content/renderer/media/rtc_video_encoder.cc:285: NOTIFY_ERROR(media::VideoEncodeAccelerator::kInvalidArgumentError);
This is a platform error.

content/renderer/media/rtc_video_encoder.cc:321: retval = WEBRTC_VIDEO_CODEC_ERROR;
You could do a more specific job of mapping VEA's error codes to values from https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc... ...

content/renderer/media/rtc_video_encoder.cc:350: DCHECK(async_retval_);
drop these in favor of DCHECKs in the proposed SignalWaiter() impl

content/renderer/media/rtc_video_encoder.cc:352: const int index = input_buffers_free_.back();
why not pop_back right here?

content/renderer/media/rtc_video_encoder.cc:355: if (video_encoder_) {
why not reverse the test and early-return at the top of the method?

content/renderer/media/rtc_video_encoder.cc:357: // the encoder.
Can you avoid the copy in the case where the requirements are already met? (which will be the vast majority of the time as webcams capture 360p, 480p, 720p, or 1080p)

content/renderer/media/rtc_video_encoder.cc:385: }
Extract CopyPlaneWithStride() helper to avoid the 3x copy/pasta?

content/renderer/media/rtc_video_encoder.cc:425:
This might be a good place for a
//////////////////////////////////////////////////////////////////////////////
// Impl above, RTCVE below.
//////////////////////////////////////////////////////////////////////////////
separator or somesuch.

content/renderer/media/rtc_video_encoder.cc:425:
#undef NOTIFY_ERROR
(I care because N_E has threading assumptions, so if a method of RTCVE was to call it it would mean something different from an Impl method calling it, so)

content/renderer/media/rtc_video_encoder.cc:457: int32_t initialization_retval = WEBRTC_VIDEO_CODEC_UNINITIALIZED;
any reason not to use impl_status_ for this? (here and elsewhere)

content/renderer/media/rtc_video_encoder.cc:478: // DCHECK(thread_checker_.CalledOnValidThread());
uncomment

content/renderer/media/rtc_video_encoder.cc:524: impl_status_ = WEBRTC_VIDEO_CODEC_OK;
s/OK/UNINITIALIZED?

content/renderer/media/rtc_video_encoder.cc:536: int32_t RTCVideoEncoder::SetRates(uint32_t new_bit_rate, uint32_t frame_rate) {
frame_rate is ignored; should that be noted somehow?
content/renderer/media/rtc_video_encoder.cc:564: if (!impl_)
Impossible given l.558.

File content/renderer/media/rtc_video_encoder.h (right):

content/renderer/media/rtc_video_encoder.h:31: // trampolined to a private RTCVideoEncoder::Impl instance. The class runs on
"The class" is ambiguous in this sentence.

content/renderer/media/rtc_video_encoder.h:31: // trampolined to a private RTCVideoEncoder::Impl instance. The class runs on
Now that I understand that you're using Impl as a way to isolate RTCVE & Impl to their respective threads, I love this design. WDYT of s/Impl/ImplOnMediaThread/ to make other people come to love this design more easily?

content/renderer/media/rtc_video_encoder.h:34: // the thread that the RTCVideoEncoder constructor is called on.
Comment could be clearer about the fact that RTCVE is created, called, and destroyed exclusively on the libjingle worker thread.

content/renderer/media/rtc_video_encoder.h:66: void NotifyError(const scoped_refptr<Impl>& impl, int32_t error);
newline

content/renderer/media/rtc_video_encoder.h:76: // Message loop that |impl_| runs on; queried from |gpu_factories_|.
given this is always equal to gpu_factories_->GetMessageLoop() why bother having a local copy here?

File media/video/video_encode_accelerator.h (right):
media/video/video_encode_accelerator.h:62: // may need to hold onto some subset of inputs as reference pictures.
"provide at least this many frames" sounds like there's a VEA::ProvideVideoFrame() call of some sort, which there isn't. I believe what this really is is "encoding latency" - the number of frames that must be fed to the encoder to guarantee that at least one frame's output is seen in a bitstreambuffer (and subsequently that that frame is released). Is this a param that could be returned in NotifyInitializeDone? Because returning it this late is going to be tricky to wire back to VideoCaptureBufferPool in the browser process.

media/video/video_encode_accelerator.h:67: // input VideoFrame to Encode() should be padded appropriately.
I love this comment. I wish all comments were like this.

media/video/video_encode_accelerator.h:69: // encoder in bytes.
How does the client know how many buffers of this size to provide?

media/video/video_encode_accelerator.h:78: // |key_frame| is true if this delivered frame is a keyframe.
I think you missed my comment:
On 2013/07/31 23:01:13, Ami Fischman wrote:
> "disclaims use and ownership" is blort.
> What is true is that the VEA promises not to use the BB from the time this
> method is called until VEA::UOBB is called again. I think the comment could do
> a better job of communicating this.

media/video/video_encode_accelerator.h:86: protected:
If you want to be providing a service here then make this private and add a comment about why. If you don't want to take the active voice about who should be deleting the client then just remove the dtor.

media/video/video_encode_accelerator.h:96: // reported by VideoFrame::format() for frames passed to Encode()).
On 2013/08/03 01:31:03, sheu wrote:
> On 2013/07/31 23:01:13, Ami Fischman wrote:
> > It is unfortunate that this needs to be specified at the head of the stream for
> > variable-resolution sources (such as tab capture when the tab is resized,
> > someday soon). Can this become dynamic?
>
> Unfortunately, we'd best tear down and reinstantiate a VEA at this point.
> Exynos HW does not support resolution changes midstream.
>
> The webrtc::VideoEncoder interface also doesn't provide for resolution changing,
> so I think we're good here.
I don't think that's right. Here are two examples of webrtc encoder impls that do handle mid-stream frame size changes:
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc...
https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc...

media/video/video_encode_accelerator.h:116: // encoded output.
This sounds one-off'ish, but in reality must be called for each BB fed to NotifyBitstreamBufferReady, right? I think it would be useful to say that explicitly in this comment.

media/video/video_encode_accelerator.h:128: // immediately and the component is freed. This call may asynchornously free
typo: asynchornously
Another day, another revision. fischman@, posciak@: PTAL

File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right):

content/common/gpu/client/gpu_video_encode_accelerator_host.cc:89: "duplicate buffer handle for GPU process";
On 2013/08/05 18:44:38, Ami Fischman wrote:
> This is the message that someone would get if they tried to Encode a VideoFrame
> that's not backed by shm (b/c frame->shared_memory_handle() would be null). Is
> that what you want?
Good point.

content/common/gpu/client/gpu_video_encode_accelerator_host.cc:165: if (iter == frame_map_.end()) {
On 2013/08/05 18:44:38, Ami Fischman wrote:
> l.164, 165, and 171 can be collapsed into:
> if (frame_map_.erase(frame_id) != 1) {
> or
> if (!frame_map_.erase(frame_id)) {
> if you're feeling idiomatic.
> (saves LOCs and an unnecessary find())
Done.

File content/common/gpu/gpu_messages.h (right):

content/common/gpu/gpu_messages.h:745: IPC_MESSAGE_ROUTED1(AcceleratedVideoEncoderHostMsg_NotifyEncodeDone,
On 2013/08/05 18:44:38, Ami Fischman wrote:
> If this remains outside the VEA::Client interface then it should be doco'd here.
> I believe this means something like "Done with |frame_id|, feel free to use it
> for other purposes." and not something like "The bitstreambuffer representing
> the picture from |frame_id| has been emitted using BBR."
> I suspect, though, that this should actually be part of the VEA::Client API,
> with a comment like the above present _there_, and then no change in this file.
As part of the move to media::VideoFrame: since VideoFrame has its own destructor callback, the VEA::Client API does not need an explicit EncodeDone(), but we still need to communicate that across IPC. I'll add notes here documenting this wrinkle. Actually, I'll just add the boilerplate documentation to the rest of the IPCs too.

File content/common/gpu/media/gpu_video_encode_accelerator.cc (right):

content/common/gpu/media/gpu_video_encode_accelerator.cc:63: void GpuVideoEncodeAccelerator::NotifyInputDone(int32 bitstream_buffer_id) {
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/bitstream_buffer_id/frame_id/ here and below.
(see comment below, which I posted when uploading this)

content/common/gpu/media/gpu_video_encode_accelerator.cc:134: scoped_ptr<base::SharedMemory> shm(
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Can stack-allocate instead of heap-allocate.
It gets passed to EncodeFrameFinished. Stack-allocating would be interesting.

content/common/gpu/media/gpu_video_encode_accelerator.cc:170: }
On 2013/08/05 18:44:38, Ami Fischman wrote:
> How does this fall-through not completely break your testing?
Easy -- I didn't test :-P Fixed now that I actually tested.

content/common/gpu/media/gpu_video_encode_accelerator.cc:172: NOTREACHED();
On 2013/08/05 18:44:38, Ami Fischman wrote:
> It's kind of silly to have a single-case switch statement.
> You could replace it with detecting != I420 in OnInitialize and error out early,
> and then have a DCHECK before you assume I420 for the first time in this method.
> (gets you a -4 de-indent for the bulk of this method)
It's silly, but I intend to be taking more formats in the future, so I think this is a nice self-documenting way of saying "future format support goes here".

content/common/gpu/media/gpu_video_encode_accelerator.cc:190: DLOG(ERROR) << "GpuVideoEncodeAccelerator::OnUseOutputBitstreamBuffer(): "
On 2013/08/05 18:44:38, Ami Fischman wrote:
> nit: DLOG will emit the file:line pair for this; hard-coding class/method names
> seems silly (it's brittle as you found out here, it takes up your readers'
> mind-share, and it occupies screen real-estate)
I suppose. Search-replace time.

File content/renderer/media/rtc_video_encoder.cc (right):

content/renderer/media/rtc_video_encoder.cc:29: // be created on any thread, but should subsequently be posted to (and Destroy()
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Since CreateAndInitializeVEA is one of the methods of this class, this sentence
> is logically equivalent to:
> It can be created on any thread but all other methods of the class must be
> called on a single thread.
> which I think is a lot easier to grok.
Done.

content/renderer/media/rtc_video_encoder.cc:30: // called on) the thread that calls CreateAndInitializeVEA(). Notifications
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/Notifications returned by the encoder/Callbacks to RTCVideoEncoder/
Done.

content/renderer/media/rtc_video_encoder.cc:36: // stays long enough to properly shut down the VEA.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> I would emph the thread-separation more.
Done.
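[Editor's note] The :134 exchange above (why the base::SharedMemory is heap-allocated and handed to EncodeFrameFinished) is an ownership-lifetime question: the mapping must outlive the caller's stack frame until the asynchronous callback fires. A minimal sketch, where Mapping and PendingCallback are hypothetical stand-ins for base::SharedMemory and the in-flight frame bookkeeping:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Stand-in for base::SharedMemory: an owned, non-copyable resource.
struct Mapping {
  explicit Mapping(int id) : id(id) {}
  int id;
};

// An in-flight frame holds the mapping alive until the encoder is done.
struct PendingCallback {
  std::unique_ptr<Mapping> mapping;
};

std::vector<PendingCallback> g_pending;

void Encode(std::unique_ptr<Mapping> mapping) {
  // A stack-allocated Mapping would die when Encode() returns, while the
  // hardware encoder may still be reading it; instead, ownership moves
  // into the queue of in-flight frames.
  g_pending.push_back(PendingCallback{std::move(mapping)});
}

int EncodeFrameFinished() {
  // The mapping (and the shm it models) is freed only now.
  int id = g_pending.back().mapping->id;
  g_pending.pop_back();
  return id;
}
```

This is why "stack-allocating would be interesting": the resource's lifetime is tied to the pending callback, not the call site.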
content/renderer/media/rtc_video_encoder.cc:44: // to whichever thread makes this call.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Comment should explain it's only legal to call this once.
> (MessageLoop: y u no have a NewSoon like you have a DeleteSoon??)
Done.

content/renderer/media/rtc_video_encoder.cc:46: const webrtc::VideoCodec& codecSettings,
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/codecSettings/codec_settings/
> but I think it's a mistake to bake a webrtc type in this class which is
> otherwise webrtc-agnostic.
> Instead I think you should simply pass the width/height (as a gfx::Size) and
> bitrate as explicit params, since VideoCodec includes a lot more fields that you
> never inspect.
Sure, makes sense.

content/renderer/media/rtc_video_encoder.cc:50: const scoped_refptr<RendererGpuVideoAcceleratorFactories>& gpu_factories);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> This is known at ctor time so why not pass it in then?
Done.

content/renderer/media/rtc_video_encoder.cc:57: void Destroy();
On 2013/08/05 18:44:38, Ami Fischman wrote:
> doco
Done.

content/renderer/media/rtc_video_encoder.cc:62: const gfx::Size& input_input_coded_size,
On 2013/08/05 18:44:38, Ami Fischman wrote:
> input_input
Done.

content/renderer/media/rtc_video_encoder.cc:76: kOutputBufferCount = 3,
On 2013/08/05 18:44:38, Ami Fischman wrote:
> per comment on VEA.h this seems like it should be a param to
> RequireBitstreamBuffers.
(see VEA)

content/renderer/media/rtc_video_encoder.cc:85: void EncodeFrameFinished(int index);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Doco what |index| is an index into.
Done.

content/renderer/media/rtc_video_encoder.cc:96: // The message loop on which to post notifications.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/notifications/callbacks to weak_encoder_/
Done.

content/renderer/media/rtc_video_encoder.cc:97: const scoped_refptr<base::MessageLoopProxy> encoder_message_loop_proxy_;
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Put this right above or below weak_encoder_?
Done.

content/renderer/media/rtc_video_encoder.cc:101: // |async_retval_| when we initialization completes, encoding completes, or
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/we//
Done.

content/renderer/media/rtc_video_encoder.cc:106: // The underling VEA to perform encoding on.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/underling/underlying/ unless you've got your oppression on.
we don't recognize an oppressor, we're an anarcho-syndicalist commune

content/renderer/media/rtc_video_encoder.cc:118: gfx::Size output_frame_dimensions_;
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Isn't this input_frame_visible_size_?
Yeah, I was on the fence about renaming this. If you're pointing it out too then I should rename.
content/renderer/media/rtc_video_encoder.cc:128: DISALLOW_COPY_AND_ASSIGN(Impl);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/COPY_AND_ASSIGN/IMPLICIT_CONSTRUCTORS/?
DISALLOW_COPY_AND_ASSIGN is my reflexive weapon of choice. Since we already have a defined constructor I don't think that is necessary.

content/renderer/media/rtc_video_encoder.cc:156: if (bitrate <= 0) {
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Signed integer overflow is undefined behavior.
> Better to:
> if (codecSettings.startBitrate > kint32max / 1000) {
I was hoping to get something more generic with safe_numerics, but didn't find a good solution. I guess I can run with kint32max.

content/renderer/media/rtc_video_encoder.cc:222: async_waiter_ = NULL;
On 2013/08/05 18:44:38, Ami Fischman wrote:
> This stanza is repeated frequently. Extract to helper.
> (probably worthwhile to do with the setters, too; i.e.:
> RegisterWaiter(retval, waiter);
> SignalWaiter(retval);
> )
Done.

content/renderer/media/rtc_video_encoder.cc:245: NOTIFY_ERROR(media::VideoEncodeAccelerator::kPlatformFailureError);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> I thought the point of using a macro for NOTIFY_ERROR is that you could get the
> DLOG(ERROR) at the callsite; given that, DLOG(ERROR)'ing right before
> NOTIFY_ERROR()'ing seems unnecessary.
I'd like to see exactly what the parameters were for the particular error condition.

content/renderer/media/rtc_video_encoder.cc:251: input_buffers_free_.push_back(i);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> could do this at l.248, fwiw.
Done.
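[Editor's note] The RegisterWaiter()/SignalWaiter() pair discussed at :222 could look roughly like the following. This is a sketch only: the member names mirror the review discussion, and the condition-variable-based Waiter is a hypothetical stand-in for Chromium's base::WaitableEvent.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// One-shot event, standing in for base::WaitableEvent.
class Waiter {
 public:
  void Wait() {
    std::unique_lock<std::mutex> lock(m_);
    cv_.wait(lock, [this] { return signaled_; });
  }
  void Signal() {
    std::lock_guard<std::mutex> lock(m_);
    signaled_ = true;
    cv_.notify_one();
  }

 private:
  std::mutex m_;
  std::condition_variable cv_;
  bool signaled_ = false;
};

class Impl {
 public:
  // Called on the webrtc thread before posting work to the encoder thread:
  // registers where to store the async return value and what to signal.
  void RegisterWaiter(int* retval, Waiter* waiter) {
    assert(!async_waiter_);  // only one outstanding waiter at a time
    async_retval_ = retval;
    async_waiter_ = waiter;
  }

  // Called on the encoder thread when the async operation completes:
  // publishes the return value, wakes the waiter, and clears the stanza
  // that was previously repeated inline at each completion site.
  void SignalWaiter(int retval) {
    assert(async_waiter_);
    *async_retval_ = retval;
    async_waiter_->Signal();
    async_retval_ = nullptr;
    async_waiter_ = nullptr;
  }

 private:
  int* async_retval_ = nullptr;
  Waiter* async_waiter_ = nullptr;
};
```

The DCHECKs the review asks to drop at the call sites (:350) move into SignalWaiter(), so every completion path gets the same sanity check for free.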
content/renderer/media/rtc_video_encoder.cc:268: i, output_buffers_[i]->handle(), output_buffers_[i]->mapped_size()));
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Oh, wow, I didn't realize
> https://chromiumcodereview.appspot.com/12537014/diff/151002/base/shared_memory.h
> happened and now we can query the size from the shm. I wonder if
> BitstreamBuffer::size_ can now be excised.
Don't think so. BitstreamBuffer comes across as a base::SharedMemoryHandle, so we don't get a size for it.

content/renderer/media/rtc_video_encoder.cc:285: NOTIFY_ERROR(media::VideoEncodeAccelerator::kInvalidArgumentError);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> This is a platform error.
Is it? It's an invalid argument coming back over IPC, so I think this qualifies.

content/renderer/media/rtc_video_encoder.cc:321: retval = WEBRTC_VIDEO_CODEC_ERROR;
On 2013/08/05 18:44:38, Ami Fischman wrote:
> You could do a more specific job of mapping VEA's error codes to values from
> https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc...
> ...
Gave it a shot. Tell me what you think.

content/renderer/media/rtc_video_encoder.cc:350: DCHECK(async_retval_);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> drop these in favor of DCHECKs in the proposed SignalWaiter() impl
Done.

content/renderer/media/rtc_video_encoder.cc:352: const int index = input_buffers_free_.back();
On 2013/08/05 18:44:38, Ami Fischman wrote:
> why not pop_back right here?
I tend to get in the habit of only destructively popping queues, etc. at the very end of things, since in other (larger struct) cases I'd actually refer to head/tail with a reference. Plus it's nice not to touch input state in an error condition. So, habit.

content/renderer/media/rtc_video_encoder.cc:355: if (video_encoder_) {
On 2013/08/05 18:44:38, Ami Fischman wrote:
> why not reverse the test and early-return at the top of the method?
We'd still like to consume the buffers and signal below.

content/renderer/media/rtc_video_encoder.cc:357: // the encoder.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Can you avoid the copy in the case where the requirements are already met?
> (which will be the vast majority of the time as webcams capture 360p, 480p,
> 720p, or 1080p)
That will require support for passing SharedMemory through I420VideoFrame, which comes Some Time Later (tm).

content/renderer/media/rtc_video_encoder.cc:385: }
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Extract CopyPlaneWithStride() helper to avoid the 3x copy/pasta?
Tried it. The copypasta just gets a little more confusing.

content/renderer/media/rtc_video_encoder.cc:425:
On 2013/08/05 18:44:38, Ami Fischman wrote:
> #undef NOTIFY_ERROR
> (I care because N_E has threading assumptions, so if a method of RTCVE was to
> call it it would mean something different from an Impl method calling it, so)
Done.

content/renderer/media/rtc_video_encoder.cc:425:
On 2013/08/05 18:44:38, Ami Fischman wrote:
> This might be a good place for a
> //////////////////////////////////////////////////////////////////////////////
> // Impl above, RTCVE below.
> //////////////////////////////////////////////////////////////////////////////
> separator or somesuch.
Done.
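[Editor's note] For reference, the CopyPlaneWithStride() helper proposed at :385 would be along these lines. This is a sketch of the generic plane-copy pattern, not the actual patch code: it copies only the visible width of each row while honoring possibly different source and destination strides.

```cpp
#include <cstdint>
#include <cstring>

// Copy |rows| rows of |width| visible bytes from a source plane laid out
// with |src_stride| bytes per row into a destination plane laid out with
// |dst_stride| bytes per row. Both strides must be >= |width|.
void CopyPlaneWithStride(const uint8_t* src, int src_stride,
                         uint8_t* dst, int dst_stride,
                         int width, int rows) {
  for (int i = 0; i < rows; ++i) {
    memcpy(dst, src, width);  // only the visible bytes; padding is skipped
    src += src_stride;
    dst += dst_stride;
  }
}
```

Called once each for the Y, U, and V planes (with the chroma planes at half width/height for I420), it replaces the three hand-rolled copy loops the review calls "3x copy/pasta".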
content/renderer/media/rtc_video_encoder.cc:457: int32_t initialization_retval = WEBRTC_VIDEO_CODEC_UNINITIALIZED;
On 2013/08/05 18:44:38, Ami Fischman wrote:
> any reason not to use impl_status_ for this?
> (here and elsewhere)
impl_status_ belongs on another thread.

content/renderer/media/rtc_video_encoder.cc:478: // DCHECK(thread_checker_.CalledOnValidThread());
On 2013/08/05 18:44:38, Ami Fischman wrote:
> uncomment
This fails presently for reasons unknown that I still have to dig up with the WebRTC folks. So, it's in progress.

content/renderer/media/rtc_video_encoder.cc:524: impl_status_ = WEBRTC_VIDEO_CODEC_OK;
On 2013/08/05 18:44:38, Ami Fischman wrote:
> s/OK/UNINITIALIZED?
Done.

content/renderer/media/rtc_video_encoder.cc:536: int32_t RTCVideoEncoder::SetRates(uint32_t new_bit_rate, uint32_t frame_rate) {
On 2013/08/05 18:44:38, Ami Fischman wrote:
> frame_rate is ignored; should that be noted somehow?
Done.

content/renderer/media/rtc_video_encoder.cc:564: if (!impl_)
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Impossible given l.558.
Done.

File content/renderer/media/rtc_video_encoder.h (right):

content/renderer/media/rtc_video_encoder.h:31: // trampolined to a private RTCVideoEncoder::Impl instance. The class runs on
On 2013/08/05 18:44:38, Ami Fischman wrote:
> "The class" is ambiguous in this sentence.
Done.

content/renderer/media/rtc_video_encoder.h:31: // trampolined to a private RTCVideoEncoder::Impl instance. The class runs on
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Now that I understand that you're using Impl as a way to isolate RTCVE & Impl to
> their respective threads, I love this design.
> WDYT of s/Impl/ImplOnMediaThread/ to make other people come to love this design
> more easily?
TBH that sounds a little excessive? I haven't seen that convention elsewhere.

content/renderer/media/rtc_video_encoder.h:34: // the thread that the RTCVideoEncoder constructor is called on.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> Comment could be clearer about the fact that RTCVE is created, called, and
> destroyed exclusively on the libjingle worker thread.
Done.

content/renderer/media/rtc_video_encoder.h:66: void NotifyError(const scoped_refptr<Impl>& impl, int32_t error);
On 2013/08/05 18:44:38, Ami Fischman wrote:
> newline
Done.

content/renderer/media/rtc_video_encoder.h:76: // Message loop that |impl_| runs on; queried from |gpu_factories_|.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> given this is always equal to gpu_factories_->GetMessageLoop() why bother having
> a local copy here?
To save some typing and ref/unrefs? I guess I can remove it.

File media/video/video_encode_accelerator.h (right):

media/video/video_encode_accelerator.h:62: // may need to hold onto some subset of inputs as reference pictures.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> "provide at least this many frames" sounds like there's a
> VEA::ProvideVideoFrame() call of some sort, which there isn't.
> I believe what this really is is "encoding latency" - the number of frames that
> must be fed to the encoder to guarantee that at least one frame's output is seen
> in a bitstreambuffer (and subsequently that that frame is released).
> Is this a param that could be returned in NotifyInitializeDone? Because
> returning it this late is going to be tricky to wire back to
> VideoCaptureBufferPool in the browser process.
Done (comment)

RequireBitstreamBuffers() is called to return the input_count as soon as it can, so I don't see how wiring it into NotifyInitializeDone() would help matters. In fact, in EVDA right now we return this callback immediately, since the first stage in our input pipe is GSC, which has no input_count limitations.

I'm thinking ahead to when we start optimizing the encode path, and I think for something as simple as I420->NV12 swizzling it would possibly be more effective to do this on the CPU with libyuv, during the copy step in GpuVEA. In that case, we'd be piping direct to MFC and then we would have to report the MFC input_count limitation, which can't be queried until we have output buffers queued by UseOutputBitstreamBuffer(), which can't happen until we return a RequireBitstreamBuffers() -- yes, it's a catch-22, but not one we have to deal with presently, and in any case we're tracking this with the Exynos vendor driver guys. In any case, I think it makes sense to return NotifyInitializeDone() separately from this.

media/video/video_encode_accelerator.h:69: // encoder in bytes.
On 2013/08/05 18:44:38, Ami Fischman wrote:
> How does the client know how many buffers of this size to provide?
It can provide as many or few as it wants, > 0.
https://chromiumcodereview.appspot.com/20632002/diff/157004/media/video/video... media/video/video_encode_accelerator.h:78: // |key_frame| is true if this delivered frame is a keyframe. On 2013/08/05 18:44:38, Ami Fischman wrote: > I think you missed my comment: > On 2013/07/31 23:01:13, Ami Fischman wrote: > > "disclaims use and ownership" is blort. > > What is true is that the VEA promises not to use the BB from the time this > > method is called until VEA::UOBB is called again. I think the comment could > do > > a better job of communicating this. > Done. https://chromiumcodereview.appspot.com/20632002/diff/157004/media/video/video... media/video/video_encode_accelerator.h:86: protected: On 2013/08/05 18:44:38, Ami Fischman wrote: > If you want to be providing a service here then make this private and add a > comment about why. If you don't want to take the active voice about who should > be deleting the client then just remove the dtor. Done. Has to be protected though or nothing can inherit from it. https://chromiumcodereview.appspot.com/20632002/diff/157004/media/video/video... media/video/video_encode_accelerator.h:96: // reported by VideoFrame::format() for frames passed to Encode()). On 2013/08/05 18:44:38, Ami Fischman wrote: > On 2013/08/03 01:31:03, sheu wrote: > > On 2013/07/31 23:01:13, Ami Fischman wrote: > > > It is unfortunate that this needs to be specified at the head of the stream > > for > > > variable-resolution sources (such as tab capture when the tab is resized, > > > someday soon). Can this become dynamic? > > > > Unfortunately, we'd best tear down and reinstantiate a VEA at this point. > > Exynos HW does not support resolution changes midstream. > > > > The webrtc::VideoEncoder interface also doesn't provide for resolution > changing, > > so I think we're good here. > > I don't think that's right. 
Here are two examples of webrtc encoder impls that > do handle mid-stream frame size changes: > https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc... > https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc... Oh. Hmm. In that case we should just have RTCVideoEncoder tear down and restart the encoder, since EVDA can't handle it anyways. Can we worry about this a bit later? pretty please https://chromiumcodereview.appspot.com/20632002/diff/157004/media/video/video... media/video/video_encode_accelerator.h:116: // encoded output. On 2013/08/05 18:44:38, Ami Fischman wrote: > This sounds one-off'ish, but in reality must be called for each BB fed to > NotifyBitstreamBufferReady, right? > I think it would be useful to say that explicitly in this comment. Done. https://chromiumcodereview.appspot.com/20632002/diff/157004/media/video/video... media/video/video_encode_accelerator.h:128: // immediately and the component is freed. This call may asynchornously free On 2013/08/05 18:44:38, Ami Fischman wrote: > typo: asynchornously Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:376: dst += width; Teehee The benefits of not testing
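The RTCVideoEncoder/Impl split discussed near the top of this thread (the public object bound to one thread, trampolining every call to an Impl bound to the media thread) can be sketched with std::thread and a hand-rolled task queue standing in for Chromium's message loops. Everything here is illustrative, not the real classes.

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal single-thread task runner: tasks run in FIFO order on |worker_|.
class TaskRunner {
 public:
  TaskRunner() : worker_([this] { Run(); }) {}
  ~TaskRunner() {
    PostTask([this] { done_ = true; });  // Drains queued tasks, then stops.
    worker_.join();
  }
  void PostTask(std::function<void()> task) {
    std::lock_guard<std::mutex> lock(lock_);
    tasks_.push(std::move(task));
    cond_.notify_one();
  }

 private:
  void Run() {
    while (!done_) {
      std::function<void()> task;
      {
        std::unique_lock<std::mutex> lock(lock_);
        cond_.wait(lock, [this] { return !tasks_.empty(); });
        task = std::move(tasks_.front());
        tasks_.pop();
      }
      task();
    }
  }
  std::mutex lock_;
  std::condition_variable cond_;
  std::queue<std::function<void()>> tasks_;
  bool done_ = false;
  std::thread worker_;  // Declared last so it starts after other members.
};

// The Impl only ever runs on the media thread's TaskRunner.
class Impl {
 public:
  void Encode(int frame_id) { last_frame_ = frame_id; ++frames_; }
  int frames() const { return frames_; }
  int last_frame() const { return last_frame_; }

 private:
  int last_frame_ = 0;
  int frames_ = 0;
};

// The public object trampolines each call to the Impl's thread.
class Encoder {
 public:
  explicit Encoder(TaskRunner* media_thread) : media_thread_(media_thread) {}
  void Encode(int frame_id) {
    media_thread_->PostTask([this, frame_id] { impl_.Encode(frame_id); });
  }
  Impl* impl() { return &impl_; }

 private:
  TaskRunner* media_thread_;
  Impl impl_;
};
```

The isolation property is the same one praised in the review: Impl state is only touched from the media thread, so neither side needs locking around its own members.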
https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:57: // buffers once this callback is made. On 2013/08/05 18:44:38, Ami Fischman wrote: > On 2013/08/03 01:57:47, posciak wrote: > > On 2013/08/03 01:31:03, sheu wrote: > > > On 2013/07/31 23:01:12, Ami Fischman wrote: > > > > This seems unfortunate in that it makes it higher-latency-than-necessary > to > > > > increase the number of buffers in use. > > > > Why not have a VEA::Client::DismissBitstreamBuffer call, instead? > > > > > > VDA::Client::DismissBitstreamBuffer ended up being either ignored or just > > > batch-used on all buffers in our VDA implementations. > > > > > > > Not ignored anymore, since we have support for MSE/EME. > > WebRTC/hangout might be switching resolutions/settings as well and possibly > > quite often? > > > > It might make sense then to request/dismiss some pictures in the working set > of > > VideoFrames (I'm assuming after switch to VideoFrame there will be a constant > > set of pre-alloced/inited VideoFrames, but not BitstreamBuffers, like in VDA > > model and VEA::NotifyInputDone() would be equivalent to a Dismiss?). That > said, > > this seems like something that could be added later. > > > > On the other hand, we can't have a direct parallel to VDA here with output > > BitstreamBufs, on config/knob change we might need to request larger output > > buffers to be provided to us from that point? > > > > > In fact, all the client needs to do is to provide at least this many > > outstanding > > > buffers -- there's no restriction on the client providing more. So > increasing > > > the number of buffers in use is a non-issue. > > Pawel I think this comment was part of your confusion about our earlier > conversation. If you have a proposal/point here still can you restate it? 
Sorry, my point was how we want to handle a change of resolution initiated by the client. This should be a common Hangouts scenario, where WebRTC switches resolution, requests a new one from the capturer, and needs to update the encoder as well. Unless we want to have a destroy/create/reinit cycle (which I'm not strongly saying is not ok), DPB() would be useful.
On Mon, Aug 5, 2013 at 11:47 PM, <posciak@chromium.org> wrote: > Sorry, my point was how do we want to handle change of resolution > initiated by client. > Resolved in offline thread.
https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/157004/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:172: NOTREACHED(); On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > It's kind of silly to have a single-case switch statement. > > You could replace it with detecting != I420 in OnInitialize and error out > early, > > and then have a DCHECK before you assume I420 for the first time in this > method. > > (gets you a -4 de-indent for the bulk of this method) > > It's silly but I intend to be taking more formats in the future, so I think this > is a nice self-documenting way of saying "future format support goes here". If this alleged future is in less than 30d, then ok. Otherwise I call BS. https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... content/renderer/media/rtc_video_encoder.cc:245: NOTIFY_ERROR(media::VideoEncodeAccelerator::kPlatformFailureError); On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > I thought the point of using a macro for NOTIFY_ERROR is that you could get > the > > DLOG(ERROR) at the callsite; given that, DLOG(ERROR)'ing right before > > NOTIFY_ERROR()'ing seems unnecessary. > > I'd like to see exactly what the parameters were for the particular error condition. Then you might enjoy putting them in the NOTIFY_ERROR itself, as was done in https://code.google.com/p/chromium/codesearch#chromium/src/content/common/gpu... (note the << usage of log) https://chromiumcodereview.appspot.com/20632002/diff/157004/content/renderer/... 
content/renderer/media/rtc_video_encoder.cc:352: const int index = input_buffers_free_.back(); On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > why not pop_back right here? > > I tend to get in the habit of only destructively popping queues, etc. at the > very end of things, since in other (larger struct) cases I'd actually refer to > head/tail with a reference. Plus it's nice not to touch input state in an error > condition. > > So habit. FWIW I consider that a dangerous habit and would encourage you to rethink it. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/common/gp... File content/common/gpu/gpu_messages.h (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/content/common/gp... content/common/gpu/gpu_messages.h:749: // Notify the renderer that encoding has been completed on an input buffer. really? I thought this notification was that the frame has been queued for encoding, not that encoding was done (as visible by BitstreamBufferReady). https://chromiumcodereview.appspot.com/20632002/diff/190001/content/common/gp... content/common/gpu/gpu_messages.h:750: // There is not congruent entry point in the media::VideoEncodeAccelerator s/congruent/congruent to/ https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:42: explicit Impl( explicit no longer necessary https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:47: // and then the instance is then bound forevermore to whichever thread made s/is then/is/ https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:49: // WebRTC expects this call to be synchronous. 
This sentence is out of place here considering this could well be asynchronous given the async_* params. Ditto for the other methods below that say the same thing. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:64: void Destroy(); On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > doco > > Done. No you didn't. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:294: NOTIFY_ERROR(media::VideoEncodeAccelerator::kInvalidArgumentError); On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > This is a platform error. > > Is it? It's an invalid argument coming back over IPC, so I think this > qualifies. No, it's an invalid argument coming from the VEA impl, not IPC, and is thus part of the "platform". https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:333: retval = WEBRTC_VIDEO_CODEC_MEMORY; I guess s/MEMORY/ERROR/ (better to be vague than incorrectly precise) https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:365: if (video_encoder_) { On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > why not reverse the test and early-return at the top of the method? > > We'd still like to consume the buffers and signal below. Isn't the negation an error condition? https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:367: // the encoder. Please add a TODO pointing to a crbug for the remaining work. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... 
content/renderer/media/rtc_video_encoder.cc:456: #undef NOTIFY_ERROR go above separator https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:488: int32_t initialization_retval = WEBRTC_VIDEO_CODEC_UNINITIALIZED; On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > any reason not to use impl_status_ for this? > > (here and elsewhere) > > impl_status_ belongs on another thread. What does that mean? https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:509: // DCHECK(thread_checker_.CalledOnValidThread()); On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > uncomment > > This fails presently for reasons unknown that I still have to dig up with the > WebRTC folks. So, it's in progress. Big-ass TODO please, then. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:621: if (!impl_) impossible given l.623 (and the fact that the only caller of this function passes it a |this| for |impl|). https://chromiumcodereview.appspot.com/20632002/diff/190001/media/video/video... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/media/video/video... media/video/video_encode_accelerator.h:70: // encoder in bytes. On 2013/08/06 06:16:36, sheu wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > How does the client know how many buffers of this size to provide? > > It can provide as many or few as it wants, > 0. 
crbug please (Note that this isn't pie-in-the-sky - it'll be here in a week or two hopefully.) https://chromiumcodereview.appspot.com/20632002/diff/190001/media/video/video... media/video/video_encode_accelerator.h:122: // BistreamBufferReady(). s/./, so call it each time the BitstreamBuffer is ready for re-use./
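The "<< usage of log" pattern Ami points to above, where the caller streams the offending parameters straight into the error macro so the DLOG and NotifyError() stay at one callsite, can be sketched standalone like this. This is not Chromium's actual macro; the globals and the int error code are illustrative stand-ins for DLOG(ERROR) and VideoEncodeAccelerator::Error.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Stand-ins for the real logging sink and Client::NotifyError().
static std::string g_last_log;
static int g_last_error = 0;

static void NotifyError(int error) { g_last_error = error; }

// |log| may itself contain << chains, e.g. "bad index " << index, because
// the macro splices it into an ostream expression before notifying.
#define NOTIFY_ERROR(error, log)            \
  do {                                      \
    std::ostringstream oss;                 \
    oss << log; /* DLOG(ERROR) << log; */   \
    g_last_log = oss.str();                 \
    NotifyError(error);                     \
  } while (0)
```

A caller can then write `NOTIFY_ERROR(kInvalidArgumentError, "invalid buffer id " << id)` and get both the contextual log line and the notification from a single statement, which is the de-duplication the review comment is asking for.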
You are in a maze of twisty little comments, all alike. Anyways things work now; I can see stuff on Chromecast (and with force IDR support, the quality isn't bad). Modulo a little more debugging I suspect is on the kernel side. https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/26001/media/video/video_... media/video/video_encode_accelerator.h:57: // buffers once this callback is made. On 2013/08/06 06:47:33, posciak wrote: > On 2013/08/05 18:44:38, Ami Fischman wrote: > > On 2013/08/03 01:57:47, posciak wrote: > > > On 2013/08/03 01:31:03, sheu wrote: > > > > On 2013/07/31 23:01:12, Ami Fischman wrote: > > > > > This seems unfortunate in that it makes it higher-latency-than-necessary > > to > > > > > increase the number of buffers in use. > > > > > Why not have a VEA::Client::DismissBitstreamBuffer call, instead? > > > > > > > > VDA::Client::DismissBitstreamBuffer ended up being either ignored or just > > > > batch-used on all buffers in our VDA implementations. > > > > > > > > > > Not ignored anymore, since we have support for MSE/EME. > > > WebRTC/hangout might be switching resolutions/settings as well and possibly > > > quite often? > > > > > > It might make sense then to request/dismiss some pictures in the working set > > of > > > VideoFrames (I'm assuming after switch to VideoFrame there will be a > constant > > > set of pre-alloced/inited VideoFrames, but not BitstreamBuffers, like in VDA > > > model and VEA::NotifyInputDone() would be equivalent to a Dismiss?). That > > said, > > > this seems like something that could be added later. > > > > > > On the other hand, we can't have a direct parallel to VDA here with output > > > BitstreamBufs, on config/knob change we might need to request larger output > > > buffers to be provided to us from that point? 
> > > > > > > In fact, all the client needs to do is to provide at least this many > > > outstanding > > > > buffers -- there's no restriction on the client providing more. So > > increasing > > > > the number of buffers in use is a non-issue. > > > > Pawel I think this comment was part of your confusion about our earlier > > conversation. If you have a proposal/point here still can you restate it? > > Sorry, my point was how do we want to handle change of resolution initiated by > client. This should be a common Hangouts scenario, where WebRTC switches res and > requests a new one from capturer and needs to update encoder as well. Unless we > want to have a destroy/create/reinit cycle (which I'm not strongly saying is not > ok), DPB() would be useful. Until we have actual HW support for resolution changes, can we punt this? At the moment all we can do anyways is destroy/create/reinit, and that will happen at the RTCVideoEncoder level. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/common/gp... File content/common/gpu/gpu_messages.h (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/content/common/gp... content/common/gpu/gpu_messages.h:749: // Notify the renderer that encoding has been completed on an input buffer. On 2013/08/06 20:41:13, Ami Fischman wrote: > really? I thought this notification was that the frame has been queued for > encoding, not that encoding was done (as visible by BitstreamBufferReady). Reworded and renamed. This comment is a little obtuse. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/common/gp... content/common/gpu/gpu_messages.h:750: // There is not congruent entry point in the media::VideoEncodeAccelerator On 2013/08/06 20:41:13, Ami Fischman wrote: > s/congruent/congruent to/ Changed, but not that way. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... 
File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:42: explicit Impl( On 2013/08/06 20:41:13, Ami Fischman wrote: > explicit no longer necessary Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:47: // and then the instance is then bound forevermore to whichever thread made On 2013/08/06 20:41:13, Ami Fischman wrote: > s/is then/is/ Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:49: // WebRTC expects this call to be synchronous. On 2013/08/06 20:41:13, Ami Fischman wrote: > This sentence is out of place here considering this could well be asynchronous > givne the async_* params. > Ditto for the other methods below that say the same thing. Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:64: void Destroy(); On 2013/08/06 20:41:13, Ami Fischman wrote: > On 2013/08/06 06:16:36, sheu wrote: > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > doco > > > > Done. > > No you didn't. I added one more. I suppose you want more than that. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:294: NOTIFY_ERROR(media::VideoEncodeAccelerator::kInvalidArgumentError); On 2013/08/06 20:41:13, Ami Fischman wrote: > On 2013/08/06 06:16:36, sheu wrote: > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > This is a platform error. > > > > Is it? It's an invalid argument coming back over IPC, so I think this > > qualifies. > > No, it's an invalid argument coming from the VEA impl, not IPC, and is thus part > of the "platform". 
BitstreamBufferReady is part of the VEA::Client interface; this one in particular gets called by GpuVEAHost, which passes it on directly from IPC. So it is coming from IPC. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:333: retval = WEBRTC_VIDEO_CODEC_MEMORY; On 2013/08/06 20:41:13, Ami Fischman wrote: > I guess s/MEMORY/ERROR/ (better to be vague than incorrectly precise) Most kPlatformFailureErrors are memory-related I think -- more useful to be able to distinguish failure classes methinks. I don't feel strongly either way, so let me know if you concur. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:363: base::SharedMemory* input_buffer = input_buffers_[index]; Actually now that I think of it, I should move these into the conditional. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:365: if (video_encoder_) { On 2013/08/06 20:41:13, Ami Fischman wrote: > On 2013/08/06 06:16:36, sheu wrote: > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > why not reverse the test and early-return at the top of the method? > > > > We'd still like to consume the buffers and signal below. > > Isn't the negation an error condition? It is, but I'd still like to leave the Impl in a self-consistent state even if an error occurs. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:367: // the encoder. On 2013/08/06 20:41:13, Ami Fischman wrote: > Please add a TODO pointing to a crbug for the remaining work. Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:456: #undef NOTIFY_ERROR On 2013/08/06 20:41:13, Ami Fischman wrote: > go above separator Done. 
https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:488: int32_t initialization_retval = WEBRTC_VIDEO_CODEC_UNINITIALIZED; On 2013/08/06 20:41:13, Ami Fischman wrote: > On 2013/08/06 06:16:36, sheu wrote: > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > any reason not to use impl_status_ for this? > > > (here and elsewhere) > > > > impl_status_ belongs on another thread. > > What does that mean? Yeah ok that made no sense. Anyways I was thinking that impl_ and impl_status_ are tied together implicitly and I would like to avoid mucking with it where I don't also set impl_ to NULL. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:509: // DCHECK(thread_checker_.CalledOnValidThread()); On 2013/08/06 20:41:13, Ami Fischman wrote: > On 2013/08/06 06:16:36, sheu wrote: > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > uncomment > > > > This fails presently for reasons unknown that I still have to dig up with the > > WebRTC folks. So, it's in progress. > > > > Big-ass TODO please, then. Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:621: if (!impl_) On 2013/08/06 20:41:13, Ami Fischman wrote: > impossible given l.623 (and the fact that the only caller of this function > passes it a |this| for |impl|. Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/media/video/video... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/media/video/video... media/video/video_encode_accelerator.h:70: // encoder in bytes. On 2013/08/06 20:41:13, Ami Fischman wrote: > On 2013/08/06 06:16:36, sheu wrote: > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > How does the client know how many buffers of this size to provide? > > > > It can provide as many or few as it wants, > 0. 
> > What an interesting choice. Does the encoder really not want to have a say in > this?? Yeah, it doesn't care. More than one buffer just hides the IPC latency, pretty much, and I think the client should be the one figuring that out. https://chromiumcodereview.appspot.com/20632002/diff/190001/media/video/video... media/video/video_encode_accelerator.h:107: // TODO(sheu): handle resolution changes. On 2013/08/06 20:41:13, Ami Fischman wrote: > crbug please > > (Note that this isn't pie-in-the-sky - it'll be here in a week or two > hopefully.) Done. https://chromiumcodereview.appspot.com/20632002/diff/190001/media/video/video... media/video/video_encode_accelerator.h:122: // BistreamBufferReady(). On 2013/08/06 20:41:13, Ami Fischman wrote: > s/./, so call it each time the BitstreamBuffer is ready for re-use./ Done.
https://chromiumcodereview.appspot.com/20632002/diff/213002/media/media.gyp File media/media.gyp (right): https://chromiumcodereview.appspot.com/20632002/diff/213002/media/media.gyp#n... media/media.gyp:354: 'filters/gpu_video_accelerator_factories.h', Nit: one of my earlier comments might have been missed. Please put gpu_video_accelerator_factories.* before gpu_video_decoder.* (alphabetic order). Thanks.
LGTM % these comments. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:64: void Destroy(); On 2013/08/07 09:25:03, sheu wrote: > On 2013/08/06 20:41:13, Ami Fischman wrote: > > On 2013/08/06 06:16:36, sheu wrote: > > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > > doco > > > > > > Done. > > > > No you didn't. > > I added one more. I suppose you want more than that. I wanted the doco to talk about how it deletes this but then realized it didn't but forgot to update my CR comment. Your PS#8 doco is great. https://chromiumcodereview.appspot.com/20632002/diff/190001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:333: retval = WEBRTC_VIDEO_CODEC_MEMORY; On 2013/08/07 09:25:03, sheu wrote: > On 2013/08/06 20:41:13, Ami Fischman wrote: > > I guess s/MEMORY/ERROR/ (better to be vague than incorrectly precise) > > Most kPlatformFailureErrors are memory-related I think -- more useful to be able > to distinguish failure classes methinks. > > I don't feel strongly either way, so let me know if you concur. I worry that MEMORY is specific enough that it'll mislead webrtc ppl. Let's switch to ERROR, which is at least vague enough that debuggers will have to look. https://chromiumcodereview.appspot.com/20632002/diff/213002/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/213002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:314: NOTIFY_ERROR(media::VideoEncodeAccelerator::kInvalidArgumentError); On 2013/08/07 09:25:03, sheu wrote: > On 2013/08/06 20:41:13, Ami Fischman wrote: > > On 2013/08/06 06:16:36, sheu wrote: > > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > > This is a platform error. > > > > > > Is it? 
It's an invalid argument coming back over IPC, so I think this > > > qualifies. > > > > No, it's an invalid argument coming from the VEA impl, not IPC, and is thus > part > > of the "platform". > > BitstreamBufferReady is part of the VEA::Client interface; this one in > particular gets called by GpuVEAHost, which passes it on directly from IPC. So > it is coming from IPC. My point is that the id here came from GVEAH which got it from GVEA which got it from EVEA (or another VEA impl). In any case, it's the VEA impl that crafted the id being passed here. InvalidArgumentError is meant to signal to VEA _clients_ that they done messed up in one of their calls to VEA's public API, but _this_ error here cannot have been triggered by such an error on the client's behalf, hence my objection to this being an InvalidArgument and the suggestion that it should be a PlatformError, which is used to signal to clients that something went wrong in the impl of VEA, which is what has happened here because the VEA impl did something wrong and came up with this invalid id for us. If this still doesn't make sense to you then let's VC. https://chromiumcodereview.appspot.com/20632002/diff/213002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:390: if (video_encoder_) { I don't get it. Why not replace this with if (!video_encoder_) return; ? What about l.446-449 leaves the Impl in a more "self-consistent" state than this option?
Added support for dynamically changing framerate. The RequestEncodingParametersChange entry point in VEA has added new parameters. fischman@: PTAL https://chromiumcodereview.appspot.com/20632002/diff/213002/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/213002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:314: NOTIFY_ERROR(media::VideoEncodeAccelerator::kInvalidArgumentError); On 2013/08/07 21:07:15, Ami Fischman wrote: > On 2013/08/07 09:25:03, sheu wrote: > > On 2013/08/06 20:41:13, Ami Fischman wrote: > > > On 2013/08/06 06:16:36, sheu wrote: > > > > On 2013/08/05 18:44:38, Ami Fischman wrote: > > > > > This is a platform error. > > > > > > > > Is it? It's an invalid argument coming back over IPC, so I think this > > > > qualifies. > > > > > > No, it's an invalid argument coming from the VEA impl, not IPC, and is thus > > part > > > of the "platform". > > > > BitstreamBufferReady is part of the VEA::Client interface; this one in > > particular gets called by GpuVEAHost, which passes it on directly from IPC. > So > > it is coming from IPC. > > My point is that the id here came from > GVEAH which got it from GVEA which got it from EVEA (or another VEA impl). In > any case, it's the VEA impl that crafted the id being passed here. > InvalidArgumentError is meant to signal to VEA _clients_ that they done messed > up in one of their calls to VEA's public API, but _this_ error here cannot have > been triggered by such an error on the client's behalf, hence my objection to > this being an InvalidArgument and the suggestion that it should be a > PlatformError, which is used to signal to clients that something went wrong in > the impl of VEA, which is what has happened here because the VEA impl did > something wrong and came up with this invalid id for us. > > If this still doesn't make sense to you then let's VC. Alright sure, that makes sense. 
I've made the corresponding changes in GpuVEA as well. https://chromiumcodereview.appspot.com/20632002/diff/213002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:361: retval = WEBRTC_VIDEO_CODEC_MEMORY; Done. https://chromiumcodereview.appspot.com/20632002/diff/213002/content/renderer/... content/renderer/media/rtc_video_encoder.cc:390: if (video_encoder_) { On 2013/08/07 21:07:15, Ami Fischman wrote: > I don't get it. Why not replace this with > if (!video_encoder_) > return; > ? What about l.446-449 leaves the Impl in a more "self-consistent" state than > this option? In the error state (where video_encoder_ is NULL), we still want to: 1. clear input_next_frame_{,keyframe_} so we drop the current frame and its state 2. signal the waiter (or we will hang the RTCVideoEncoder thread) https://chromiumcodereview.appspot.com/20632002/diff/213002/media/media.gyp File media/media.gyp (right): https://chromiumcodereview.appspot.com/20632002/diff/213002/media/media.gyp#n... media/media.gyp:354: 'filters/gpu_video_accelerator_factories.h', On 2013/08/07 16:33:17, hshi1 wrote: > Nit: one of my earlier comments might have been missed. Please put > gpu_video_accelerator_factories.* before gpu_video_decoder.* (alphabetic order). > Thanks. Done.
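The two obligations sheu lists for the error state above (drop the pending frame's state, and signal the waiter so the RTCVideoEncoder thread doesn't hang) can be sketched like this, with std::condition_variable standing in for Chromium's waitable event and illustrative names throughout.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Stand-in for the async waiter the encoder thread blocks on.
class AsyncWaiter {
 public:
  // Media-thread side: wake the waiter with a return value.
  void Signal(int retval) {
    std::lock_guard<std::mutex> lock(lock_);
    retval_ = retval;
    signaled_ = true;
    cond_.notify_one();
  }

  // Encoder-thread side: block until Signal() has been called.
  int Wait() {
    std::unique_lock<std::mutex> lock(lock_);
    cond_.wait(lock, [this] { return signaled_; });
    signaled_ = false;
    return retval_;
  }

 private:
  std::mutex lock_;
  std::condition_variable cond_;
  bool signaled_ = false;
  int retval_ = 0;
};

// EncodeOneFrame when the encoder has already been torn down
// (video_encoder_ == NULL in the real code): we cannot encode, but we must
// still clear the pending-frame state and signal, or the caller hangs.
int EncodeInErrorState(AsyncWaiter* waiter, bool* pending_frame) {
  *pending_frame = false;       // 1. Drop the current frame and its state.
  const int kWebrtcError = -1;  // Stand-in for a WEBRTC_VIDEO_CODEC_* error.
  waiter->Signal(kWebrtcError); // 2. Wake the blocked encoder thread.
  return kWebrtcError;
}
```

This is exactly why the early-return suggested earlier in the thread was declined: an early return would skip step 2 and deadlock the calling thread.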
https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... File content/renderer/media/media_stream_dependency_factory.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... content/renderer/media/media_stream_dependency_factory.cc:524: rtc_encoding_capturer_factory->AsWeakPtr()); Sorry please do one more rebase... the above block of code is now gone (EVS reverted). Thanks
https://codereview.chromium.org/20632002/diff/232001/content/renderer/media/m... File content/renderer/media/media_stream_dependency_factory.cc (right): https://codereview.chromium.org/20632002/diff/232001/content/renderer/media/m... content/renderer/media/media_stream_dependency_factory.cc:515: const CommandLine& command_line = *CommandLine::ForCurrentProcess(); nit: this is redundant. You can use the "cmd_line" declared at line 500.
PS#8->#9 still LGTM % nits https://chromiumcodereview.appspot.com/20632002/diff/232001/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/232001/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:125: NotifyError(media::VideoEncodeAccelerator::kPlatformFailureError); I think these changes are wrong; here, the triggering condition can be specified by the ultimate VEA::Client making the calls, so these really are InvalidArgumentErrors. Unless you intend that the GVEAH impose sanity, and so say that anything that makes it from the VEA::Client past GVEAH to here is a platform failure. https://chromiumcodereview.appspot.com/20632002/diff/232001/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:211: encoder_->RequestEncodingParametersChange( If denom is 0 then InvalidArgumentError to avoid bad renderer crashing browser. https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:451: SignalAsyncWaiter(WEBRTC_VIDEO_CODEC_OK); I'm vaguely worried that this will send the client mixed messages; previously an error might have been delivered (and thus video_encoder_ is NULL in this method) but now we deliver OK. Up to you. https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:618: 1)); nit: why support num/denom if denom is always 1? Could instead have integral frames_per_second param and call it a day.
https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:618: 1)); On 2013/08/08 23:08:19, Ami Fischman wrote: > nit: why support num/denom if denom is always 1? > Could instead have integral frames_per_second param and call it a day. I guess this is to permit common fractional frame rates such as 30000/1001 = 29.97Hz?
Sending out for wider OWNERS reviews. https://chromiumcodereview.appspot.com/20632002/diff/232001/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/232001/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:125: NotifyError(media::VideoEncodeAccelerator::kPlatformFailureError); On 2013/08/08 23:08:19, Ami Fischman wrote: > I think these changes are wrong; here, the triggering condition can be specified > by the ultimate VEA::Client making the calls, so these really are > InvalidArgumentErrors. > > Unless you intend that the GVEAH impose sanity, and so say that anything that > makes it from the VEA::Client past GVEAH to here is a platform failure. The triggering condition isn't actually from the VEA::Client. The frame_id parameter comes from an implementation detail of the GpuVEA<->GpuVEAH connection; the VEA interface only specifies a media::VideoFrame input to Encode() (and no EncodeDone() or similar callback), so as an implementation detail the GpuVEAH synthesizes a frame_id to track the VideoFrame and passes it to GpuVEA. If the frame_id is invalid here, it indicates a bug in GpuVEAH itself, not in the VEA::Client. https://chromiumcodereview.appspot.com/20632002/diff/232001/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:211: encoder_->RequestEncodingParametersChange( On 2013/08/08 23:08:19, Ami Fischman wrote: > If denom is 0 then InvalidArgumentError to avoid bad renderer crashing browser. I'll just make it 1-parameter and dispense with this problem. Still 0-checking it though; thanks. https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... File content/renderer/media/rtc_video_encoder.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/...
content/renderer/media/rtc_video_encoder.cc:451: SignalAsyncWaiter(WEBRTC_VIDEO_CODEC_OK); On 2013/08/08 23:08:19, Ami Fischman wrote: > I'm vaguely worried that this will send the client mixed messages; previously an > error might have been delivered (and thus video_encoder_ is NULL in this method) > but now we deliver OK. > Up to you. Done. https://chromiumcodereview.appspot.com/20632002/diff/232001/content/renderer/... content/renderer/media/rtc_video_encoder.cc:618: 1)); On 2013/08/09 00:01:10, hshi1 wrote: > On 2013/08/08 23:08:19, Ami Fischman wrote: > > nit: why support num/denom if denom is always 1? > > Could instead have integral frames_per_second param and call it a day. > > I guess this is to permit common fractional frame rates such as 30000/1001 = > 29.97Hz? Since the only current user (WebRTC) uses only integral framerates, I suppose we could simplify things.
I need some OWNERS signoffs. jochen@: content/browser/renderer_host/render_process_host_impl.cc, content/renderer/* piman@: content/common/gpu/*, content/*.gypi, content/public/common/content_switches.*
On 2013/08/09 00:20:14, sheu wrote: > I need some OWNERS signoffs. > > jochen@: content/browser/renderer_host/render_process_host_impl.cc, > content/renderer/* > piman@: content/common/gpu/*, content/*.gypi, > content/public/common/content_switches.* PTAL, thanks!
On 2013/08/09 00:20:32, sheu wrote: > On 2013/08/09 00:20:14, sheu wrote: > > I need some OWNERS signoffs. > > > > jochen@: content/browser/renderer_host/render_process_host_impl.cc, > > content/renderer/* > > piman@: content/common/gpu/*, content/*.gypi, > > content/public/common/content_switches.* > > PTAL, thanks! To be clear: jochen@: content/renderer/media/* has been covered by fischman@.
On 2013/08/09 00:26:57, sheu wrote: > On 2013/08/09 00:20:32, sheu wrote: > > On 2013/08/09 00:20:14, sheu wrote: > > > I need some OWNERS signoffs. > > > > > > jochen@: content/browser/renderer_host/render_process_host_impl.cc, > > > content/renderer/* > > > piman@: content/common/gpu/*, content/*.gypi, > > > content/public/common/content_switches.* > > > > PTAL, thanks! > > To be clear: > > jochen@: content/renderer/media/* has been covered by fischman@. One last thing: piman@: if you feel like reviewing everything in content/* that's not in content/renderer/media/* (which is about half of it actually), that's fine by me too.
Adding cevans@ for IPC message review. Thanks! cevans@: content/common/gpu/gpu_messages.h
I defer to Antoine
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/browser/r... File content/browser/renderer_host/render_process_host_impl.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/browser/r... content/browser/renderer_host/render_process_host_impl.cc:945: switches::kEnableWebRtcHWEncoding, Please also add to kForwardSwitches in chrome/browser/chromeos/login/chrome_restart_request.cc so that it also applies in guest mode. (side note: it looks like other flags aren't there, they should be). https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/client/gpu_channel_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_channel_host.cc:193: int32 route_id; nit: initialize https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:113: // Mask against 30 bits, to avoid (undefined) wraparound on signed integer. why not use unsigned? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:198: if (!channel_) { I think all the callers are checking channel_ anyway. Can we assert instead? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:202: OnNotifyError(kPlatformFailureError); dead code (after return). Do you need the explicit return? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:207: return; same here, do you need the explicit return? 
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:163: base::Unretained(this), what makes this Unretained safe? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:169: NOTREACHED(); This comes from the untrusted renderer, so it can be reached. Maybe DVLOG instead? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:199: media::BitstreamBuffer(buffer_id, buffer_handle, buffer_size)); what happens if buffer_handle is invalid, or doesn't match buffer_size? Is every encoder required to implement the security around it? Can we do it here instead?
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/browser/r... File content/browser/renderer_host/render_process_host_impl.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/browser/r... content/browser/renderer_host/render_process_host_impl.cc:945: switches::kEnableWebRtcHWEncoding, On 2013/08/09 04:04:58, piman wrote: > Please also add to kForwardSwitches in > chrome/browser/chromeos/login/chrome_restart_request.cc so that it also applies > in guest mode. > (side note: it looks like other flags aren't there, they should be). I am just gonna umm copy them to chrome_restart_request.cc and try not to look too closely. DeriveCommandLine() is a _mess_. Shouldn't there be a better way to do this? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/client/gpu_channel_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_channel_host.cc:193: int32 route_id; On 2013/08/09 04:04:58, piman wrote: > nit: initialize Done. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:113: // Mask against 30 bits, to avoid (undefined) wraparound on signed integer. On 2013/08/09 04:04:58, piman wrote: > why not use unsigned? Generally using negative values as error conditions. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:198: if (!channel_) { On 2013/08/09 04:04:58, piman wrote: > I think all the callers are checking channel_ anyway. Can we assert instead? All things considered, I think we should remove the other checks. Done. 
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:202: OnNotifyError(kPlatformFailureError); On 2013/08/09 04:04:58, piman wrote: > dead code (after return). > > Do you need the explicit return? Done. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:207: return; On 2013/08/09 04:04:58, piman wrote: > same here, do you need the explicit return? Done. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:163: base::Unretained(this), On 2013/08/09 04:04:58, piman wrote: > what makes this Unretained safe? That's an interesting question, I guess. I was assuming that this would wrap the lifetime of the underlying VEA, which would would be the only owners of the VideoFrame. But I guess I can't assume that the VEA won't pass this off elsewhere, or even that we can bound the lifetime of the VEA (since it's only Destroy()ed, not explicitly destructed.) So, moving to weak_ptr. You're gonna _love_ this tower of base::Bind()-ing. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:169: NOTREACHED(); On 2013/08/09 04:04:58, piman wrote: > This comes from the untrusted renderer, so it can be reached. Maybe DVLOG > instead? Right, if it's reached, we proceed to return an error below. Do we want to not NOTREACHED() this? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... 
content/common/gpu/media/gpu_video_encode_accelerator.cc:199: media::BitstreamBuffer(buffer_id, buffer_handle, buffer_size)); On 2013/08/09 04:04:58, piman wrote: > what happens if buffer_handle is invalid, or doesn't match buffer_size? > Is every encoder required to implement the security around it? > Can we do it here instead? If the buffer handle is invalid, mapping it will fail. There's no way to know its buffer size though other than what we're passed through IPC, so there really isn't a way around this. We're relying on the security of the underlying mmap(). media::BitstreamBuffer is spec-ed in terms of handles, so handles we do. It serves for the decoder.
https://codereview.chromium.org/20632002/diff/286002/content/public/common/co... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/20632002/diff/286002/content/public/common/co... content/public/common/content_switches.cc:236: Ditto (rebase). https://codereview.chromium.org/20632002/diff/286002/content/public/common/co... File content/public/common/content_switches.h (right): https://codereview.chromium.org/20632002/diff/286002/content/public/common/co... content/public/common/content_switches.h:86: extern const char kEnableWebRtcHWEncoding[]; Please rebase & resolve... this section of switches has moved. https://codereview.chromium.org/20632002/diff/286002/media/base/video_frame.cc File media/base/video_frame.cc (right): https://codereview.chromium.org/20632002/diff/286002/media/base/video_frame.c... media/base/video_frame.cc:128: base::SharedMemoryHandle shm_handle, Need to rebase once more... |shm_handle| is gone
https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:95: int32 initial_bitrate) { Input parameter validation. - What about a negative, zero, or crazy-big initial_bitrate? - AFAIK, we _still_ don't have generic gfx::Size validation, so we should probably check for integer overflow in width * height. - What about unrecognized input_format or output_profile? https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:145: static_cast<size_t>(input_coded_size_.GetArea() * 3 / 2)) { Where does input_coded_size_ come from? Two possible integer overflows here? https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:211: uint32 framerate) { Parameter validation. Also, why is bitrate signed? Does a negative bitrate make any sense? If not, an unsigned type is much safer.
https://codereview.chromium.org/20632002/diff/292003/media/video/video_encode... File media/video/video_encode_accelerator.h (right): https://codereview.chromium.org/20632002/diff/292003/media/video/video_encode... media/video/video_encode_accelerator.h:132: // |framerate_denom| is the denominator of the framerate as a fraction. The function does not take the documented params.
IPC fixes done, thanks. cevans@ is out so I'm fishing for another IPC reviewer. cdn@: PTAL at content/common/gpu/messages.h and content/common/gpu/{media,client}/gpu_video_encode_accelerator*, thanks. https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:95: int32 initial_bitrate) { On 2013/08/11 22:05:41, Chris Evans wrote: > Input parameter validation. > - What about a negative, zero or crazy-bit initial_bitrate. > - AFAIK, we _still_ don't have generic gfx::Size validation, so we should > probably check for integer overflow in width * height. > - What about unrecognized input_format or output_profile? initial_bitrate is made unsigned now. If zero or crazy, that should be up to the encoder to decide (since different encoders may have different ideas of crazy). I'll check input_visible_size. input_format and output_profile being unrecognized values again should be handled by the encoder, since different encoders may support different formats. Internally, we make sure to handle bad values for input_format_ when used (e.g. in OnEncode()). https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:145: static_cast<size_t>(input_coded_size_.GetArea() * 3 / 2)) { On 2013/08/11 22:05:41, Chris Evans wrote: > Where does input_coded_size_ come from? Two possible integer overflows here? input_coded_size_ is reported from the VideoEncodeAccelerator instance within this process, in RequireBitstreamBuffers. If the instance reports an overflowing size, it's got problems too. https://chromiumcodereview.appspot.com/20632002/diff/292003/content/common/gp... 
content/common/gpu/media/gpu_video_encode_accelerator.cc:211: uint32 framerate) { On 2013/08/11 22:05:41, Chris Evans wrote: > Parameter validation. > Also, why is bitrate signed? Does a negative bitrate make any sense? If not, an > unsigned type is much safer. Bitrate is signed to allow negative values as special values. We're not using that facility though, so for the purposes of safety I suppose I could change this. I'm expecting implementers of the VideoEncodeAccelerator interface to validate parameters, since requirements may differ. https://chromiumcodereview.appspot.com/20632002/diff/292003/media/video/video... File media/video/video_encode_accelerator.h (right): https://chromiumcodereview.appspot.com/20632002/diff/292003/media/video/video... media/video/video_encode_accelerator.h:132: // |framerate_denom| is the denominator of the framerate as a fraction. On 2013/08/12 01:50:05, posciak wrote: > The function does not take the documented params. Done.
I was missing some changes from the last patchset. Reuploaded.
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:163: base::Unretained(this), On 2013/08/09 08:26:41, sheu wrote: > On 2013/08/09 04:04:58, piman wrote: > > what makes this Unretained safe? > > That's an interesting question, I guess. I was assuming that this would wrap > the lifetime of the underlying VEA, which would would be the only owners of the > VideoFrame. > > But I guess I can't assume that the VEA won't pass this off elsewhere, or even > that we can bound the lifetime of the VEA (since it's only Destroy()ed, not > explicitly destructed.) > > So, moving to weak_ptr. You're gonna _love_ this tower of base::Bind()-ing. Why do you then need the extra PostTask? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:169: NOTREACHED(); On 2013/08/09 08:26:41, sheu wrote: > On 2013/08/09 04:04:58, piman wrote: > > This comes from the untrusted renderer, so it can be reached. Maybe DVLOG > > instead? > > Right, if it's reached, we proceed to return an error below. Do we want to not > NOTREACHED() this? It can give a false sense of security, which, for security-critical code (IPC handler) is rather unfortunate. Hence DVLOG? https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:199: media::BitstreamBuffer(buffer_id, buffer_handle, buffer_size)); On 2013/08/09 08:26:41, sheu wrote: > On 2013/08/09 04:04:58, piman wrote: > > what happens if buffer_handle is invalid, or doesn't match buffer_size? > > Is every encoder required to implement the security around it? > > Can we do it here instead? > > If the buffer handle is invalid, mapping it will fail. 
> There's no way to know its buffer size though other than what we're passed
> through IPC, so there really isn't a way around this. We're relying on the
> security of the underlying mmap().

We want to make sure this is properly communicated, and handled by all the encoders. E.g. we don't want the encoder to make security decisions based on the size until it knows that the size is correct. Really ideally the checking code would be here.

> media::BitstreamBuffer is spec-ed in terms of handles, so handles we do. It
> serves for the decoder.
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:163: base::Unretained(this), On 2013/08/12 21:43:40, piman wrote: > On 2013/08/09 08:26:41, sheu wrote: > > On 2013/08/09 04:04:58, piman wrote: > > > what makes this Unretained safe? > > > > That's an interesting question, I guess. I was assuming that this would wrap > > the lifetime of the underlying VEA, which would would be the only owners of > the > > VideoFrame. > > > > But I guess I can't assume that the VEA won't pass this off elsewhere, or even > > that we can bound the lifetime of the VEA (since it's only Destroy()ed, not > > explicitly destructed.) > > > > So, moving to weak_ptr. You're gonna _love_ this tower of base::Bind()-ing. > > Why do you then need the extra PostTask? The VideoFrame destructor will call the callback, but the destructor might itself be called on any thread since we're holding references to the VideoFrame through a thread-safe scoped_refptr. We want to make sure the call to EncodeFrameFinished happens on this thread. Thus the tower of power. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:169: NOTREACHED(); On 2013/08/12 21:43:40, piman wrote: > On 2013/08/09 08:26:41, sheu wrote: > > On 2013/08/09 04:04:58, piman wrote: > > > This comes from the untrusted renderer, so it can be reached. Maybe DVLOG > > > instead? > > > > Right, if it's reached, we proceed to return an error below. Do we want to > not > > NOTREACHED() this? > > It can give a false sense of security, which, for security-critical code (IPC > handler) is rather unfortunate. Hence DVLOG? 
Sure -- in any case though this has been pulled into media::VideoFrame::WrapSharedMemoryBuffer(), so it's moot now. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:199: media::BitstreamBuffer(buffer_id, buffer_handle, buffer_size)); On 2013/08/12 21:43:40, piman wrote: > We want to make sure this is properly communicated, and handled by all the > encoders. E.g. we don't want the encoder to make security decisions based on the > size until it knows that the size is correct. > > Really ideally the checking code would be here. Right, but how do we know the buffer size? It's going to have to come down IPC somehow.
Given that this is just GPU <-> renderer I am ok with giving it a stamp once these changes are made. I don't know the code at all though and all the shared memory buffers scares the bejezus out of me. aedla: can you take a look at this stuff at some point after it lands just to make sure everything is sane. I believe you have looked at this area extensively before? https://codereview.chromium.org/20632002/diff/338001/content/common/gpu/clien... File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://codereview.chromium.org/20632002/diff/338001/content/common/gpu/clien... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:162: int input_count, Do not send int over IPC. Specify size (int32, etc) https://codereview.chromium.org/20632002/diff/338001/content/renderer/media/r... File content/renderer/media/rtc_video_encoder.cc (right): https://codereview.chromium.org/20632002/diff/338001/content/renderer/media/r... content/renderer/media/rtc_video_encoder.cc:256: int input_count, Why is this signed? https://codereview.chromium.org/20632002/diff/338001/content/renderer/media/r... content/renderer/media/rtc_video_encoder.cc:269: for (int i = 0; i < input_count + kInputBufferExtraCount; ++i) { I don't see anything that would prevent input_count from being negative here https://codereview.chromium.org/20632002/diff/338001/content/renderer/media/r... content/renderer/media/rtc_video_encoder.cc:407: int stride = next_frame->stride(webrtc::kYPlane); passing signed ints to memcpy scares me. Why is this signed? Can this be negative?
Signedness stuff done. cdn@: PTAL https://codereview.chromium.org/20632002/diff/338001/content/common/gpu/clien... File content/common/gpu/client/gpu_video_encode_accelerator_host.cc (right): https://codereview.chromium.org/20632002/diff/338001/content/common/gpu/clien... content/common/gpu/client/gpu_video_encode_accelerator_host.cc:162: int input_count, On 2013/08/12 23:29:02, Cris Neckar wrote: > Do not send int over IPC. Specify size (int32, etc) Done.
piman@: anything else?
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:163: base::Unretained(this), On 2013/08/12 21:58:18, sheu wrote: > On 2013/08/12 21:43:40, piman wrote: > > On 2013/08/09 08:26:41, sheu wrote: > > > On 2013/08/09 04:04:58, piman wrote: > > > > what makes this Unretained safe? > > > > > > That's an interesting question, I guess. I was assuming that this would > wrap > > > the lifetime of the underlying VEA, which would would be the only owners of > > the > > > VideoFrame. > > > > > > But I guess I can't assume that the VEA won't pass this off elsewhere, or > even > > > that we can bound the lifetime of the VEA (since it's only Destroy()ed, not > > > explicitly destructed.) > > > > > > So, moving to weak_ptr. You're gonna _love_ this tower of base::Bind()-ing. > > > > Why do you then need the extra PostTask? > > The VideoFrame destructor will call the callback, but the destructor might > itself be called on any thread since we're holding references to the VideoFrame > through a thread-safe scoped_refptr. We want to make sure the call to > EncodeFrameFinished happens on this thread. > > Thus the tower of power. Ok. https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:199: media::BitstreamBuffer(buffer_id, buffer_handle, buffer_size)); On 2013/08/12 21:58:18, sheu wrote: > On 2013/08/12 21:43:40, piman wrote: > > We want to make sure this is properly communicated, and handled by all the > > encoders. E.g. we don't want the encoder to make security decisions based on > the > > size until it knows that the size is correct. > > > > Really ideally the checking code would be here. > > Right, but how do we know the buffer size? 
> It's going to have to come down IPC somehow.

You have no way of knowing it (well, fstat would on posix), but you could verify that mapping it with that size succeeds.
https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... File content/common/gpu/media/gpu_video_encode_accelerator.cc (right): https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gp... content/common/gpu/media/gpu_video_encode_accelerator.cc:199: media::BitstreamBuffer(buffer_id, buffer_handle, buffer_size)); On 2013/08/13 01:00:04, piman wrote: > On 2013/08/12 21:58:18, sheu wrote: > > On 2013/08/12 21:43:40, piman wrote: > > > We want to make sure this is properly communicated, and handled by all the > > > encoders. E.g. we don't want the encoder to make security decisions based on > > the > > > size until it knows that the size is correct. > > > > > > Really ideally the checking code would be here. > > > > Right, but how do we know the buffer size? It's going to have to come down > IPC > > somehow. > > You have no way of knowing it (well, fstat would on posix), but you could verify > that mapping it with that size succeeds. Right, and the solution adopted elsewhere (i.e. the video decoders) is to check for failure on mapping when the mapping occurs, in the individual encoder. It would be good to do it at a higher level, but since we just pass a handle down to the encoder, another mapping here would be redundant.
IPC changes lgtm
On Mon, Aug 12, 2013 at 6:07 PM, <sheu@chromium.org> wrote:
> https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gpu/media/gpu_video_encode_accelerator.cc
> File content/common/gpu/media/gpu_video_encode_accelerator.cc (right):
>
> https://chromiumcodereview.appspot.com/20632002/diff/294002/content/common/gpu/media/gpu_video_encode_accelerator.cc#newcode199
> content/common/gpu/media/gpu_video_encode_accelerator.cc:199:
> media::BitstreamBuffer(buffer_id, buffer_handle, buffer_size));
> On 2013/08/13 01:00:04, piman wrote:
>> On 2013/08/12 21:58:18, sheu wrote:
>> > On 2013/08/12 21:43:40, piman wrote:
>> > > We want to make sure this is properly communicated, and handled by all the
>> > > encoders. E.g. we don't want the encoder to make security decisions based on the
>> > > size until it knows that the size is correct.
>> > >
>> > > Really ideally the checking code would be here.
>> >
>> > Right, but how do we know the buffer size? It's going to have to come down IPC
>> > somehow.
>>
>> You have no way of knowing it (well, fstat would on posix), but you could verify
>> that mapping it with that size succeeds.
>
> Right, and the solution adopted elsewhere (i.e. the video decoders) is
> to check for failure on mapping when the mapping occurs, in the
> individual encoder.
>
> It would be good to do it at a higher level, but since we just pass a
> handle down to the encoder, another mapping here would be redundant.

As long as we're very careful about it, LGTM.

> https://chromiumcodereview.appspot.com/20632002/
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sheu@chromium.org/20632002/363001
Step "update" is always a major failure. Look at the try server FAQ for more details. http://build.chromium.org/p/tryserver.chromium/buildstatus?builder=linux_clan...
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sheu@chromium.org/20632002/363001
Sorry for I got bad news for ya. Compile failed with a clobber build on android_clang_dbg. http://build.chromium.org/p/tryserver.chromium/buildstatus?builder=android_cl... Your code is likely broken or HEAD is junk. Please ensure your code is not broken then alert the build sheriffs. Look at the try server FAQ for more details.
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sheu@chromium.org/20632002/356001
Retried try job too often on mac_rel for step(s) telemetry_unittests http://build.chromium.org/p/tryserver.chromium/buildstatus?builder=mac_rel&nu...
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sheu@chromium.org/20632002/356001
Retried try job too often on mac_rel for step(s) telemetry_unittests http://build.chromium.org/p/tryserver.chromium/buildstatus?builder=mac_rel&nu...
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/sheu@chromium.org/20632002/356001
Message was sent while issue was closed.
Change committed as 217276 |