Description
Fix Vaapi VDA flush handling during resolution change
Suppose the VAVDA client requests Decode() N times and then a Flush(). These
requests are posted to the decoder thread and handled by DecodeTask and
FlushTask. Ideally DecodeTask decodes all N inputs, so no pending input
remains when FlushTask starts.
However, if DecodeTask encounters a resolution change while processing one of
the input buffers, it has to allocate new surfaces. It posts a task to the
main thread and expects a later DecodeTask to come back and finish the
remaining inputs. While the main thread is allocating the new surfaces, the
decoder thread runs the next task in its queue --- FlushTask, which therefore
starts before all input buffers are done.
The main idea of this CL is to enqueue a dummy buffer to indicate flushing,
instead of relying on the kFlushing state.
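As a rough illustration of the CL's approach, here is a minimal, self-contained sketch (hypothetical names; not the actual vaapi_video_decode_accelerator.cc code) of using a dummy buffer in the input queue as the flush marker, so the flush cannot overtake pending inputs:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <memory>
#include <vector>

// Hypothetical sketch, not the committed Chromium code: a buffer with no
// backing memory (shm == nullptr) marks the flush point. Because the marker
// sits in the same queue as the inputs, the flush is processed strictly
// after every earlier input, even if decoding is interrupted (e.g. by a
// resolution change) and resumed later.
struct InputBuffer {
  int32_t id = -1;
  std::unique_ptr<std::vector<uint8_t>> shm;  // stand-in for SharedMemoryRegion
  bool is_flush() const { return shm == nullptr; }
};

class DecoderSketch {
 public:
  void QueueDecode(int32_t id, std::vector<uint8_t> data) {
    auto buf = std::make_unique<InputBuffer>();
    buf->id = id;
    buf->shm = std::make_unique<std::vector<uint8_t>>(std::move(data));
    queue_.push_back(std::move(buf));
  }
  // Flush() enqueues a dummy buffer instead of flipping a kFlushing state.
  void QueueFlush() { queue_.push_back(std::make_unique<InputBuffer>()); }
  // Drains the queue in order; stops when the flush marker is reached.
  std::vector<int32_t> DecodeUntilFlush() {
    std::vector<int32_t> decoded;
    while (!queue_.empty()) {
      std::unique_ptr<InputBuffer> buf = std::move(queue_.front());
      queue_.pop_front();
      if (buf->is_flush()) {
        flushed_ = true;  // the real VDA would notify FlushDone() here
        break;
      }
      decoded.push_back(buf->id);  // the real VDA would decode the bitstream
    }
    return decoded;
  }
  bool flushed() const { return flushed_; }

 private:
  std::deque<std::unique_ptr<InputBuffer>> queue_;
  bool flushed_ = false;
};
```

In the real decoder the drain can be interrupted and re-posted as another DecodeTask; because the marker stays in the queue, ordering is preserved without any extra state.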
BUG=b:30617067, b:34317221
TEST=run ARC CTS test AdaptivePlaybackTest#testVP9_adaptiveDrcEarlyEos
CQ_INCLUDE_TRYBOTS=master.tryserver.chromium.linux:linux_optional_gpu_tests_rel;master.tryserver.chromium.mac:mac_optional_gpu_tests_rel;master.tryserver.chromium.win:win_optional_gpu_tests_rel
Review-Url: https://codereview.chromium.org/2642623002
Cr-Commit-Position: refs/heads/master@{#445951}
Committed: https://chromium.googlesource.com/chromium/src/+/e87f2ba577b96a81dd994ff0cda70ec2c5fef332
Patch Set 1
Patch Set 2: handle b/34317221
Patch Set 3: queue a dummy flush buffer
Total comments: 16
Patch Set 4: allow multiple Flush
Total comments: 6
Patch Set 5: address owen's comment
Total comments: 7
Patch Set 6: address Pawel's comments
Patch Set 7: fix compile due to style check
Messages
Total messages: 38 (17 generated)
Description was changed to add the CQ_INCLUDE_TRYBOTS line (linux_optional_gpu_tests_rel, mac_optional_gpu_tests_rel and win_optional_gpu_tests_rel); the rest of the description is unchanged.
kcwu@chromium.org changed reviewers: + owenlin@chromium.org, posciak@chromium.org
Description was changed to add b:34317221 to the BUG line; the rest of the description is unchanged.
kcwu@chromium.org changed reviewers: + johnylin@chromium.org
Updated my CL to handle the case of b/34317221. I'm doing more tests.
To make the flush flow simpler in general, could we perhaps use/queue a dummy flush input buffer in Flush() instead, like the V4L2 VDAs do? I think we should then be able to remove the kFlushing state and only process flush once we reached the dummy buffer in DecodeTask, which should also simplify the various relationships between tasks?
On 2017/01/18 09:19:41, Pawel Osciak wrote: > To make the flush flow simpler in general, could we perhaps use/queue a dummy > flush input buffer in Flush() instead, like the V4L2 VDAs do? > > I think we should then be able to remove the kFlushing state and only process > flush once we reached the dummy buffer in DecodeTask, which should also simplify > the various relationships between tasks? Agreed, "queue a dummy flush buffer" makes the code simpler. I didn't do it that way originally because it needed more code changes. Anyway, my CL is updated to use a dummy buffer. PTAL.
Description was changed: the summary sentence now reads "The main idea of this CL is to enqueue a dummy buffer to indicate flushing instead of kFlush state" (previously: "making DecodeTask to call FlushTask when all input handled"); the rest of the description is unchanged.
Just verified this patch could also fix b/34317221. Thanks a lot
On 2017/01/19 04:31:09, johnylin1 wrote: > Just verified this patch could also fix b/34317221. > > Thanks a lot Also double-confirmed with: 1. autotest: video_VideoSeek.h264, video_VideoSeek.h264.switchres 2. gts: GtsExoPlayerTestCases com.google.android.exoplayer.gts.DashTest This also covers the TEST cases for https://codereview.chromium.org/2151183002/, since this patch slightly modifies code that 2151183002 changed.
https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:476: // Skip empty buffers. Please see my comment below. Let's do the check in Decode(). https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:586: bool VaapiVideoDecodeAccelerator::IsFlushPending_Locked() { I am thinking maybe we can get rid of this function. It is mainly used to ensure no Decode() or Flush() happens before FlushDone(). But I think we can easily handle those cases by keeping such requests in the input queue and handling them after FlushDone() (instead of rejecting them). For example:
Decode() ---> input A
Flush() ---> flush once A is output
Decode() ---> input B (keep it in the queue)
PictureReady() ---> output A
FlushDone() ---> now we can process input B
We can do this by making GetInputBuffer_Locked() return false while a flush is still being processed. What do you think? https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:937: DCHECK(input_buffers_.empty()) << "Input buffers should be empty because" This looks like a comment rather than something we should print out. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:975: FROM_HERE, base::Bind(&VaapiVideoDecodeAccelerator::DecodeTask, Can we extract the common part of this function and Decode()? Maybe call it QueueInput(bitstream_buffer). My idea is to reject 0-size bitstream buffers in Decode(), and have Flush() call QueueInput with an empty bitstream_buffer. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... 
File media/gpu/vaapi_video_decode_accelerator.h (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.h:214: InputBuffer(); Remove both ctor and dtor? https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.h:222: std::unique_ptr<SharedMemoryRegion> shm; Why not just use "shm == null" to indicate an empty buffer?
https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:586: bool VaapiVideoDecodeAccelerator::IsFlushPending_Locked() { On 2017/01/19 07:39:16, Owen Lin wrote: > I am thinking maybe we can get rid of this function. The function is mainly used > to no Decode() or Flush() before FlushDone(). > > But I think we can easily handle those cases by keep them in the input queue and > handle them until FlushDone() (instead of reject them). > > For example: > > Decode() ---> input A > Flush() ---> Flush until Output A > Decode() ---> input B (Keep it in the queue) > PictureReady() ---> output A > FlushDone() --> Now we can process input B. > > We can do this by making GetInputBuffer_Locked() returns false when still > processing a flush(). > > How do you think? It's intentional to reject them, since this is invalid API usage. Before FlushDone, only Reset is allowed. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:975: FROM_HERE, base::Bind(&VaapiVideoDecodeAccelerator::DecodeTask, On 2017/01/19 07:39:16, Owen Lin wrote: > Can we extract the common part of this function and Decode()? Maybe call it > QueueInput(bitstream_buffer) > > My idea is to reject 0 size bitstream_buffer in Decode, and Flush() calls > QueueInput with a empty bitstream_buffer. > Regarding having Flush() use an empty buffer: I don't want to overload the meaning of an empty buffer. An explicit flag is easier to understand. Otherwise, an empty buffer would be invalid in some functions and mean flush in others. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.h (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... 
media/gpu/vaapi_video_decode_accelerator.h:222: std::unique_ptr<SharedMemoryRegion> shm; On 2017/01/19 07:39:16, Owen Lin wrote: > Why not just use "shm == null" to indicate an empty buffer? Do you mean "shm = null" to assign a default value? If so, it's unnecessary.
https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:586: bool VaapiVideoDecodeAccelerator::IsFlushPending_Locked() { On 2017/01/19 08:26:10, kcwu wrote: > On 2017/01/19 07:39:16, Owen Lin wrote: > > I am thinking maybe we can get rid of this function. The function is mainly > used > > to no Decode() or Flush() before FlushDone(). > > > > But I think we can easily handle those cases by keep them in the input queue > and > > handle them until FlushDone() (instead of reject them). > > > > For example: > > > > Decode() ---> input A > > Flush() ---> Flush until Output A > > Decode() ---> input B (Keep it in the queue) > > PictureReady() ---> output A > > FlushDone() --> Now we can process input B. > > > > We can do this by making GetInputBuffer_Locked() returns false when still > > processing a flush(). > > > > How do you think? > > It's intentional to reject them since this is invalid api usage. > Before FlushDone, only Reset is allowed. Yes, I see your point and it makes total sense. Actually, it is a dilemma for me as well. From an API standpoint, I don't think this restriction is really needed. It makes sense to call Flush() and then keep sending input (if we still accept input after a flush). The behavior can be easily explained without confusion. Besides, I believe this restriction is not documented yet. For now, the restriction complicates the code. Without it, the API is easier to use and still backward compatible. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... 
media/gpu/vaapi_video_decode_accelerator.cc:975: FROM_HERE, base::Bind(&VaapiVideoDecodeAccelerator::DecodeTask, On 2017/01/19 08:26:10, kcwu wrote: > On 2017/01/19 07:39:16, Owen Lin wrote: > > Can we extract the common part of this function and Decode()? Maybe call it > > QueueInput(bitstream_buffer) > > > > My idea is to reject 0 size bitstream_buffer in Decode, and Flush() calls > > QueueInput with a empty bitstream_buffer. > > > > Regarding to Flush() to use empty buffer, I don't want to overload the meaning > of empty buffer. Use an explicit flag is easier to understand. > Otherwise, empty buffer is invalid in some functions and means flush in some > other functions. > First, an empty buffer is not invalid; we do accept empty buffers, but just return them to the client immediately (see l.476). And I believe this will be the only place that differs; in all other places we can treat an empty buffer as a flush. My point here is to extract the common code of Flush() and Decode(), as this part is really complicated and they are actually doing the same thing. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.h (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.h:222: std::unique_ptr<SharedMemoryRegion> shm; On 2017/01/19 08:26:10, kcwu wrote: > On 2017/01/19 07:39:16, Owen Lin wrote: > > Why not just use "shm == null" to indicate an empty buffer? > > Do you mean "shm = null" to assign a default value? If so, it's unnecessary. No, I mean we don't really need a new dummy_flush member; we can use shm == null to indicate an empty buffer. But it's really up to you.
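Owen's suggestion above can be sketched roughly like this (hypothetical names, not the real VDA code): Decode() and Flush() share one queueing helper, Decode() rejects zero-size buffers, and Flush() enqueues a payload-less buffer whose null shm is the flush marker:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <memory>
#include <string>

// Hypothetical sketch of the shared Decode()/Flush() path. A null shm is
// the flush marker, so no separate dummy_flush flag is needed.
struct Input {
  int32_t id = -1;
  std::unique_ptr<std::string> shm;  // nullptr => flush marker
};

class VdaSketch {
 public:
  bool Decode(int32_t id, const std::string& data) {
    if (data.empty()) return false;  // reject zero-size bitstream buffers
    auto in = std::make_unique<Input>();
    in->id = id;
    in->shm = std::make_unique<std::string>(data);
    QueueInput(std::move(in));
    return true;
  }
  // Flush() reuses the same path with an empty (shm == nullptr) input.
  void Flush() { QueueInput(std::make_unique<Input>()); }
  size_t queued() const { return queue_.size(); }

 private:
  void QueueInput(std::unique_ptr<Input> in) {
    queue_.push_back(std::move(in));  // the real code would also post DecodeTask
  }
  std::deque<std::unique_ptr<Input>> queue_;
};
```

The design choice debated here is whether "empty buffer" may carry two meanings (rejected in Decode(), flush marker in the queue); the sketch resolves it by rejecting empties at the API boundary so only Flush() can produce a marker.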
Patchset #4 (id:60001) has been deleted
https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:937: DCHECK(input_buffers_.empty()) << "Input buffers should be empty because" On 2017/01/19 07:39:16, Owen Lin wrote: > It looks like a comment instead of what we should print out. Code removed. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:975: FROM_HERE, base::Bind(&VaapiVideoDecodeAccelerator::DecodeTask, On 2017/01/20 02:20:06, Owen Lin wrote: > On 2017/01/19 08:26:10, kcwu wrote: > > On 2017/01/19 07:39:16, Owen Lin wrote: > > > Can we extract the common part of this function and Decode()? Maybe call it > > > QueueInput(bitstream_buffer) > > > > > > My idea is to reject 0 size bitstream_buffer in Decode, and Flush() calls > > > QueueInput with a empty bitstream_buffer. > > > > > > > Regarding to Flush() to use empty buffer, I don't want to overload the meaning > > of empty buffer. Use an explicit flag is easier to understand. > > Otherwise, empty buffer is invalid in some functions and means flush in some > > other functions. > > > > First, empty buffer is not invalid, we do accept empty buffer, but just return > them to the client immediately(see l.476). And I believe this will be the only > place different, in all other places, we can treat empty buffer as flush(). > > My point here is to extract the common code of Flush() and Decode() as this part > is really complicated and they are actually doing the same thing. Done. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.h (right): https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... 
media/gpu/vaapi_video_decode_accelerator.h:214: InputBuffer(); On 2017/01/19 07:39:16, Owen Lin wrote: > Remove both ctor and dtor? Done. https://codereview.chromium.org/2642623002/diff/40001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.h:222: std::unique_ptr<SharedMemoryRegion> shm; On 2017/01/20 02:20:06, Owen Lin wrote: > On 2017/01/19 08:26:10, kcwu wrote: > > On 2017/01/19 07:39:16, Owen Lin wrote: > > > Why not just use "shm == null" to indicate an empty buffer? > > > > Do you mean "shm = null" to assign a default value? If so, it's unnecessary. > > No, I mean we don't really need a new member dummy_flush but using shm == null > to indicate an empty buffer. But it's really up to you. Done.
lgtm % nits https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:466: DVLOG(4) << "Mapping new input buffer id: " << bitstream_buffer.id() s/Mapping/Queueing/ https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:764: base::SharedMemory::CloseHandle(bitstream_buffer.handle()); Nice, you caught a bug in the original code. https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:942: Please comment why we queue an empty buffer.
https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:466: DVLOG(4) << "Mapping new input buffer id: " << bitstream_buffer.id() On 2017/01/23 05:49:59, Owen Lin wrote: > s/Mapping/Queueing/ Done. https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:764: base::SharedMemory::CloseHandle(bitstream_buffer.handle()); On 2017/01/23 05:49:59, Owen Lin wrote: > Nice, you catch a bug in the original code. Actually no, it was guarded by a smart pointer in MapAndQueueNewInputBuffer. https://codereview.chromium.org/2642623002/diff/80001/media/gpu/vaapi_video_d... media/gpu/vaapi_video_decode_accelerator.cc:942: On 2017/01/23 05:49:59, Owen Lin wrote: > Comment why we queue an empty buffer. Done.
Have confirmed the latest patch with: 1. fix for b/34317221 2. cts: android.media.cts.AdaptivePlaybackTest 3. autotest: video_VideoSeek.h264, video_VideoSeek.h264.switchres 4. gts: GtsExoPlayerTestCases com.google.android.exoplayer.gts.DashTest Thanks
https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... media/gpu/vaapi_video_decode_accelerator.cc:487: ++num_stream_bufs_at_decoder_; I think we should only increment for non-flush buffers, it's used only for trace counters. https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... media/gpu/vaapi_video_decode_accelerator.cc:1013: if (!input_buffer->is_flush()) Perhaps we could have this in destructor of InputBuffer to be done automatically? https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... File media/gpu/vaapi_video_decode_accelerator.h (right): https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... media/gpu/vaapi_video_decode_accelerator.h:214: int32_t id = 0; I think we start from 0 normally, so perhaps -1? (https://cs.chromium.org/chromium/src/media/filters/gpu_video_decoder.cc?rcl=0...)
https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... media/gpu/vaapi_video_decode_accelerator.cc:487: ++num_stream_bufs_at_decoder_; On 2017/01/24 07:10:33, Pawel Osciak wrote: > I think we should only increment for non-flush buffers, it's used only for trace > counters. Done. https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... media/gpu/vaapi_video_decode_accelerator.cc:1013: if (!input_buffer->is_flush()) On 2017/01/24 07:10:33, Pawel Osciak wrote: > Perhaps we could have this in destructor of InputBuffer to be done > automatically? If we did that, InputBuffer would need to keep a reference to either the VDA or to (task_runner_ and client_), and we would need to deal with the destruction order of the VDA, client_, and/or task_runner_. I'd prefer not to do it in this CL. https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... File media/gpu/vaapi_video_decode_accelerator.h (right): https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... media/gpu/vaapi_video_decode_accelerator.h:214: int32_t id = 0; On 2017/01/24 07:10:33, Pawel Osciak wrote: > I think we start from 0 normally, so perhaps -1? > > (https://cs.chromium.org/chromium/src/media/filters/gpu_video_decoder.cc?rcl=0...) Done.
lgtm https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... File media/gpu/vaapi_video_decode_accelerator.cc (right): https://codereview.chromium.org/2642623002/diff/100001/media/gpu/vaapi_video_... media/gpu/vaapi_video_decode_accelerator.cc:1013: if (!input_buffer->is_flush()) On 2017/01/24 07:54:23, kcwu wrote: > On 2017/01/24 07:10:33, Pawel Osciak wrote: > > Perhaps we could have this in destructor of InputBuffer to be done > > automatically? > > If doing so, InputBuffer would need to keep reference of either VDA or > (task_runner_ and client_). And need to deal with dtor timing of VDA, client_, > and/or task_runner_. > > I'd prefer not in this CL. Acknowledged.
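The destructor-based cleanup Pawel proposed (and kcwu deferred) could look roughly like the sketch below; note the buffer must then own a callback capturing the VDA, client, or task runner, which is exactly the lifetime concern kcwu raises (all names here are hypothetical):

```cpp
#include <cassert>
#include <functional>

// Hypothetical RAII sketch: the buffer notifies on destruction, e.g. to
// decrement a trace counter or post NotifyEndOfBitstreamBuffer(). The
// callback captures whatever it needs, which ties the buffer's lifetime
// to the captured objects -- the destruction-order concern noted above.
class NotifyingInputBuffer {
 public:
  explicit NotifyingInputBuffer(std::function<void()> on_destroyed)
      : on_destroyed_(std::move(on_destroyed)) {}
  ~NotifyingInputBuffer() {
    if (on_destroyed_) on_destroyed_();
  }

 private:
  std::function<void()> on_destroyed_;
};
```

In Chromium the callback would typically be bound to a task runner so the notification happens on the right thread, which is why the change was deferred rather than folded into this CL.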
The CQ bit was checked by kcwu@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from owenlin@chromium.org Link to the patchset: https://codereview.chromium.org/2642623002/#ps120001 (title: "address Pawel's comments")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: linux_chromium_chromeos_ozone_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by kcwu@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
Owen, FYI, ctor and dtor are added back due to style check.
The CQ bit was checked by kcwu@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from owenlin@chromium.org, posciak@chromium.org Link to the patchset: https://codereview.chromium.org/2642623002/#ps140001 (title: "fix compile due to style check")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 140001, "attempt_start_ts": 1485319663752970, "parent_rev": "cd9754daf31d0b352130ba49e9ab463d42370297", "commit_rev": "e87f2ba577b96a81dd994ff0cda70ec2c5fef332"}
Message was sent while issue was closed.
Description was changed to append the Review-Url, Cr-Commit-Position and Committed lines after landing; the rest of the description is unchanged.
Message was sent while issue was closed.
Committed patchset #7 (id:140001) as https://chromium.googlesource.com/chromium/src/+/e87f2ba577b96a81dd994ff0cda7...