Created: 4 years, 6 months ago by Mikhail
Modified: 4 years ago
Reviewers: tommi (sloooow) - chröme, sandersd (OOO until July 31), Rick Byers, hongchan, miu, haraken, jameswest, kmackay, Raymond Toy, DaleCurtis
CC: blink-reviews, blink-reviews-api_chromium.org, chcunningham, chromium-reviews, darin-cc_chromium.org, dglazkov+blink, feature-media-reviews_chromium.org, hongchan, jam, miu+watch_chromium.org, mkwst+moarreviews-renderer_chromium.org, mlamouri+watch-content_chromium.org, posciak+watch_chromium.org
Base URL: https://chromium.googlesource.com/chromium/src.git@master
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description
Implementation of 'AudioContext.getOutputTimestamp' method
The newly added 'AudioContext.getOutputTimestamp' method
helps to bind audio context time and performance time values.
This CL contains generic code which provides the audio output
timestamp from the content layer to 'AudioContext' JS bindings.
Link to specification:
https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestamp-AudioTimestamp
Intent to ship: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fEWdrU4C-3Y
BUG=619533
Committed: https://crrev.com/851c52f392dde2c5611453142c737839b65fe1ab
Cr-Commit-Position: refs/heads/master@{#439553}
Patch Set 1 : Implementation of 'AudioContext.getOutputTimestamp' method
Patch Set 2 : Implementation of 'AudioContext.getOutputTimestamp' method
Total comments: 24
Patch Set 3 : review comments + more tests fixes + 'Packet' structure
Total comments: 1
Patch Set 4 : Added a layout test. Moved timestamp update to 'handlePreRenderTasks()'
Patch Set 5 : Fixed suspend/resume bug. Moved 'getOutputTimestamp' implementation to 'AudioContext'. Manual test.
Patch Set 6 : Added implementation for ALSA.
Total comments: 5
Patch Set 7 : New implementation
Patch Set 8 : rebased 'global-interface-listing-expected' layout test results
Patch Set 9 : rebased 'global-interface-listing-expected' layout test results
Total comments: 20
Patch Set 10 : Comments from Raymond and miu@
Total comments: 5
Patch Set 11 : Comments from miu@
Total comments: 2
Patch Set 12 : Comment from Raymond
Patch Set 13 : Improved manual test. Added layout test. More checks.
Total comments: 1
Patch Set 14 : Comments from Raymond
Patch Set 15 : global-interface-listing rebaseline
Patch Set 16 : Rebased.
Patch Set 17 : rebased

Messages
Total messages: 151 (77 generated)
Description was changed from ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the platform layer to 'AudioContext' JS bindings and also implementation of platform calls on Windows. The 'AudioContext.getOutputTimestamp' is implemented behind 'AudioTimestamp' Runtime Enabled Feature in Blink. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... BUG= ========== to ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the platform layer to 'AudioContext' JS bindings and also implementation of platform calls on Windows. The 'AudioContext.getOutputTimestamp' is implemented behind 'AudioTimestamp' Runtime Enabled Feature in Blink. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... BUG=619533 ==========
mikhail.pozdnyakov@intel.com changed reviewers: + hongchan@chromium.org, rtoy@chromium.org
Patchset #1 (id:1) has been deleted
Patchset #1 (id:20001) has been deleted
Patchset #1 (id:30001) has been deleted
Can you rebase your CL before I review it? Looks like almost all bots are failing to apply your patch.
Patchset #3 (id:90001) has been deleted
Patchset #2 (id:70001) has been deleted
On 2016/06/13 16:04:25, Raymond Toy wrote: > Can you rebase your CL before I review it? Looks like almost all bots are > failing to apply your patch. Some bots are red as the tests' mock objects need to be updated according to the media/audio API changes; I am fixing this atm. However, could you pls take a look at the patch and give your opinion on the intended implementation approach in general?
An initial review. You'll need someone else to do a real review of the media/ code. https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... File media/audio/audio_output_controller.cc (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... media/audio/audio_output_controller.cc:176: sync_reader_->UpdatePendingBytes(0, 0, {0, 0}); Will this produce the right sized structure for AudioTimeStamp? I think those 0's are ints, not int64's. https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... File media/audio/audio_output_resampler.cc (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... media/audio/audio_output_resampler.cc:76: // Information about last recodred stream output position. Typo: "recodred" -> "recorded" https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_... File media/audio/win/audio_low_latency_output_win.cc (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_... media/audio/win/audio_low_latency_output_win.cc:523: UINT64 qpc_position = 0; What does "qpc" stand for? https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_... File media/audio/win/audio_low_latency_output_win.h (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_... media/audio/win/audio_low_latency_output_win.h:242: double hns_units_to_perf_count_; Add comment on what this really is. I can't tell from the name. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AbstractAudioContext.cpp (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AbstractAudioContext.cpp:569: LocalDOMWindow* window = context ? context->executingWindow() : nullptr; Why not just early return with 0.0 if the context doesn't exist? 
Similarly if the window doesn't exist. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp:51: MutexLocker locker(m_mutex); Can this be blocked by the main thread? Generally, we use tryLocks to make sure we don't block the audio thread. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:151: m_outputTimestamp = timestamp; Does this need to be protected by the mutex? https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/pub... File third_party/WebKit/public/platform/WebAudioDevice.h (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/pub... third_party/WebKit/public/platform/WebAudioDevice.h:37: // Representaion of device audio stream position. Typo: "Representaion" -> "Representation"
https://codereview.chromium.org/2060833002/diff/110001/content/renderer/media... File content/renderer/media/renderer_webaudiodevice_impl.cc (right): https://codereview.chromium.org/2060833002/diff/110001/content/renderer/media... content/renderer/media/renderer_webaudiodevice_impl.cc:128: WebAudioTimestamp web_timestamp(static_cast<size_t>(timestamp.frames), Perhaps web_audio_timestamp is better? https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_devi... File media/audio/audio_device_thread.h (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_devi... media/audio/audio_device_thread.h:56: // Called whenever we receive notifications about pending input data. Let's expand the comment to reflect the change of the timestamp. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp:51: MutexLocker locker(m_mutex); On 2016/06/14 16:42:40, Raymond Toy wrote: > Can this be blocked by the main thread? Generally, we use tryLocks to make sure > we don't block the audio thread. Consider to move this operation to context's pre-render task. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp:116: MutexLocker locker(m_mutex); This will block the audio thread when the mutex is enabled. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.h (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.h:95: // FIXME(Mikhail) : when allowed use std::atomic<WebAudioTimestamp> instead of mutex. 
FIXME -> TODO.
Patchset #3 (id:130001) has been deleted
Thanks for your comments! https://codereview.chromium.org/2060833002/diff/110001/content/renderer/media... File content/renderer/media/renderer_webaudiodevice_impl.cc (right): https://codereview.chromium.org/2060833002/diff/110001/content/renderer/media... content/renderer/media/renderer_webaudiodevice_impl.cc:128: WebAudioTimestamp web_timestamp(static_cast<size_t>(timestamp.frames), On 2016/06/14 17:10:38, hoch wrote: > Perhaps web_audio_timestamp is better? Done. https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_devi... File media/audio/audio_device_thread.h (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_devi... media/audio/audio_device_thread.h:56: // Called whenever we receive notifications about pending input data. On 2016/06/14 17:10:38, hoch wrote: > Let's expand the comment to reflect the change of the timestamp. Done. https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... File media/audio/audio_output_controller.cc (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... media/audio/audio_output_controller.cc:176: sync_reader_->UpdatePendingBytes(0, 0, {0, 0}); On 2016/06/14 16:42:40, Raymond Toy wrote: > Will this produce the right sized structure for AudioTimeStamp? I think those > 0's are ints, not int64's. Those are converted to int64 to construct an AudioTimestamp instance, though I think it's better to use 'AudioTimestamp()', as that would improve readability. https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... File media/audio/audio_output_resampler.cc (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... media/audio/audio_output_resampler.cc:76: // Information about last recodred stream output position. On 2016/06/14 16:42:40, Raymond Toy wrote: > Typo: "recodred" -> "recorded" Done. https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_...
File media/audio/win/audio_low_latency_output_win.cc (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_... media/audio/win/audio_low_latency_output_win.cc:523: UINT64 qpc_position = 0; On 2016/06/14 16:42:40, Raymond Toy wrote: > What does "qpc" stand for? that is 'QueryPerformanceCounter', renaming to 'performance_counter_position' https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_... File media/audio/win/audio_low_latency_output_win.h (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/win/audio_... media/audio/win/audio_low_latency_output_win.h:242: double hns_units_to_perf_count_; On 2016/06/14 16:42:40, Raymond Toy wrote: > Add comment on what this really is. I can't tell from the name. Done. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AbstractAudioContext.cpp (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AbstractAudioContext.cpp:569: LocalDOMWindow* window = context ? context->executingWindow() : nullptr; On 2016/06/14 16:42:40, Raymond Toy wrote: > Why not just early return with 0.0 if the context doesn't exist? Similarly if > the window doesn't exist. Done. https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.h (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.h:95: // FIXME(Mikhail) : when allowed use std::atomic<WebAudioTimestamp> instead of mutex. On 2016/06/14 17:10:38, hoch wrote: > FIXME -> TODO. I've changed the approach so that it is using 'lock' and 'tryLock' from context https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... 
File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:151: m_outputTimestamp = timestamp; On 2016/06/14 16:42:40, Raymond Toy wrote: > Does this need to be protected by the mutex? it looks like here it is always accessed on the same thread https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/pub... File third_party/WebKit/public/platform/WebAudioDevice.h (right): https://codereview.chromium.org/2060833002/diff/110001/third_party/WebKit/pub... third_party/WebKit/public/platform/WebAudioDevice.h:37: // Representaion of device audio stream position. On 2016/06/14 16:42:41, Raymond Toy wrote: > Typo: "Representaion" -> "Representation" Done.
https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... File media/audio/audio_output_controller.cc (right): https://codereview.chromium.org/2060833002/diff/110001/media/audio/audio_outp... media/audio/audio_output_controller.cc:176: sync_reader_->UpdatePendingBytes(0, 0, {0, 0}); On 2016/06/17 09:36:57, Mikhail wrote: > On 2016/06/14 16:42:40, Raymond Toy wrote: > > Will this produce the right sized structure for AudioTimeStamp? I think those > > 0's are ints, not int64's. > those are converted to int64 to construct an AudioTimestamp instance, though > think it's better to use 'AudioTimestamp()', whereas that would improve > readability. Yes AudioTimestamp() would be much nicer!
Do you have a test for this? I think some kind of test is important for this. Even a ManualTest would be good. Just would like to see some test that indicates things are working as expected.
I believe the experimental flag goes with 'a brief description'. Where can I find one for this change? https://codereview.chromium.org/2060833002/diff/150001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp (right): https://codereview.chromium.org/2060833002/diff/150001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp:79: context()->setWebAudioTimestamp(outputTimestamp); I prefer to have this inside of handlePreRenderTasks(). This is a part of pre-render task.
On 2016/06/17 20:56:35, Raymond Toy wrote: > Do you have a test for this? I think some kind of test is important for this. > Even a ManualTest would be good. Just would like to see some test that > indicates things are working as expected. Added a layout test; at this stage I did not dare to make a virtual test suite with the 'AudioTimestamp' feature enabled, as it would work only on Windows.
On 2016/06/20 16:05:22, hoch wrote: > I believe the experimental flag goes with 'a brief description'. Where can I > find one for this change? I've failed to find description for the existing features in the code base, what would be an appropriate place to add a description for the 'AudioTimestamp' feature? > > https://codereview.chromium.org/2060833002/diff/150001/third_party/WebKit/Sou... > File third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp > (right): > > https://codereview.chromium.org/2060833002/diff/150001/third_party/WebKit/Sou... > third_party/WebKit/Source/modules/webaudio/AudioDestinationNode.cpp:79: > context()->setWebAudioTimestamp(outputTimestamp); > I prefer to have this inside of handlePreRenderTasks(). This is a part of > pre-render task. Done.
Could you pls have a look?
On 2016/06/28 10:10:16, Mikhail wrote: > Could you pls have a look? I'm trying to run this locally. I have forgotten how to enable this feature. Can you help me? Also, did you only implement this for Windows? Is this true? If so, I think you need to implement this for all other platforms, as you stated in the intent to implement and ship.
Patchset #5 (id:190001) has been deleted
On 2016/06/29 20:57:10, Raymond Toy wrote: > On 2016/06/28 10:10:16, Mikhail wrote: > > Could you pls have a look? > > I'm trying to run this locally. I have forgotten how to enable this feature. > Can you help me? Pls use 'out/my_build/content_shell(or chrome) --enable-blink-features=AudioTimestamp ./third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html' > > Also, did you only implement this for Windows? Is this true? If so, I think you > need to implement this for all other platforms, as you stated in the intent to > implement and ship. I intended first to upstream the generic code which is the crucial and most complex part of the implementation and then add platform-specific parts with the following CLs. Do you think it would be better to include implementation for all the platforms to this CL?
On 2016/07/03 18:45:14, Mikhail wrote: > On 2016/06/29 20:57:10, Raymond Toy wrote: > > On 2016/06/28 10:10:16, Mikhail wrote: > > > Could you pls have a look? > > > > I'm trying to run this locally. I have forgotten how to enable this feature. > > Can you help me? > > Pls use 'out/my_build/content_shell(or chrome) > --enable-blink-features=AudioTimestamp > ./third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html' First, sorry for the delay! > > > > > Also, did you only implement this for Windows? Is this true? If so, I think > you > > need to implement this for all other platforms, as you stated in the intent to > > implement and ship. > > I intended first to upstream the generic code which is the crucial and most > complex part of the implementation and then add platform-specific parts with the > following CLs. Do you think it would be better to include implementation for all > the platforms to this CL? Are the changes for the platform-specific parts huge? If not, I'd prefer it all here (for testing), but I will defer to your judgement on what's appropriate. I'd just like to be able to test this, and Windows is not my primary platform.
On 2016/07/07 17:09:51, Raymond Toy wrote: > > Are the changes for the platform-specific parts huge? If not, I'd prefer it all > here > (for testing), but I will defer to your judgement on what's appropriate. I'd > just > like to be able to test this, and Windows is not my primary platform. I think they are quite significant and each platform enabling would deserve a separate review track (i.e. a separate CL). Nevertheless I will include ALSA support into this CL so that the introduced functionality can be testable also on desktop Linux.
On 2016/07/11 15:14:27, Mikhail wrote: > On 2016/07/07 17:09:51, Raymond Toy wrote: > > > > Are the changes for the platform-specific parts huge? If not, I'd prefer it > all > > here > > (for testing), but I will defer to your judgement on what's appropriate. I'd > > just > > like to be able to test this, and Windows is not my primary platform. > I think they are quite significant and each platform enabling would deserve a > separate review track (i.e. a separate CL). > Nevertheless I will include ALSA support into this CL so that the introduced > functionality can be testable also on desktop Linux. Up to you. I do use win when needed, so it's not a major problem.
On 2016/07/11 15:17:12, Raymond Toy wrote: > > Up to you. I do use win when needed, so it's not a major problem. I was hoping to use the snd_pcm_status_get_audio* API, which is unfortunately not accessible as Chromium is using ALSA 1.0.25, which does not include it (is it feasible to switch to a newer ALSA version?). I'll try to obtain the output position by subtracting queued frames from written ones. Could we meanwhile progress with reviewing the current patch?
Yeah that's fine with me. I'm out for the next couple of days but I'll get on it as soon as I'm back. On Jul 12, 2016 7:26 AM, <mikhail.pozdnyakov@intel.com> wrote: > On 2016/07/11 15:17:12, Raymond Toy wrote: > > > > Up to you. I do use win when needed, so it's not a major problem. > > I was hoping to use snd_pcm_status_get_audio* API which is unfortunately > not > accessible as Chromium is using ALSA 1.0.25 that does not include it (is it > feasible to switch to a newer ALSA version?). I'll try to obtain output > position > substracting queued frames from written ones. > > Could we in meanwhile progress with the current patch reviewing? > > https://codereview.chromium.org/2060833002/
mikhail.pozdnyakov@intel.com changed reviewers: + sandersd@chromium.org, tommi@chromium.org
CC'ing tommi@chromium.org and sandersd@chromium.org for review of the media/ code.
sandersd@chromium.org changed reviewers: + chcunningham@chromium.org
To chcunningham@ for media/.
dalecurtis@chromium.org changed reviewers: + dalecurtis@chromium.org, kmackay@chromium.org, miu@chromium.org
This and https://codereview.chromium.org/2101303004 need to be reconciled.
Also seems to be missing implementations for mac/android?
https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h File media/audio/audio_io.h (right): https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h... media/audio/audio_io.h:54: int64_t frames; Why do we need this? Client code could track this by keeping a running-sum of AudioBus::frames(), if it's needed for a particular use case. https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h... media/audio/audio_io.h:55: int64_t ticks; // Obtained from base::TimeTicks::ToInternalValue(). Don't use ToInternalValue(). As the header comments in base/time/time.h explain, this is only for wire serialization. Client code should never attempt to interpret the meaning of the TimeTicks's internal value. naming nit: This should probably be called |reference_time| for clarity. https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h... media/audio/audio_io.h:82: uint32_t frames_skipped, This is in conflict with this change: https://codereview.chromium.org/2101303004/ You guys need to chat and come to a consensus here. ;-) (I've also stated some of my opinions on what this API should become, in that code review.)
jameswest@google.com changed reviewers: + jameswest@google.com
https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h File media/audio/audio_io.h (right): https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h... media/audio/audio_io.h:82: uint32_t frames_skipped, We're in favor of changing the API to the following, which was suggested by Yuri: virtual int OnMoreData( base::TimeTicks target_playout_time, int prior_frames_skipped, AudioBus* dest); To get |target_playout_time|, you would convert |total_bytes_delay| to time duration and add it to base::TimeTicks::Now(). Does that work for you?
Catching up on code reviews. Sorry for the delay. However, it's not clear whether we are attempting to land this change or https://codereview.chromium.org/2101303004/ (by jameswest@). Which one?
On 2016/07/20 04:19:18, James West wrote: > https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h > File media/audio/audio_io.h (right): > > https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h... > media/audio/audio_io.h:82: uint32_t frames_skipped, > We're in favor of changing the API to the following, which was suggested by > Yuri: > > virtual int OnMoreData( > base::TimeTicks target_playout_time, > int prior_frames_skipped, > AudioBus* dest); > > To get |target_playout_time|, you would convert |total_bytes_delay| to time > duration and add it to base::TimeTicks::Now(). > > Does that work for you? Sorry for the late response (I was OOO). The intention of this CL is to bring a pair of bound values (stream output position in frames and a corresponding system timestamp) obtained from the hardware with the maximum accuracy (e.g. for WASAPI this means propagation of IAudioClock::GetPosition method results). Could you elaborate more how I could get such pair using the API that you described?
On 2016/08/05 12:05:22, Mikhail wrote: > On 2016/07/20 04:19:18, James West wrote: > > https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h > > File media/audio/audio_io.h (right): > > > > > https://codereview.chromium.org/2060833002/diff/230001/media/audio/audio_io.h... > > media/audio/audio_io.h:82: uint32_t frames_skipped, > > We're in favor of changing the API to the following, which was suggested by > > Yuri: > > > > virtual int OnMoreData( > > base::TimeTicks target_playout_time, > > int prior_frames_skipped, > > AudioBus* dest); > > > > To get |target_playout_time|, you would convert |total_bytes_delay| to time > > duration and add it to base::TimeTicks::Now(). > > > > Does that work for you? > > Sorry for the late response (I was OOO). > > The intention of this CL is to bring a pair of bound values (stream output > position in frames and a corresponding system timestamp) obtained from the > hardware with the maximum accuracy (e.g. for WASAPI this means propagation of > IAudioClock::GetPosition method results). Could you elaborate more how I could > get such pair using the API that you described? I'm not sure our change is really related to this one. We're interested in getting the output delay, but it sounds like you're trying to get the total number of frames already output.
On 2016/08/06 02:13:18, James West wrote: > > I'm not sure our change is really related to this one. We're interested in > getting the output delay, but it sounds like you're trying to get the total > number of frames already output. Would it be possible/convenient to calculate the delay from the currently audible position? Or maybe our changes can co-exist?
On 2016/08/05 12:05:22, Mikhail wrote: > The intention of this CL is to bring a pair of bound values (stream output > position in frames and a corresponding system timestamp) obtained from the > hardware with the maximum accuracy (e.g. for WASAPI this means propagation of > IAudioClock::GetPosition method results). Could you elaborate more how I could > get such pair using the API that you described? The renderers of the audio (i.e., the implementations of OnMoreData()) are not designed to be constrained by such timing information. Furthermore, the very idea conflicts with the architecture of the audio stack. The rate at which OnMoreData() is being called will always be different from the rate at which the audio signal is being generated from the source. It's up to an implementation of OnMoreData() to account for this and modify the audio signal to conceal the timing skew and produce a "continuous-sounding" audio signal (e.g., by using time stretching, error/gap concealment, etc.). In the WebAudio case, for example, the graph nodes will be generating an audio signal at the destination node at a rate different than the system's audio hardware consumes it. Maybe we should take a step back and meet on VC to discuss how to integrate the Chrome audio stack with WebAudio w.r.t. these timing concepts?
On 2016/08/11 23:36:15, miu wrote: > On 2016/08/05 12:05:22, Mikhail wrote: > > The intention of this CL is to bring a pair of bound values (stream output > > position in frames and a corresponding system timestamp) obtained from the > > hardware with the maximum accuracy (e.g. for WASAPI this means propagation of > > IAudioClock::GetPosition method results). Could you elaborate more how I could > > get such pair using the API that you described? > > The renderers of the audio (i.e., the implementations of OnMoreData()) are not > designed to be constrained by such timing information. Furthermore, the very > idea conflicts with the architecture of the audio stack. The rate at which > OnMoreData() is being called will always be different from the rate at which the > audio signal is being generated from the source. It's up to an implementation of > OnMoreData() to account for this and modify the audio signal to conceal the > timing skew and produce a "continuous-sounding" audio signal (e.g., by using > time stretching, error/gap concealment, etc.). > I would like to clarify the intended approach a bit more: by the "accuracy" I mean the accuracy of how well the timestamp values *within* an 'AudioTimestamp' instance correspond to each other (so that the client can e.g. approximate performance timestamp value for a slightly different audio stream position), and it's OK if the 'AudioTimestamp' instance itself is returned a bit later than it was initialized from the device, on the other hand values returned from 'AudioContext.getOutputTimestamp()' should be updated in sync with 'AudioContext.currentTime' values update (which happens after OnMoreData() is called). So the rate at which OnMoreData() is being called does not have to fit the rate at which the audio signal is being generated from the source, OnMoreData() would just provide the latest known pair of values obtained from the device. 
> > Maybe we should take a step back and meet on VC to discuss how to integrate the > Chrome audio stack with WebAudio w.r.t. these timing concepts? Sure! If there are better ways to implement the required functionality I'd be happy to use those.
On 2016/08/12 13:10:46, Mikhail wrote: > On 2016/08/11 23:36:15, miu wrote: > > On 2016/08/05 12:05:22, Mikhail wrote: > > > The intention of this CL is to bring a pair of bound values (stream output > > > position in frames and a corresponding system timestamp) obtained from the > > > hardware with the maximum accuracy (e.g. for WASAPI this means propagation > of > > > IAudioClock::GetPosition method results). Could you elaborate more how I > could > > > get such pair using the API that you described? > > > > The renderers of the audio (i.e., the implementations of OnMoreData()) are not > > designed to be constrained by such timing information. Furthermore, the very > > idea conflicts with the architecture of the audio stack. The rate at which > > OnMoreData() is being called will always be different from the rate at which > the > > audio signal is being generated from the source. It's up to an implementation > of > > OnMoreData() to account for this and modify the audio signal to conceal the > > timing skew and produce a "continuous-sounding" audio signal (e.g., by using > > time stretching, error/gap concealment, etc.). > > > I would like to clarify the intended approach a bit more: by the "accuracy" I > mean the accuracy of how well the timestamp values *within* an 'AudioTimestamp' > instance correspond to each other (so that the client can e.g. approximate > performance timestamp value for a slightly different audio stream position), and > it's OK if the 'AudioTimestamp' instance itself is returned a bit later than it > was initialized from the device, on the other hand values returned from > 'AudioContext.getOutputTimestamp()' should be updated in sync with > 'AudioContext.currentTime' values update (which happens after OnMoreData() is > called). 
> > So the rate at which OnMoreData() is being called does not have to fit the rate > at which the audio signal is being generated from the source, OnMoreData() would > just provide the latest known pair of values obtained from the device. > > > > Maybe we should take a step back and meet on VC to discuss how to integrate > the > > Chrome audio stack with WebAudio w.r.t. these timing concepts? > Sure! If there are better ways to implement the required functionality I'd be > happy to use those. miu@ : how could we proceed with that?
On 2016/08/31 18:04:22, Mikhail wrote: > miu@ : how could we proceed with that? Hi. Sorry for the delay. I was OOO last week, and am finding it hard to catch up with everything even now. Thanks for being patient with me. :) I need to "refresh my memory" on this issue. So, I'm going to try to re-read this code review's comment thread, the spec that you linked in the change description, and also take a closer look at the blink code changes. I'll reply back by end-of-day Friday once I better understand your proposal.
On 2016/09/01 05:23:44, miu wrote: > On 2016/08/31 18:04:22, Mikhail wrote: > > miu@ : how could we proceed with that? > > Hi. Sorry for the delay. I was OOO last week, and am finding it hard to catch up > with everything even now. Thanks for being patient with me. :) > > I need to "refresh my memory" on this issue. So, I'm going to try to re-read > this code review's comment thread, the spec that you linked in the change > description, and also take a closer look at the blink code changes. I'll reply > back by end-of-day Friday once I better understand your proposal. Kindly asking for an update.
Okay. I reviewed the spec and all the Blink code changes, and I think I understand what you're trying to do here: to provide a sample frame count paired with the performance timestamp in the new WebAudio getOutputTimestamp() API. Let me explain how I think we should proceed: First, remember that there are a couple of changes in-flight to update the audio output render callback API (Chrome side), one of which is this one. The one in https://codereview.chromium.org/2101303004/ will replace the legacy "timing" fields with a "target playout timestamp" as a base::TimeTicks value, and this is plumbed throughout the code that runs in the browser process. That change solves half of your problem for you, so let's push the author to land that first. :-) What that change doesn't do, but you will need, is to continue plumbing the same API throughout the code for the render process (especially in content/renderer/media). In other words, you'll want to plumb the |target_playout_time| and |prior_frames_skipped| via the "Packet" passed through the shared memory (in media/audio/audio_device_thread.cc), then the AudioDeviceThread::Callback::Process() interface, and finally the AudioRendererSink::RenderCallback::Render() interface. By changing these interfaces, you'll ultimately have made the RendererWebAudioDeviceImpl::Render() method have the following signature: void RendererWebAudioDeviceImpl::Render(base::TimeTicks target_playout_time, int prior_frames_skipped, AudioBus* dest); From there, you just need to make your changes within Blink code, mostly the same as they are in Patch Set 6. 
The pure virtual render() method in third_party/WebKit/public/platform/WebAudioDevice.h should become:

virtual void render(const WebVector<float*>& sourceData,
                    const WebVector<float*>& destinationData,
                    size_t numberOfFrames,
                    double seconds,                  // NEW ARG
                    size_t priorFramesSkipped) = 0;  // NEW ARG

So, the 4th and 5th arguments would be new, and would be based on the |target_playout_time| and |prior_frames_skipped| that were passed into RendererWebAudioDeviceImpl::Render() (RWADI::Render() calls WebAudioDevice::render().) Now, all that's left is for the Blink implementation to keep a count of the number of frames elapsed. Something like:

in constructor:
{ m_framesElapsed = 0; }

in render() override:
{
  m_framesElapsed -= std::min(m_framesElapsed, priorFramesSkipped);
  ... set frame count and output timestamp in preRender() ...
  ... do rendering ...
  m_framesElapsed += numberOfFrames;
}

I've gone through a whole bunch of steps here, so please let me know if you'd like me to clarify any part of this. Otherwise, do you agree this approach will accomplish what you need?
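The bookkeeping sketched above can be written out as a compilable sketch; the class and member names here are illustrative, not the actual Blink members:

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative sketch of the frame-elapsed bookkeeping described in the
// comment above; names are hypothetical, not actual Blink code.
class ElapsedFrameCounter {
 public:
  // Called once per render quantum. Frames the platform reported as
  // skipped were never played out, so roll the counter back before
  // accounting for the newly rendered frames.
  void render(size_t numberOfFrames, size_t priorFramesSkipped) {
    m_framesElapsed -= std::min(m_framesElapsed, priorFramesSkipped);
    // ... rendering of |numberOfFrames| frames would happen here ...
    m_framesElapsed += numberOfFrames;
  }

  size_t framesElapsed() const { return m_framesElapsed; }

 private:
  size_t m_framesElapsed = 0;
};
```

The std::min guard mirrors the pseudocode above: it keeps the counter from underflowing if the platform ever reports more skipped frames than were counted.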
On 2016/09/07 22:03:31, miu wrote: > > I've gone through a whole bunch of steps here, so please let me know if you'd > like me to clarify any part of this. Otherwise, do you agree this approach will > accomplish what you need? Thanks a lot for your detailed proposal! I have a concern that https://codereview.chromium.org/2101303004/ provides slightly different timing data than this CL: it provides an estimation of the time at which the next provided sample is expected to be played (btw, does it actually correspond to a system time estimation for AudioContext.currentTime?), whereas 'AudioContext.getOutputTimestamp' should provide what is currently being played from the speakers (please compare changes https://codereview.chromium.org/2101303004/diff/120001/media/audio/win/audio_... and https://codereview.chromium.org/2060833002/diff/230001/media/audio/win/audio_...). Maybe it's possible to plumb |current output timestamp| and |delay frames| so that both use cases are covered?
On 2016/09/08 08:20:24, Mikhail wrote: > On 2016/09/07 22:03:31, miu wrote: > 'AudioContext.getOutputTimestamp' should provide what is being currently played from speakers > > maybe it's possible to plumb |current output timestamp| and |delay frames| so > that both use cases are covered? Ah! I see now. Yes, in order to get the current output timestamp, you'd have to subtract from the |target_playout_time|. However, shouldn't that always be base::TimeTicks::Now() (plus or minus immeasurable clock skew)? If we were to compute |current_output_timestamp| as |target_playout_time| minus |delay_frames/sample_rate|, then WebAudio would be reporting the "current output time at the speakers" back when the OS made the request to render more audio frames. That would tend to be a timestamp around 1-100 microseconds in the past. Thus, it seems like base::TimeTicks::Now() would always be the more-accurate "current output timestamp" that we'd want to provide.
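The subtraction being discussed can be sketched with a hypothetical helper (not code from either CL): the browser reports when the next rendered buffer is expected to play out, and the position currently audible at the speakers is that target minus the queued delay expressed in frames.

```cpp
#include <chrono>

using Seconds = std::chrono::duration<double>;

// Hypothetical helper illustrating the computation above:
// |target_playout_offset| is when the next rendered buffer is expected
// to play out; subtracting the queued delay (in frames) estimates what
// is currently audible at the speakers.
Seconds CurrentOutputOffset(Seconds target_playout_offset,
                            int delay_frames,
                            int sample_rate) {
  return target_playout_offset -
         Seconds(static_cast<double>(delay_frames) / sample_rate);
}
```

For example, 48000 queued frames at 48 kHz place the current output exactly one second behind the target playout time.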
On 2016/09/08 21:10:45, miu wrote: > On 2016/09/08 08:20:24, Mikhail wrote: > However, shouldn't that always be base::TimeTicks::Now() (plus or minus > immeasurable clock skew)? That said, I'm not against plumbing in the frame_delay alongside the target_playout_delay. There may be other good uses for knowing this information (e.g., perhaps it could be used to help the renderer measure propagation delay to the audio hardware to achieve better A/V sync). Feel free to work with James on his change if you want to proceed with this.
haraken@chromium.org changed reviewers: + haraken@chromium.org
https://codereview.chromium.org/2060833002/diff/230001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioTimestamp.idl (right): https://codereview.chromium.org/2060833002/diff/230001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioTimestamp.idl:6: Conditional=AudioTimestamp We no longer support [Conditional]. I think this will break compilation.
After https://codereview.chromium.org/2101303004/ has landed I'm going to split the implementation. The first part is at https://codereview.chromium.org/2437863004/ - it brings media/ changes, the following Blink changes will be put to the current CL.
The latest patch contains the updated implementation which is inspired by the comment from miu@ https://codereview.chromium.org/2060833002/#msg49 . Please take a look.
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
Description was changed from ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the platform layer to 'AudioContext' JS bindings and also implementation of platform calls on Windows. The 'AudioContext.getOutputTimestamp' is implemented behind 'AudioTimestamp' Runtime Enabled Feature in Blink. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... BUG=619533 ========== to ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the platform layer to 'AudioContext' JS bindings and also implementation of platform calls on Windows. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... BUG=619533 ==========
Description was changed from ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the platform layer to 'AudioContext' JS bindings and also implementation of platform calls on Windows. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... BUG=619533 ========== to ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the content layer to 'AudioContext' JS bindings. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... BUG=619533 ==========
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
On 2016/10/31 19:40:05, Mikhail wrote: > The latest patch contains the updated implementation which is inspired by the > comment from miu@ https://codereview.chromium.org/2060833002/#msg49 . Please > take a look. Raymond, hongchan@: what is your opinion of the new implementation? Thanks.
I'm intending to take a look too. :) I'm just swamped on other things. Will try to respond by EOD Thursday.
https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Man... File third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Man... third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html:33: and compare them to 'AudioContext.currentTime' and 'Performance.now()'. </p> Please give some indication of what the expected results should be. In 6 months, no one will remember what the output should be and thus won't know if this is working or not. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioTimestamp.idl (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioTimestamp.idl:5: dictionary AudioTimestamp { Add link to spec. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:106: m_outputPosition() { Don't you want to initialize this to something? It's possible for the context to be suspended at startup so if it gets queried, the output position will be random junk. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:222: if (m_callbackBufferSize > framesToProcess * 2) { What happens if framesToProcess <= m_callbackBufferSize <= 2*framesToProcess? (Basically 128 <= callbacksize <= 256)? Are we guaranteed to make progress? 
I don't quite understand why we don't make progress if size > 2*framesToProcess. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.h (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.h:88: size_t priorFramesSkipped) override; Can we document what these three parameters really mean?
https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... File content/renderer/media/renderer_webaudiodevice_impl.cc (right): https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... content/renderer/media/renderer_webaudiodevice_impl.cc:114: delay -= media::AudioTimestampHelper::FramesToTime(dest->frames(), Is this correct? I thought |delay| already did not include the samples in |dest|. If I'm mistaken: Is there a possibility of |delay| becoming negative here? Is that acceptable in downstream code?
Thank you for your comments! I'll upload the new patch shortly. https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... File content/renderer/media/renderer_webaudiodevice_impl.cc (right): https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... content/renderer/media/renderer_webaudiodevice_impl.cc:114: delay -= media::AudioTimestampHelper::FramesToTime(dest->frames(), On 2016/11/04 21:29:40, miu wrote: > Is this correct? I thought |delay| already did not include the samples in > |dest|. > > If I'm mistaken: Is there a possibility of |delay| becoming negative here? Is > that acceptable in downstream code? That comes from https://codereview.chromium.org/2437863004/diff/200001/media/audio/audio_outp... (line 313), so it repeats the behaviour that existed before. Therefore it should never become negative. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:106: m_outputPosition() { On 2016/11/04 17:51:40, Raymond Toy wrote: > Don't you want to initialize this to something? It's possible for the context > to be suspended at startup so if it gets queried, the output position will be > random junk. > This invokes the "default" constructor for the structure, so both position and timestamp are set to zero. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:222: if (m_callbackBufferSize > framesToProcess * 2) { On 2016/11/04 17:51:40, Raymond Toy wrote: > What happens if framesToProcess <= m_callbackBufferSize <= 2*framesToProcess?
> (Basically 128 <= callbacksize <= 256)? Are we guaranteed to make progress? I > don't quite understand why we don't make progress if size > 2*framesToProcess. The 'outputPosition' is initialized in the 'render' function call when the platform requires it, so if the platform buffer is long (for example 512 frames), 'AudioContext.currentTime' will progress 4 times (512/128) in a row with the same 'outputPosition' value. I put this logic here so that 'outputPosition' does not get stuck for too long. If the platform buffer is short, the 'render' function is called frequently enough, so there is no need to estimate the 'outputPosition' value in the middle.
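The estimation Mikhail describes between render() calls can be sketched as follows (hypothetical names; assumes playback progresses linearly between device-provided updates):

```cpp
// Hypothetical sketch of the linear estimation described above: between
// device updates, advance the output position by the wall-clock time
// elapsed since the position was last refreshed from the device.
struct OutputPosition {
  double position = 0;   // seconds of audio played out
  double timestamp = 0;  // performance time at which |position| was sampled
};

OutputPosition EstimateOutputPosition(const OutputPosition& last, double now) {
  OutputPosition estimated;
  estimated.timestamp = now;
  // Assumes the hardware clock and the system clock do not drift
  // noticeably over the span of one platform buffer.
  estimated.position = last.position + (now - last.timestamp);
  return estimated;
}
```

With a 512-frame platform buffer, this keeps the reported pair advancing across the four 128-frame render quanta instead of repeating the same snapshot.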
The timing logic is a complex problem. I could be right or wrong here, but I'll do my best to explain how I see things working: https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... File content/renderer/media/renderer_webaudiodevice_impl.cc (right): https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... content/renderer/media/renderer_webaudiodevice_impl.cc:114: delay -= media::AudioTimestampHelper::FramesToTime(dest->frames(), On 2016/11/07 18:28:03, Mikhail wrote: > On 2016/11/04 21:29:40, miu wrote: > > Is this correct? I thought |delay| already did not include the samples in > > |dest|. > > > > If I'm mistaken: Is there a possibility of |delay| becoming negative here? Is > > that acceptable in downstream code? > > that comes from > https://codereview.chromium.org/2437863004/diff/200001/media/audio/audio_outp... > (line 313), so that repeated the behaviour existed before. > Therefore it should never become negative. Right, but isn't the output position correct without the subtraction here? Meaning, WebAudio will render audio at the output position derived from |delay| and |delay_timestamp|, and this will be the actual output position (or very close to it) when the browser process *next* reads audio data from the sync socket. To elaborate further: There are 2 calls to AOC::OnMoreData() in the browser process. The first one takes |delay| and increments it by one AudioBus' worth of time, since it is the next OnMoreData() call that will be fetching the rendered data. Let's call this estimated next |delay| value delay_next. In this method, in the render process, the value for |delay| we have here is delay_next. WebAudio will render as if the output position were at delay_next. 
Then, back in the browser process when the *second* call to AOC::OnMoreData() is made, its |delay| argument should be equal (or very close) to delay_next, and that is the correct value for the audio being fetched by sync_reader_->Read(dest) at that time. Does this make sense? Or, maybe I have got it wrong? ;) https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:222: if (m_callbackBufferSize > framesToProcess * 2) { On 2016/11/07 18:28:03, Mikhail wrote: > On 2016/11/04 17:51:40, Raymond Toy wrote: > > What happens if framesToProcess <= m_callbackBufferSize <= 2*framesToProcess? > > (Basically 128 <= callbacksize <= 256)? Are we guaranteed to make progress? I > > don't quite understand why we don't make progress if size > 2*framesToProcess. > > the 'outputPosition' is initialized in 'render' function call when platform > requires it, so if platform buffer is long (for example 512 frames) the ' > AudioContext.currentTime' will progress for 4 times (512/128) in a row with the > same 'outputPosition' value, I put this logic here so that 'outputPosition' does > not stuck for too long. If the platform buffer is short, the 'render' function > is called frequently enough, so there is no need to estimate 'outputPosition' > value in the middle. OOC, why not unconditionally advance outputPosition.position by the number of frames consumed from |m_inputFifo| after each m_callback.render() call? In other words, it seems you should make sure the value for |outputPosition| is always consistent with the first sample about to be pulled out of |m_inputFifo|. So, it seems the diff for this entire method should be: void AudioDestination::provideInput(AudioBus* bus, size_t framesToProcess) { ...same as original... 
- m_callback.render(sourceBus, bus, framesToProcess);
+ m_callback.render(sourceBus, bus, framesToProcess, m_outputPosition);
+ // Finer-granularity update to |m_outputPosition| to reflect the number of
+ // frames consumed from |m_inputFifo| above.
+ m_outputPosition.position += framesToProcess / static_cast<double>(m_sampleRate);
}

Then, you don't even need the m_outputPositionReceivedTimestamp/TimeTicks::Now() hack. ;-)
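miu@'s alternative, advancing the position by frames consumed rather than by wall-clock time, can be sketched as follows (illustrative names and an assumed sample rate; not the actual Blink members):

```cpp
#include <cstddef>

// Illustrative sketch of the frame-based advance suggested above: after
// each render quantum, move the output position forward by the frames
// just consumed, keeping it consistent with the next sample to be
// pulled from the FIFO. Names are hypothetical.
class FifoPositionTracker {
 public:
  explicit FifoPositionTracker(double sampleRate) : m_sampleRate(sampleRate) {}

  // Called after each m_callback.render()-style call with the frames
  // consumed from the FIFO.
  void onFramesConsumed(size_t framesToProcess) {
    m_positionSeconds += framesToProcess / m_sampleRate;
  }

  double positionSeconds() const { return m_positionSeconds; }

 private:
  double m_sampleRate;
  double m_positionSeconds = 0;
};
```

The trade-off discussed in the thread: this avoids reading the system clock on the audio thread, at the cost of assuming no skew between the audio hardware clock and the system clock within one platform buffer.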
BTW--Your earlier change (https://codereview.chromium.org/2437863004/) got reverted shortly after landing. Looks like an issue with speech synthesis on Chrome OS. So, you'll want to figure out what happened there and re-land before landing this change.
https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:222: if (m_callbackBufferSize > framesToProcess * 2) { On 2016/11/07 18:28:03, Mikhail wrote: > On 2016/11/04 17:51:40, Raymond Toy wrote: > > What happens if framesToProcess <= m_callbackBufferSize <= 2*framesToProcess? > > (Basically 128 <= callbacksize <= 256)? Are we guaranteed to make progress? I > > don't quite understand why we don't make progress if size > 2*framesToProcess. > > the 'outputPosition' is initialized in 'render' function call when platform > requires it, so if platform buffer is long (for example 512 frames) the ' > AudioContext.currentTime' will progress for 4 times (512/128) in a row with the > same 'outputPosition' value, I put this logic here so that 'outputPosition' does > not stuck for too long. If the platform buffer is short, the 'render' function > is called frequently enough, so there is no need to estimate 'outputPosition' > value in the middle. Ah, that makes sense. (Note that a huge number of Android devices have a callback buffer size of 3800 or more.) Then in the line below, the delta is relatively small. For a simple graph, wouldn't TimeTicks::Now() have advanced just 10-100 microseconds? Is that ok?
Patchset #10 (id:310001) has been deleted
Thank you for your comments and sorry about the delay. https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... File content/renderer/media/renderer_webaudiodevice_impl.cc (right): https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... content/renderer/media/renderer_webaudiodevice_impl.cc:114: delay -= media::AudioTimestampHelper::FramesToTime(dest->frames(), On 2016/11/07 23:49:40, miu wrote: > On 2016/11/07 18:28:03, Mikhail wrote: > > On 2016/11/04 21:29:40, miu wrote: > > > Is this correct? I thought |delay| already did not include the samples in > > > |dest|. > > > > > > If I'm mistaken: Is there a possibility of |delay| becoming negative here? > Is > > > that acceptable in downstream code? > > > > that comes from > > > https://codereview.chromium.org/2437863004/diff/200001/media/audio/audio_outp... > > (line 313), so that repeated the behaviour existed before. > > Therefore it should never become negative. > > Right, but isn't the output position correct without the subtraction here? > Meaning, WebAudio will render audio at the output position derived from |delay| > and |delay_timestamp|, and this will be the actual output position (or very > close to it) when the browser process *next* reads audio data from the sync > socket. > You are right, this will be an estimation of the next buffer start position in the future; however, 'context.getOutputTimestamp()' should return actual values from the near past. In this case the estimation is more accurate (as we can fully rely on the platform API returning the same kind of data, i.e. IAudioClock::GetPosition from WASAPI). https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Man... File third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Man... 
third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html:33: and compare them to 'AudioContext.currentTime' and 'Performance.now()'. </p> On 2016/11/04 17:51:39, Raymond Toy wrote: > Please give some indication of what the expected results should be. In 6 months, > no one will remember what the output should be and thus won't know if this is > working or not. Done. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/AudioTimestamp.idl (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/AudioTimestamp.idl:5: dictionary AudioTimestamp { On 2016/11/04 17:51:40, Raymond Toy wrote: > Add link to spec. Done. https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:222: if (m_callbackBufferSize > framesToProcess * 2) { On 2016/11/08 20:47:01, Raymond Toy wrote: > > Ah, that makes sense. (Note that a huge number of Android devices have callback > buffer size of 3800 or more.) > > Then in the line below, the delta is then relatively small. For a simple graph, > wouldn't TimeTicks::Now() will have advanced just a few 10-100 micro seconds. > Is that ok? I think it is fine as long as linear estimation is correct. With the testing graph which contains only oscillator, the delta is about 0.000167 sec on my linux machine (which is quite powerful :) ). https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... 
third_party/WebKit/Source/platform/audio/AudioDestination.cpp:222: if (m_callbackBufferSize > framesToProcess * 2) { On 2016/11/07 23:49:40, miu wrote: > OOC, why not unconditionally advance outputPosition.position by the number of > frames consumed from |m_inputFifo| after each m_callback.render() call? In other > words, it seems you should make sure the value for |outputPosition| is always > consistent with the first sample about to be pulled out of |m_inputFifo|. So, it > seems the diff for this entire method should be: > > void AudioDestination::provideInput(AudioBus* bus, size_t framesToProcess) { > ...same as original... > > - m_callback.render(sourceBus, bus, framesToProcess); > + m_callback.render(sourceBus, bus, framesToProcess, m_outputPosition); > + // Finer-granularity update to |m_outputPosition| to reflect the number of > frames consumed from |m_inputFifo| above. > + m_outputPosition.position += framesToProcess / > static_cast<double>(m_sampleRate); > } > > Then, you don't even need the m_outputPositionReceivedTimestamp/TimeTicks::Now() > hack. ;-) I am more keen on accuracy: as far as I understand, the time at which a certain amount of frames is played back (1) is not equal to the time those frames are filled by the context (2), and I'm more interested in (2), so that the user can always make a reliable estimation of performance time from context time (or vice versa). https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.h (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.h:88: size_t priorFramesSkipped) override; On 2016/11/04 17:51:40, Raymond Toy wrote: > Can we document what these three parameters really mean? Done.
lgtm, but with a concern about the "output position fudging" logic in AudioDestination::provideInput(). I leave it up to you and the other reviewers to determine what's the right solution there. https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... File content/renderer/media/renderer_webaudiodevice_impl.cc (right): https://codereview.chromium.org/2060833002/diff/290001/content/renderer/media... content/renderer/media/renderer_webaudiodevice_impl.cc:114: delay -= media::AudioTimestampHelper::FramesToTime(dest->frames(), On 2016/11/28 15:15:59, Mikhail wrote: > On 2016/11/07 23:49:40, miu wrote: > > Right, but isn't the output position correct without the subtraction here? > > > You are right, this will be an estimation of the next buffer start position in > future, however 'context.getOutputPosition()' should return actual values from > the near past, in this case estimation is more accurate (as we can fully rely on > platform API returning same kind of data, i.e. IAudioClock::GetPosition from > WASAPI). Seems right. I went through and re-read the spec again and thought about how this information would be used by the page's JS. I think I see where I was getting confused here. Thanks for being patient with me. :) https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.cpp (right): https://codereview.chromium.org/2060833002/diff/290001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.cpp:222: if (m_callbackBufferSize > framesToProcess * 2) { On 2016/11/28 15:15:59, Mikhail wrote: > On 2016/11/07 23:49:40, miu wrote: > > OOC, why not unconditionally advance outputPosition.position by the number of > > frames consumed from |m_inputFifo| after each m_callback.render() call? In > > > > Then, you don't even need the > m_outputPositionReceivedTimestamp/TimeTicks::Now() > > hack. 
;-) > > I am more keen on accuracy, as far as I understand time of playback of certain > amount of frames (1) is not equal to the time those frames are fulfilled by the > context (2) and I'm more interested in (2) so that the user can always make a > reliable estimation of performance time from context time (or vice versa). Right, but consider: 1. The outputPosition will always be rewritten (i.e., made accurate again) very soon in AudioDestination::render(). 2. While working within the same platform buffer, it's reasonable to update the output position based on the number of frames since the skew between the audio hardware's clock and the system clock should be insignificant in such a short time period. 3. TimeTicks::Now() is not necessarily cheap or unbounded. For example, on Windows, a mutex is acquired (inside system libraries). My other suggestion would be to simply not update the output position here at all. Even with larger platform buffers, the semantics are still correct: This is still a "snapshot measurement" of what the context position and performance time were when audio was played out from the hardware in the recent past. https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.h (right): https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.h:86: double delay, // Output delay in seconds. suggestion: Just in case others get confused the way I did: Could you write a method comment like: "|delay| and |delayTimestamp| are high-precision measurements of the state of the system in the recent past. To be clear, |delay| does *not* represent the point-in-time at which the first rendered sample will be played out."
On 2016/11/29 20:51:04, miu wrote: > Right, but consider: > > 1. The outputPosition will always be rewritten (i.e., made accurate again) very > soon in AudioDestination::render(). > > 2. While working within the same platform buffer, it's reasonable to update the > output position based on the number of frames since the skew between the audio > hardware's clock and the system clock should be insignificant in such a short > time period. As far as I understand, the time period between two AudioDestination::provideInput() calls is not equal to the playback time of 128 frames; rather, it is the time taken by the audio graph to "produce" these frames, so it depends on the graph structure and on the underlying hardware. Therefore we should rely on the system clock. > > 3. TimeTicks::Now() is not necessarily cheap or unbounded. For example, on > Windows, a mutex is acquired (inside system libraries). > > My other suggestion would be to simply not update the output position here at > all. Even with larger platform buffers, the semantics are still correct: This is > still a "snapshot measurement" of what the context position and performance time > were when audio was played out from the hardware in the recent past. > > https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... > File third_party/WebKit/Source/platform/audio/AudioDestination.h (right): > > https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... > third_party/WebKit/Source/platform/audio/AudioDestination.h:86: double delay, > // Output delay in seconds. > suggestion: Just in case others get confused the way I did: Could you write a > method comment like: "|delay| and |delayTimestamp| are high-precision > measurements of the state of the system in the recent past. To be clear, |delay| > does *not* represent the point-in-time at which the first rendered sample will > be played out." Thanks for your comments and proposals! 
I would agree to just not update the output position until it is updated from the platform (it makes the code simpler, does not contradict the spec, and does not affect linear estimation accuracy); however, as Raymond mentioned at https://codereview.chromium.org/2060833002/#msg78, some devices use a 3800-byte buffer, which would mean around 30 'context.currentTime' updates with the same 'outputPosition' value. Do you think that is OK? Raymond, hongchan@, WDYT?
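The "around 30 updates" figure follows from WebAudio's 128-frame render quantum. A quick back-of-the-envelope check (treating the 3800 figure as a frame count, as the stats discussion later in the thread does):

```javascript
// WebAudio pulls audio from the graph in render quanta of 128 frames.
// If the platform consumes ~3800 frames per callback, the FIFO is
// refilled by roughly 30 render calls between platform timestamp
// updates, so ~30 consecutive currentTime values would share one
// outputPosition.
const renderQuantumFrames = 128;
const platformBufferFrames = 3800; // figure quoted in the review
const renderCallsPerBuffer = Math.ceil(platformBufferFrames / renderQuantumFrames);
// renderCallsPerBuffer is 30 (3800 / 128 = 29.6875, rounded up).
```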
On 2016/11/30 14:49:30, Mikhail wrote: > Thanks for your comments and proposals! I would agree to just not update the > output position until it is updated from the platform (makes the code simpler, > does not contradict to the spec, does not affect linear estimation accuracy), > however as Raymond mentioned at > https://codereview.chromium.org/2060833002/#msg78 some devices use 3800 bytes > buffer which would mean around 30 'context.currentTime' updates with the same > 'outputPosition' value. Do you think it is OK? Raymond, hongchan@ WDYT? In situations like this, I've usually committed without the optimization and then watched how the community uses the API in the Chrome Canary/Dev channel. Then, if the need materializes, you can always add it later as a "tweak." Also, I wonder if we should focus instead on fixing the "3800-byte buffer" issue? Such high-latency audio shouldn't be a requirement on any of Chrome's desktop or mobile platforms, IIRC. It could be due to a left-over "band-aid" someone added long ago and forgot about.
On 2016/11/30 20:36:47, miu wrote: > On 2016/11/30 14:49:30, Mikhail wrote: > > Thanks for your comments and proposals! I would agree to just not update the > > output position until it is updated from the platform (makes the code simpler, > > does not contradict to the spec, does not affect linear estimation accuracy), > > however as Raymond mentioned at > > https://codereview.chromium.org/2060833002/#msg78 some devices use 3800 bytes > > buffer which would mean around 30 'context.currentTime' updates with the same > > 'outputPosition' value. Do you think it is OK? Raymond, hongchan@ WDYT? > > In situations like this, I've usually committed without the optimization; and > then see how the community uses the API in the Chrome Canary/Dev channel. Then, > if the need materializes, you can always add it later as a "tweak." > > Also, I wonder if we should focus instead on fixing the "3800-byte buffer" > issue? Such high latency audio shouldn't be a requirement on any of Chrome's > desktop or mobile platforms, IIRC. It could be due to a left-over "band-aid" > someone added long ago and forgot about. Desktop isn't a problem. It's the many Android devices that don't support the low-latency audio path either because the OS is too old or because they just don't. I don't think there's much we can do about that. I've basically focused on our own Nexus devices because it's easy to talk to people who know and can fix things. I think all Nexus devices support the low-latency path, except, possibly the original Nexus 7 tablet. (And our current UMA stats show that there are many devices with even larger buffer sizes.)
On 2016/12/01 00:18:47, Raymond Toy wrote: > (And our current UMA stats show that there are many devices with even larger > buffer sizes.) Our discussion was so focused on bytes per buffer, but what really matters is the effective buffer duration (samples_per_buffer divided by sampling_rate). It's possible 3800 bytes isn't much if, for example, the audio is 96kHz 2-channel, 32 bits per sample; which by my calculations would be a duration of ~5ms. So, that's a reasonably-good low-latency buffer. (IIRC, on Windows, 96kHz is actually quite common.) Raymond: Do the UMA stats record duration or bytes per buffer or something else? If they record bytes, then there might not be a very widespread problem.
https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.h (right): https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.h:86: double delay, // Output delay in seconds. On 2016/11/29 20:51:04, miu wrote: > suggestion: Just in case others get confused the way I did: Could you write a > method comment like: "|delay| and |delayTimestamp| are high-precision > measurements of the state of the system in the recent past. To be clear, |delay| > does *not* represent the point-in-time at which the first rendered sample will > be played out." Didn't see this addressed in PS10 (and looks like the extra code comments should go in third_party/WebKit/public/platform/WebAudioDevice.h).
On 2016/12/01 05:31:26, miu wrote: > On 2016/12/01 00:18:47, Raymond Toy wrote: > > (And our current UMA stats show that there are many devices with even larger > > buffer sizes.) > > Our discussion was so focused on bytes per buffer, but what really matters is > the effective buffer duration (samples_per_buffer divided by sampling_rate). > It's possible 3800 bytes isn't much if, for example, the audio is 96kHz > 2-channel, 32 bits per sample; which by my calculations would be a duration of > ~5ms. So, that's a reasonably-good low-latency buffer. (IIRC, on Windows, 96kHz > is actually quite common.) > > Raymond: Do the UMA stats record duration or bytes per buffer or something else? > If they record bytes, then there might not be a very widespread problem. They are in frames (number of samples). 3800 frames at a more typical Android sample rate of 44100 Hz is 86 ms. And WebAudio double-buffers, so that's 160+ ms. I checked the stats again. Buffer sizes of 3840-3848 frames account for 40% of all Android devices. And a whopping 8% report 5768. For OSX, 99.98% report 256. For Windows, 70% report 480 (most probably because WASAPI is being used and we force 10 ms buffer sizes, since that's as fast as we can reliably go). Linux reports 98% at 512. (I thought we hardwired this to 512, so I don't know why it's not 100%.)
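The latency arithmetic in this exchange (duration = frames / sample rate) can be sketched as below. The 44.1 kHz and 48 kHz rates are the typical values assumed in the discussion, not measured ones.

```javascript
// Effective buffer latency in milliseconds from a frame count and a
// sample rate: duration_ms = frames / sampleRate * 1000.
function bufferDurationMs(frames, sampleRate) {
  return (frames / sampleRate) * 1000;
}
// 3800 frames at 44100 Hz ≈ 86 ms (the Android case above); with
// WebAudio's double-buffering that becomes ~172 ms.
// 480 frames at 48000 Hz = 10 ms (matches the Windows/WASAPI figure).
```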
content/renderer/media/ lgtm % question about timestamp representation. https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/pub... File third_party/WebKit/public/platform/WebAudioDevice.h (right): https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/pub... third_party/WebKit/public/platform/WebAudioDevice.h:47: double delayTimestamp, This is a rather strange way to represent timestamps, and results in bad-looking media code. Is this the standard representation of point-of-time in Blink?
https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... File third_party/WebKit/Source/platform/audio/AudioDestination.h (right): https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/Sou... third_party/WebKit/Source/platform/audio/AudioDestination.h:86: double delay, // Output delay in seconds. On 2016/11/29 20:51:04, miu wrote: > suggestion: Just in case others get confused the way I did: Could you write a > method comment like: "|delay| and |delayTimestamp| are high-precision > measurements of the state of the system in the recent past. To be clear, |delay| > does *not* represent the point-in-time at which the first rendered sample will > be played out." Done. Thanks for the suggestion! https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/pub... File third_party/WebKit/public/platform/WebAudioDevice.h (right): https://codereview.chromium.org/2060833002/diff/330001/third_party/WebKit/pub... third_party/WebKit/public/platform/WebAudioDevice.h:47: double delayTimestamp, On 2016/12/01 23:55:00, sandersd wrote: > This is a rather strange way to represent timestamps, and results in bad-looking > media code. Is this the standard representation of point-of-time in Blink? Yeah, timestamps in Blink are usually represented as 'double' (please see https://cs.chromium.org/chromium/src/third_party/WebKit/Source/wtf/CurrentTime.h )
Moved self to CC. sandersd@ has media covered
chcunningham@chromium.org changed reviewers: - chcunningham@chromium.org
PS11 totally lgtm! :)
lgtm for the WebAudio parts.
https://codereview.chromium.org/2060833002/diff/350001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right): https://codereview.chromium.org/2060833002/diff/350001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:827: AutoLocker locker(this); nit: Add a DCHECK that this can only be called from the main thread.
haraken@, could you please take a look at modules_idl_files.gni and also platform/ changes? https://codereview.chromium.org/2060833002/diff/350001/third_party/WebKit/Sou... File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right): https://codereview.chromium.org/2060833002/diff/350001/third_party/WebKit/Sou... third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:827: AutoLocker locker(this); On 2016/12/05 17:25:28, Raymond Toy wrote: > nit: Add a DCHECK that this can only be called from the main thread. Done.
LGTM
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Patchset #13 (id:390001) has been deleted
Dry run: Try jobs failed on following builders: cast_shell_android on master.tryserver.chromium.android (JOB_FAILED, https://build.chromium.org/p/tryserver.chromium.android/builders/cast_shell_a...)
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: ios-device on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-device/builds...) ios-device-xcode-clang on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-device-xcode-...) ios-simulator on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-simulator/bui...) ios-simulator-xcode-clang on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-simulator-xco...)
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_chromeos_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
Patchset #13 (id:410001) has been deleted
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_chromium_x64_rel_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_x64_...)
Patchset #13 (id:430001) has been deleted
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: cast_shell_linux on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/cast_shell_linu...)
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Patchset #13 (id:450001) has been deleted
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
rtoy@: the latest CL contains an improved manual test (it now better illustrates the actual synchronization between context time and performance time); also the 'audiocontext-getoutputtimestamp.html' layout test is added and the existing layout tests are modified. There is also a check that 'delay' cannot be negative. I think this requires another review round; could you please take a look? Notes: I found two issues with the CL applied; they are addressed in separate CLs: 1) Inappropriate 'getOutputTimestamp()' call result values on Android after 30 sec of silence: crrev.com/2567143002: media::SilentSinkSuspender should simulate |delay| and |delay_timestamp| 2) Inaccurate output delay: crrev.com/2562243005: Make AudioTimestampHelper::TimeToFrames() more accurate
lgtm for webaudio with a nit. I ran the manual test locally. I can't see the yellow indicator; it always seems to go from green to red. Is that expected? BTW, are these the colors you want? I was kind of expecting green when there's sound because the timestamps are matched (more or less), and red when they're not. (Based on the text for the test.) https://codereview.chromium.org/2060833002/diff/470001/third_party/WebKit/Man... File third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html (right): https://codereview.chromium.org/2060833002/diff/470001/third_party/WebKit/Man... third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html:53: <p>Please see <a href="https://webaudio.github.io/web-audio-api/#methods-1">Web Audio nit: update the link to point to the timestamp section.
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: win_chromium_rel_ng on master.tryserver.chromium.win (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.win/builders/win_chromium_rel_...)
Thanks for your comments! On 2016/12/13 22:02:30, Raymond Toy wrote: > lgtm for webaudio with nit. > > I ran the manual test locally. I can't see the yellow indicator. It always seems > to go from green to red. Is that expected. Yeah, it's hard to see it yellow when testing with low-latency audio. For example, using a USB headset makes it more noticeable. I've added a paragraph explaining that to the test description. > BTW, are these the colors you want? > I was kind of expecting green when there's sound because the timestamps are > matched (more or less), and red when they're not. (Based on the text for the > test.) Done. > > https://codereview.chromium.org/2060833002/diff/470001/third_party/WebKit/Man... > File third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html (right): > > https://codereview.chromium.org/2060833002/diff/470001/third_party/WebKit/Man... > third_party/WebKit/ManualTests/webaudio/audiooutputtimestamp.html:53: <p>Please > see <a href="https://webaudio.github.io/web-audio-api/#methods-1">Web Audio > nit: update the link to point to the timestamp section. Done.
The CQ bit was checked by rbyers@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: linux_chromium_rel_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/linux_chromium_...)
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: ios-device on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-device/builds...) ios-device-xcode-clang on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-device-xcode-...) ios-simulator-xcode-clang on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/ios-simulator-xco...)
Description was changed from ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the content layer to 'AudioContext' JS bindings. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... BUG=619533 ========== to ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the content layer to 'AudioContext' JS bindings. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... Intent to ship: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fEWdrU4C-3Y BUG=619533 ==========
webexposed/ test changes LGTM (added the link to the approved intent-to-ship to the CL description).
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by mikhail.pozdnyakov@intel.com to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: This issue passed the CQ dry run.
The CQ bit was checked by mikhail.pozdnyakov@intel.com
The patchset sent to the CQ was uploaded after l-g-t-m from sandersd@chromium.org, miu@chromium.org, haraken@chromium.org, rtoy@chromium.org, rbyers@chromium.org Link to the patchset: https://codereview.chromium.org/2060833002/#ps550001 (title: "rebased")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
CQ is committing da patch. Bot data: {"patchset_id": 550001, "attempt_start_ts": 1482181080805420, "parent_rev": "ba3c52385bf2a87b63f8cbf6f37385c48269e366", "commit_rev": "27e4ab70940661e25bbd9ac62bf67bafab53d1d4"}
Message was sent while issue was closed.
Description was changed from ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the content layer to 'AudioContext' JS bindings. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... Intent to ship: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fEWdrU4C-3Y BUG=619533 ========== to ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the content layer to 'AudioContext' JS bindings. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... Intent to ship: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fEWdrU4C-3Y BUG=619533 Review-Url: https://codereview.chromium.org/2060833002 ==========
Message was sent while issue was closed.
Committed patchset #17 (id:550001)
Message was sent while issue was closed.
Description was changed from ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the content layer to 'AudioContext' JS bindings. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... Intent to ship: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fEWdrU4C-3Y BUG=619533 Review-Url: https://codereview.chromium.org/2060833002 ========== to ========== Implementation of 'AudioContext.getOutputTimestamp' method The newly added 'AudioContext.getOutputTimestamp' method helps to bind audio context time and performance time values. This CL contains generic code which provides the audio output timestamp from the content layer to 'AudioContext' JS bindings. Link to specification: https://webaudio.github.io/web-audio-api/#widl-AudioContext-getOutputTimestam... Intent to ship: https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/fEWdrU4C-3Y BUG=619533 Committed: https://crrev.com/851c52f392dde2c5611453142c737839b65fe1ab Cr-Commit-Position: refs/heads/master@{#439553} ==========
Message was sent while issue was closed.
Patchset 17 (id:??) landed as https://crrev.com/851c52f392dde2c5611453142c737839b65fe1ab Cr-Commit-Position: refs/heads/master@{#439553} |