Chromium Code Reviews
Created: 4 years, 3 months ago by mlamouri (slow - plz ping)
Modified: 4 years, 2 months ago
CC: asvitkine+watch_chromium.org, blink-reviews, chromium-reviews, haraken
Target Ref: refs/pending/heads/master
Project: chromium
Visibility: Public.
Description
Web Audio: when media playback requires a user gesture, apply rule to cross origin iframes.
This is an intervention: do not allow cross origin iframes to use Web Audio
without a user gesture. In practice that means that the AudioContext will start
rendering only if it was "unlocked".
Intent to Intervene: https://groups.google.com/a/chromium.org/d/msg/blink-dev/51WbTwn0M_Y/VZuwn8-VAAAJ
This is applying the changes discussed at TPAC and summarised here: https://github.com/WebAudio/web-audio-api/issues/836
BUG=614115
Committed: https://crrev.com/b121dd8b9380574db8d6ebee851c1dfd61369cda
Cr-Commit-Position: refs/heads/master@{#422112}
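
A minimal sketch of what the intervention in the description means for a page, modeled as plain JavaScript. This is an illustrative model only, not Blink code; the class name, flags, and method names below are invented:

```javascript
// Hypothetical model of the intervention: an AudioContext created in a
// cross-origin iframe starts "locked" (suspended) and only begins rendering
// once a user gesture unlocks it. Same-origin frames are unaffected.
class ModelAudioContext {
  constructor({ crossOriginIframe = false, hasUserGesture = false } = {}) {
    this.userGestureRequired = crossOriginIframe && !hasUserGesture;
    this.state = this.userGestureRequired ? "suspended" : "running";
  }

  // Called when the frame receives a user gesture: the context is unlocked
  // and starts rendering.
  notifyUserGesture() {
    if (this.userGestureRequired) {
      this.userGestureRequired = false;
      this.state = "running";
    }
  }
}
```

In the real API, a page typically "unlocks" audio by calling context.resume() (or starting a source) from inside a click or touch event handler.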
Patch Set 1 #
Total comments: 8
Patch Set 2 : #
Total comments: 2
Patch Set 3 : fix compile #
Patch Set 4 : apply changes from TPAC discussion #
Total comments: 15
Patch Set 5 : review comments #
Patch Set 6 : review comments #
Patch Set 7 : rebase #
Patch Set 8 : rebase for real #
Patch Set 9 : re-use click target #
Patch Set 10 : rebase #
Patch Set 11 : move click simulation to top frame #
Messages
Total messages: 63 (27 generated)
Description was changed from

==========
Web Audio: when media playback requires a user gesture, apply rule to cross origin iframes.

This is an intervention: do not allow cross origin iframes to use Web Audio
without a user gesture. In practice that means that the AudioContext will start
rendering only if it was "unlocked".

BUG=614115
==========

to

==========
Web Audio: when media playback requires a user gesture, apply rule to cross origin iframes.

This is an intervention: do not allow cross origin iframes to use Web Audio
without a user gesture. In practice that means that the AudioContext will start
rendering only if it was "unlocked".

Intent to Intervene: https://groups.google.com/a/chromium.org/d/msg/blink-dev/51WbTwn0M_Y/VZuwn8-V...

BUG=614115
==========
mlamouri@chromium.org changed reviewers: + foolip@chromium.org, isherman@chromium.org, rtoy@chromium.org
rtoy: please review the webaudio/ changes. foolip: please review the Blink changes. isherman: please review the tools/metrics/ changes. Thanks :)
The CQ bit was checked by mlamouri@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: cast_shell_linux on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/cast_shell_linu...)
Note that I'm opposed to landing this at this time. The spec change issue (https://github.com/WebAudio/web-audio-api/issues/836) hasn't really reached any kind of consensus. Since you filed the issue, I think we need to wait for at least a consensus on the change before proceeding.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
File third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp (right):

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp:451: context()->startRenderingIfNeeded();
I don't think this is right. When the context is created, startRendering() is called immediately when the destination is created. And if I understand what you're trying to do, this might also interfere with any OfflineAudioContexts. These contexts shouldn't require a gesture because they don't output any audio.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:674: LOG(INFO) << "handlePreRenderTasks";
handlePreRenderTasks gets called about every 3 ms or less (depending on the sample rate). Do you really want this logged every time?
hongchan@chromium.org changed reviewers: + hongchan@chromium.org
Can we just make this a one-time check at the context level? Checking whether rendering is enabled every time an AudioScheduledSourceNode is triggered seems like a bad idea. Also, you worked on AudioBufferSourceNode but left out OscillatorNode. I think fixing AudioScheduledSourceNode is good enough; you should not be touching the ABSN or OSC node.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:137: }
Blocking audio rendering should be limited to the realtime audio context because OfflineAudioContext does not render the audio stream into the actual hardware, so there is no reason to block it. I am not sure if this was discussed in the working group discussion.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:789: UseCounter::count(getExecutionContext(), UseCounter::WebAudioAutoplayCrossOriginIframe);
Can you add a comment on what this does? This looks like it will always increment the "WebAudioAutoplayCrossOriginIframe" count. Is this what you want? How do you know this method is called from a cross-origin iframe? Perhaps I am missing something.
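
The point about limiting the block to realtime contexts can be sketched as a small predicate. This is a hypothetical model of the decision being discussed, with invented names, not Blink's actual code:

```javascript
// Only a realtime context in a cross-origin iframe without a user gesture
// would be blocked. An OfflineAudioContext renders into a buffer rather
// than the audio hardware, so it is never blocked under this rule.
function shouldBlockRendering({ offline, crossOriginIframe, hasUserGesture }) {
  if (offline) return false; // offline contexts produce no audible output
  return crossOriginIframe && !hasUserGesture;
}
```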
histograms.xml lgtm
third_party/WebKit/Source/core/frame/Deprecation.cpp lgtm, but leaving the rest to rtoy@
hongchan@, I can't have the gesture check only in the constructor. This wouldn't be compatible with web content that tries to follow the Safari iOS behaviour. If we want to block autoplay, we should try to do it in a consistent way instead of going our own path.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
File third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp (right):

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp:451: context()->startRenderingIfNeeded();
On 2016/09/07 at 16:13:19, Raymond Toy wrote:
> I don't think this is right. When the context is created, startRendering() is called immediately when the destination is created. And if I understand what you're trying to do, this might also interfere with any OfflineAudioContexts. These contexts shouldn't require a gesture because they don't output any audio.

When the AudioContext is created, it might not be rendering yet if there was no user gesture. In this case, start() should resume automatically. Regarding offline audio contexts, I answered below: it's for compat with Safari iOS. If both of you think there is really no reason to do this, let me know and I can change that.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:137: }
On 2016/09/07 at 16:59:00, hoch wrote:
> Blocking audio rendering should be limited to the realtime audio context because OfflineAudioContext does not render the audio stream into the actual hardware, so there is no reason to block it. I am not sure if this is discussed in the working group discussion.

The goal here is to be compatible with Safari iOS, and as far as I can tell, they don't make any distinction between regular and offline audio contexts. I'm not sure why, and that's why I prefer to be on the safe side and relax later. Though, if you prefer not to do this, I can make this change.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:674: LOG(INFO) << "handlePreRenderTasks";
On 2016/09/07 at 16:13:19, Raymond Toy wrote:
> handlePreRenderTask gets called about every 3 ms or less (depends on sample rate). Do you really want logged every time?

Sorry, that's a leftover from some investigations.

https://codereview.chromium.org/2314903002/diff/1/third_party/WebKit/Source/m...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:789: UseCounter::count(getExecutionContext(), UseCounter::WebAudioAutoplayCrossOriginIframe);
On 2016/09/07 at 16:59:00, hoch wrote:
> Can you add a comment on what this does?
>
> This looks like it will always increment the "WebAudioAutoplayCrossOriginIframe" count. Is this what you want? How do you know this method is called from a cross-origin iframe? Perhaps I am missing something.

m_userGestureRequired only applies to cross-origin iframes but, again, I didn't mean to leave this here either. Sent my CL a bit too early, it seems.
> Also you worked on AudioBufferSourceNode, but left out OscillatorNode. I think fixing
> AudioScheduledSourceNode is good enough, you should not be touching ABSN or OSC node.

I'm not sure I follow this: OscillatorNode is taken into account because it inherits from AudioScheduledSourceHandler. My test also uses OscillatorNode. Did I miss something?
Please fix the compiler errors too.

https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp (right):

https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp:451: context()->startRenderingIfNeeded();
If we don't have the gesture, we don't want to start. In that case, we should just return early from here.

But this depends on what you expect to happen. In this current version, if we don't have the gesture, we don't "render", but we do everything else we would normally do. So if, say in a second, we do get the gesture, the buffer will start playing, skipping over the first second of audio. Is that the intent? If so, we'll have to think of something else.
PTAL

https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp (right):

https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp:451: context()->startRenderingIfNeeded();
On 2016/09/08 at 20:03:33, Raymond Toy wrote:
> If we don't have the gesture, we don't want to start. In that case, we should just return early from here.
>
> But this depends on what you expect to happen. In this current version, if we don't have the gesture, we don't "render", but we do everything else we would normally do. So, if, say in a second, we do get the gesture, the buffer will start playing, skipping over the first second of audio.
>
> Is that the intent? If so, we'll have to think of something else.

This is the intended behaviour; it matches
https://github.com/WebKit/webkit/blob/d61d8c1359eab56e2337c50da8f9c168783e722...
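
The behaviour being debated here (does a source started while locked skip audio, or play from the start once unlocked?) can be illustrated with a toy clock model. This is a sketch under the assumption that the context's currentTime only advances once rendering is unlocked; all names below are invented, and this is not Blink or WebKit code:

```javascript
// Toy model: while the context is locked, its clock does not advance, so a
// source start()ed while locked has played nothing by the time the unlock
// happens. It therefore plays from the beginning of its buffer rather than
// skipping the time spent waiting for the gesture.
class ModelClockContext {
  constructor() {
    this.unlocked = false;
    this.currentTime = 0;
    this.sources = new Map();
  }
  startSource(name) {
    this.sources.set(name, this.currentTime); // scheduled at the current clock time
  }
  unlock() {
    this.unlocked = true;
  }
  advance(seconds) {
    if (this.unlocked) this.currentTime += seconds; // clock is frozen while locked
  }
  playedSoFar(name) {
    return Math.max(0, this.currentTime - this.sources.get(name));
  }
}
```

Under this model, starting a source and waiting a second while locked leaves playedSoFar() at 0, matching "it would start at the beginning" rather than the "skipping over the first second" scenario.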
On 2016/09/12 17:26:29, mlamouri wrote:
> This is the intended behaviour, it matches
> https://github.com/WebKit/webkit/blob/d61d8c1359eab56e2337c50da8f9c168783e722...

Do you have a test URL for this?

AFAICT with this CL, when the context is created, startRendering() is still called, so the context is rendering (silence). Having the node call startRenderingIfNeeded(), which skips calling startRendering(), doesn't change anything. I'm sure I'm missing something, but I don't know what.

Can we get a small design document on exactly how this is supposed to behave? What happens to the source if I don't have a gesture now, but get it later? Does the source ever play?

I looked very briefly at the webkit link, but without digging much further, I can't really tell what's happening there.
On 2016/09/12 at 18:07:47, rtoy wrote:
> Can we get a small design document on exactly how this is supposed to behave? What happens to the source if I don't have a gesture now, but get it later? Does the source ever play?

I had put together a doc describing the WebKit behaviour:
https://docs.google.com/document/d/16EzKmZJNNyvKC1UN-heRN2h32ZF6H1EUCbcrcxRH7...

I have some test pages but they are very basic. I'm going to write a test page for your use case.
On 2016/09/13 15:49:20, mlamouri wrote:
> I had put together a doc describing the WebKit behaviour:
> https://docs.google.com/document/d/16EzKmZJNNyvKC1UN-heRN2h32ZF6H1EUCbcrcxRH7...

Thanks so much for the document; it clarifies things quite a bit. I do still have a few questions. From the doc it seems as if the audio context is actually in a suspended state at start up. If this is so, I think I can then deduce what happens. This CL, AFAICT, doesn't actually start the context in a suspended state. We'll need to fix that. (A simple change, I think.)

However, another question not covered in the doc: let's say you have an oscillator and you start() it without the gesture. Then at some later time you get the gesture. Will this oscillator then start magically playing? It seems from this CL that this will actually happen. Is that what happens on iOS?

The same question holds for an AudioBufferSource, but this is a bit more complicated. If the source does start, where does it start from? The beginning of the buffer, as if you called start() as soon as the gesture happened? Somewhere in the middle, as if it had been playing?

An alternative implementation would be to force silence from these nodes until the gesture is received. This makes explaining the behavior a little easier.

But I guess whatever we do, it should be consistent with iOS if possible.
On 2016/09/13 16:00:50, Raymond Toy wrote:
> An alternative implementation would be to force silence from these nodes until the gesture is received. This makes explaining the behavior a little easier.
>
> But I guess whatever we do, it should be consistent with iOS if possible.

Another concern I have is that BaseAudioContext::startRendering() actually boots up the audio device thread. I don't think this starts the audio rendering immediately in a synchronous fashion. Are we okay with this little hiccup?

I think the alternative is, as rtoy@ pointed out, that we start the rendering when the context is instantiated as we do now, but we zero out all the output until we get the user gesture.

Also, we might have to investigate the behavior of Safari more closely to figure out several corner cases.
On 2016/09/13 16:43:26, hoch wrote:
> I think the alternative is - as rtoy@ pointed out - we start the rendering when the context is instantiated like as we do now, but we zero out all the output until we get the user gesture.
>
> Also we might have to investigate the behavior of Safari more closely to figure out several corner cases.

FWIW, I took a look at WebKit's code and it does not do anything special for a node that was already started. It just starts the rendering.
On 2016/09/13 at 16:00:50, rtoy wrote:
> Thanks so much for the document; it clarifies things quite a bit. I do still have a few questions. From the doc it seems as if the audio context is actually in a suspended state at start up. If this is so, I think I can then deduce what happens. This CL, AFAICT, doesn't actually start the context in a suspended state. We'll need to fix that. (A simple change, I think.)

WDYM? The context stays suspended with this CL, right? I'm confused about what "start in suspended state" would mean.

> However, another question, not covered, in the doc. let's say you have an oscillator and start() it without the gesture. Then at some later time you get the gesture. Will this oscillator then start magically playing? It seems from this CL that this will actually happen. Is that what happens on iOS?

Yes, the oscillator will start playing. My understanding is that the AudioContext timer would start at that point.

> Same question holds for an AudioBufferSource, but this is a bit more complicated. If the source does start, where does it start from? The beginning of the buffer, as if you called start() as soon as the gesture happened? Somewhere else in the middle as if it were playing?

It would start at the beginning. See the demo page:
https://mounirlamouri.github.io/sandbox/autoplay/webaudio.html

> An alternative implementation would be to force silence from these nodes until the gesture is received. This makes explaining the behavior a little easier.

I don't think it would match the Safari iOS behaviour.

> But I guess whatever we do, it should be consistent with iOS if possible.

So far I didn't find any inconsistencies.
On 2016/09/13 17:01:19, mlamouri wrote:
> On 2016/09/13 at 16:00:50, rtoy wrote:
> > On 2016/09/13 15:49:20, mlamouri wrote:
> > > On 2016/09/12 at 18:07:47, rtoy wrote:
> > > > On 2016/09/12 17:26:29, mlamouri wrote:
> > > > > PTAL
> > > > >
> > > > > https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
> > > > > File third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp (right):
> > > > >
> > > > > https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
> > > > > third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp:451: context()->startRenderingIfNeeded();
> > > > > On 2016/09/08 at 20:03:33, Raymond Toy wrote:
> > > > > > If we don't have the gesture, we don't want to start. In that case, we should just return early from here.
> > > > > >
> > > > > > But this depends on what you expect to happen. In this current version, if we don't have the gesture, we don't "render", but we do everything else we would normally do. So, if, say in a second, we do get the gesture, the buffer will start playing, skipping over the first second of audio.
> > > > > >
> > > > > > Is that the intent? If so, we'll have to think of something else.
> > > > >
> > > > > This is the intended behaviour, it matches
> > > > > https://github.com/WebKit/webkit/blob/d61d8c1359eab56e2337c50da8f9c168783e722...
> > > >
> > > > Do you have a test URL for this?
> > > >
> > > > AFAICT with this CL, when the context is created, startRendering() is still called, so the context is rendering (silence). Having the node call startRenderingIfNeeded() that skips calling startRendering() doesn't change anything. I'm sure I'm missing something, but I don't know what.
> > > >
> > > > Can we get a small design document on exactly how this is supposed to behave? What happens to the source if I don't have a gesture now, but get it later? Does the source ever play?
> > > >
> > > > I looked very briefly at the webkit link, but without digging much further, I can't really tell what's happening there.
> > >
> > > I had put together this doc describing WebKit behaviour:
> > > https://docs.google.com/document/d/16EzKmZJNNyvKC1UN-heRN2h32ZF6H1EUCbcrcxRH7...
> > >
> > > I have some test pages but they are very basic. I'm going to write a test page for your use case.
> >
> > Thanks so much for the document; it clarifies things quite a bit. I do still have a few questions. From the doc it seems as if the audio context is actually in a suspended state at start up. If this is so, I think I can then deduce what happens. This CL, AFAICT, doesn't actually start the context in a suspended state. We'll need to fix that. (A simple change, I think.)
>
> WDYM? The context stays suspended with this CL, right? I'm confused what "start in suspended state" would mean.
>
> > However, another question, not covered in the doc. Let's say you have an oscillator and start() it without the gesture. Then at some later time you get the gesture. Will this oscillator then start magically playing? It seems from this CL that this will actually happen. Is that what happens on iOS?
>
> Yes, the oscillator will start playing. My understanding is that the AudioContext timer would start at that point.
>
> > Same question holds for an AudioBufferSource, but this is a bit more complicated. If the source does start, where does it start from? The beginning of the buffer, as if you called start() as soon as the gesture happened? Somewhere else in the middle as if it were playing?
>
> It would start at the beginning. See demo page:
> https://mounirlamouri.github.io/sandbox/autoplay/webaudio.html
>
> > An alternative implementation would be to force silence from these nodes until the gesture is received. This makes explaining the behavior a little easier.
>
> I don't think it would match Safari iOS behaviour.
>
> > But I guess whatever we do, it should be consistent with iOS if possible.
>
> So far I didn't find any inconsistencies.

I took a look at Safari's code on this user gesture requirement. It has a lot more than what this CL has. It has a special class, "MediaProducer", and it technically sets the audio producer into the muted state. This is different than calling startRendering later.
https://github.com/WebKit/webkit/blob/1faa5bf9266d53f8f9c1608d5eade57c354a00a...

Another special class, "MediaCanStartListener", actually listens for the user interaction. I think these take care of the platform-dependent behavior as well.
https://github.com/WebKit/webkit/blob/66e68cd8d7bf4ea1cf52f31ed9cb242f83ea5b5...

Does this CL only affect Android? How and where?

Please note that we are already doing something different by applying this restriction only for the cross-origin case; iOS blocks the web audio rendering regardless of the origin. So "being consistent with Safari" is not a valid reason anymore for this change from my perspective. However, the cross-origin restriction sounds reasonable to me.

Lastly, can you clarify what you're trying to achieve?
a) The cross-origin restriction for Android Web Audio.
b) The cross-origin restriction for all platforms (Desktop/ChromeOS/Android).
On 2016/09/13 17:01:19, mlamouri wrote:
> On 2016/09/13 at 16:00:50, rtoy wrote:
> > On 2016/09/13 15:49:20, mlamouri wrote:
> > > On 2016/09/12 at 18:07:47, rtoy wrote:
> > > > On 2016/09/12 17:26:29, mlamouri wrote:
> > > > > PTAL
> > > > >
> > > > > https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
> > > > > File third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp (right):
> > > > >
> > > > > https://codereview.chromium.org/2314903002/diff/20001/third_party/WebKit/Sour...
> > > > > third_party/WebKit/Source/modules/webaudio/AudioBufferSourceNode.cpp:451: context()->startRenderingIfNeeded();
> > > > > On 2016/09/08 at 20:03:33, Raymond Toy wrote:
> > > > > > If we don't have the gesture, we don't want to start. In that case, we should just return early from here.
> > > > > >
> > > > > > But this depends on what you expect to happen. In this current version, if we don't have the gesture, we don't "render", but we do everything else we would normally do. So, if, say in a second, we do get the gesture, the buffer will start playing, skipping over the first second of audio.
> > > > > >
> > > > > > Is that the intent? If so, we'll have to think of something else.
> > > > >
> > > > > This is the intended behaviour, it matches
> > > > > https://github.com/WebKit/webkit/blob/d61d8c1359eab56e2337c50da8f9c168783e722...
> > > >
> > > > Do you have a test URL for this?
> > > >
> > > > AFAICT with this CL, when the context is created, startRendering() is still called, so the context is rendering (silence). Having the node call startRenderingIfNeeded() that skips calling startRendering() doesn't change anything. I'm sure I'm missing something, but I don't know what.
> > > >
> > > > Can we get a small design document on exactly how this is supposed to behave? What happens to the source if I don't have a gesture now, but get it later? Does the source ever play?
> > > >
> > > > I looked very briefly at the webkit link, but without digging much further, I can't really tell what's happening there.
> > >
> > > I had put together this doc describing WebKit behaviour:
> > > https://docs.google.com/document/d/16EzKmZJNNyvKC1UN-heRN2h32ZF6H1EUCbcrcxRH7...
> > >
> > > I have some test pages but they are very basic. I'm going to write a test page for your use case.
> >
> > Thanks so much for the document; it clarifies things quite a bit. I do still have a few questions. From the doc it seems as if the audio context is actually in a suspended state at start up. If this is so, I think I can then deduce what happens. This CL, AFAICT, doesn't actually start the context in a suspended state. We'll need to fix that. (A simple change, I think.)
>
> WDYM? The context stays suspended with this CL, right? I'm confused what "start in suspended state" would mean.

No it doesn't. When the context is created, it starts rendering right away. Blink used to wait until a source started before it started rendering. This is non-conforming. We start right away and your CL doesn't do anything to change that, AFAICT. Blink has diverged a bit from Safari.

> > However, another question, not covered in the doc. Let's say you have an oscillator and start() it without the gesture. Then at some later time you get the gesture. Will this oscillator then start magically playing? It seems from this CL that this will actually happen. Is that what happens on iOS?
>
> Yes, the oscillator will start playing. My understanding is that the AudioContext timer would start at that point.
>
> > Same question holds for an AudioBufferSource, but this is a bit more complicated. If the source does start, where does it start from? The beginning of the buffer, as if you called start() as soon as the gesture happened? Somewhere else in the middle as if it were playing?
>
> It would start at the beginning. See demo page:
> https://mounirlamouri.github.io/sandbox/autoplay/webaudio.html
>
> > An alternative implementation would be to force silence from these nodes until the gesture is received. This makes explaining the behavior a little easier.
>
> I don't think it would match Safari iOS behaviour.
>
> > But I guess whatever we do, it should be consistent with iOS if possible.
>
> So far I didn't find any inconsistencies.
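The semantics being settled in this thread — a gated context stays suspended until unlocked by a gesture, and a source start()ed while suspended begins from the start of its buffer once rendering begins, since no context time elapses while suspended — can be summarised with a small model. This is an illustrative sketch only: `ModelAudioContext` and its method names are invented here and are not the Blink or WebKit implementation.

```javascript
// Illustrative model (not Blink code; all names invented) of the behaviour
// discussed above: an AudioContext created in a cross-origin iframe without
// a user gesture stays "suspended"; sources start()ed while suspended begin
// playing from offset 0 once the context is unlocked.
class ModelAudioContext {
  constructor({ crossOriginIframe = false, hasUserGesture = false } = {}) {
    // The intervention only gates cross-origin iframes without a gesture.
    this.state = crossOriginIframe && !hasUserGesture ? 'suspended' : 'running';
    this.pendingSources = [];
  }

  // Models AudioScheduledSourceNode.start() on a node of this context.
  startSource(name) {
    if (this.state === 'running') return `${name}: playing from offset 0`;
    this.pendingSources.push(name);
    return `${name}: waiting for unlock`;
  }

  // Models resume() being called from a user gesture handler.
  resume() {
    this.state = 'running';
    // Pending sources begin at the start of their buffers, as on iOS
    // Safari: nothing is "skipped over" while the context was suspended.
    return this.pendingSources.splice(0).map(n => `${n}: playing from offset 0`);
  }
}
```

This matches the demo-page behaviour described above: the oscillator scheduled before the gesture does start, but only once the gesture arrives, and from the beginning.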
Description was changed from

==========
Web Audio: when media playback requires a user gesture, apply rule to cross origin iframes.

This is an intervention: do not allow cross origin iframes to use Web Audio without a user gesture. In practice that means that the AudioContext will start rendering only if it was "unlocked".

Intent to Intervene: https://groups.google.com/a/chromium.org/d/msg/blink-dev/51WbTwn0M_Y/VZuwn8-V...

BUG=614115
==========

to

==========
Web Audio: when media playback requires a user gesture, apply rule to cross origin iframes.

This is an intervention: do not allow cross origin iframes to use Web Audio without a user gesture. In practice that means that the AudioContext will start rendering only if it was "unlocked".

Intent to Intervene: https://groups.google.com/a/chromium.org/d/msg/blink-dev/51WbTwn0M_Y/VZuwn8-V...

This is applying the changes discussed at TPAC and summarised here: https://github.com/WebAudio/web-audio-api/issues/836

BUG=614115
==========
rtoy@, PTAL
Don't forget to add TEST= to the description.

Looks good overall, but I have some questions on the details.

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Layo...
File third_party/WebKit/LayoutTests/http/tests/webaudio/resources/autoplay-crossorigin-iframe.html (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Layo...
third_party/WebKit/LayoutTests/http/tests/webaudio/resources/autoplay-crossorigin-iframe.html:38: // Calling 'start()' will fail to start on a Node the associated AudioContext.
How is this determined here (that start() failed to start)?

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:105: // If mediaPlaybackRequiresUserGesture is enabled, cross origin iframes will
Is mediaPlaybackRequiresUserGesture enabled for all platforms? Or is this currently intended only for Android?

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:777: toDocument(getExecutionContext())->addConsoleMessage(ConsoleMessage::create(JSMessageSource, WarningMessageLevel, "An AudioContext in a cross origin iframe must be created from a user gesture handler or have resume() called from one."));
The message says the context must be created from a user gesture handler. This is not correct, is it? I thought you could create the context, but you must resume it from a gesture handler.

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:781: }
I think this block of code should be in the caller. My expectation is that startRendering will start rendering. I think it should be in AudioContext::create(Document&, ExceptionState&).
PTAL

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Layo...
File third_party/WebKit/LayoutTests/http/tests/webaudio/resources/autoplay-crossorigin-iframe.html (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Layo...
third_party/WebKit/LayoutTests/http/tests/webaudio/resources/autoplay-crossorigin-iframe.html:38: // Calling 'start()' will fail to start on a Node the associated AudioContext.
On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> How is this determined here (that start() failed to start)?

I mean to start an AudioContext. Below, it's testing the AudioContext state. Will update the comment.

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:105: // If mediaPlaybackRequiresUserGesture is enabled, cross origin iframes will
On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> Is mediaPlaybackRequiresUserGesture enabled for all platforms? Or is this currently intended only for Android?

Only enabled on Android.

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:777: toDocument(getExecutionContext())->addConsoleMessage(ConsoleMessage::create(JSMessageSource, WarningMessageLevel, "An AudioContext in a cross origin iframe must be created from a user gesture handler or have resume() called from one."));
On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> The message says the context must be created from a user gesture handler. This is not correct, is it? I thought you could create the context, but you must resume it from a gesture handler.

Both are valid: you can create it from a user gesture or call resume() from a user gesture. You need to do one or the other. This is what I tried to express in the message. I'm glad to rephrase if you have a suggestion (I spent a ridiculous amount of time rewriting this message already :))

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:781: }
On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> I think this block of code should be in the caller. My expectation is that startRendering will start rendering. I think it should be in AudioContext::create(Document&, ExceptionState&).

Sure. I will add a DCHECK though to make sure we don't miss a call.
https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:105: // If mediaPlaybackRequiresUserGesture is enabled, cross origin iframes will
On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > Is mediaPlaybackRequiresUserGesture enabled for all platforms? Or is this currently intended only for Android?
>
> Only enabled on Android.

Can you just add a note that this is for Android? Without digging through lots of code, it's not obvious.

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:777: toDocument(getExecutionContext())->addConsoleMessage(ConsoleMessage::create(JSMessageSource, WarningMessageLevel, "An AudioContext in a cross origin iframe must be created from a user gesture handler or have resume() called from one."));
On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > The message says the context must be created from a user gesture handler. This is not correct, is it? I thought you could create the context, but you must resume it from a gesture handler.
>
> Both are valid: you can create it from a user gesture or call resume() from a user gesture. You need to do one or the other. This is what I tried to express in the message. I'm glad to rephrase if you have a suggestion (I spent a ridiculous amount of time rewriting this message already :))

Yeah, concise messages are really hard. Oh, you already mentioned created or resumed. (My window was small enough that the continuation to the next line made sense so I didn't see the resume part.)

I think you can make the message shorter by saying "created or resumed from a user gesture to enable audio output", and get rid of the "or have resume() called...". I think this is a bit clearer on what you need to do to get audio output.

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:781: }
On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > I think this block of code should be in the caller. My expectation is that startRendering will start rendering. I think it should be in AudioContext::create(Document&, ExceptionState&).
>
> Sure. I will add a DCHECK though to make sure we don't miss a call.

Yeah, that's fine with me.
PTAL

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:105: // If mediaPlaybackRequiresUserGesture is enabled, cross origin iframes will
On 2016/09/26 at 17:24:39, Raymond Toy wrote:
> On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> > On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > > Is mediaPlaybackRequiresUserGesture enabled for all platforms? Or is this currently intended only for Android?
> >
> > Only enabled on Android.
>
> Can you just add a note that this is for Android? Without digging through lots of code, it's not obvious.

I don't think that this would be a good idea. It is a recipe to end up with outdated comments.

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:777: toDocument(getExecutionContext())->addConsoleMessage(ConsoleMessage::create(JSMessageSource, WarningMessageLevel, "An AudioContext in a cross origin iframe must be created from a user gesture handler or have resume() called from one."));
On 2016/09/26 at 17:24:38, Raymond Toy wrote:
> On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> > On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > > The message says the context must be created from a user gesture handler. This is not correct, is it? I thought you could create the context, but you must resume it from a gesture handler.
> >
> > Both are valid: you can create it from a user gesture or call resume() from a user gesture. You need to do one or the other. This is what I tried to express in the message. I'm glad to rephrase if you have a suggestion (I spent a ridiculous amount of time rewriting this message already :))
>
> Yeah, concise messages are really hard. Oh, you already mentioned created or resumed. (My window was small enough that the continuation to the next line made sense so I didn't see the resume part.)
>
> I think you can make the message shorter by saying "created or resumed from a user gesture to enable audio output", and get rid of the "or have resume() called...". I think this is a bit clearer on what you need to do to get audio output.

Done.
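The console message landed on above tells page authors to create or resume the context from a user gesture to enable audio output. A minimal sketch of the page-side pattern follows; the helper name `unlockAudioContext` is invented for this sketch, while `AudioContext.state` and `AudioContext.resume()` are the standard Web Audio API.

```javascript
// Hypothetical helper (name invented for this sketch). Call it from a user
// gesture handler inside the cross-origin iframe: it resumes a suspended
// AudioContext so rendering can begin, and leaves a running context alone.
function unlockAudioContext(ctx) {
  if (ctx.state === 'suspended') {
    return ctx.resume(); // resume() returns a Promise per the spec.
  }
  return Promise.resolve();
}

// Typical page wiring (commented out here, as it needs a browser DOM):
// const ctx = new AudioContext();
// document.addEventListener('click', () => unlockAudioContext(ctx), { once: true });
```

Creating the context directly inside a gesture handler works as well; the helper just covers the common case where the context already exists before the first interaction.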
lgtm

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:105: // If mediaPlaybackRequiresUserGesture is enabled, cross origin iframes will
On 2016/09/27 14:11:15, mlamouri (slow) wrote:
> On 2016/09/26 at 17:24:39, Raymond Toy wrote:
> > On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> > > On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > > > Is mediaPlaybackRequiresUserGesture enabled for all platforms? Or is this currently intended only for Android?
> > >
> > > Only enabled on Android.
> >
> > Can you just add a note that this is for Android? Without digging through lots of code, it's not obvious.
>
> I don't think that this would be a good idea. It is a recipe to end up with outdated comments.

Fair enough. But it's only outdated if, in the future, you want the gesture to apply on platforms other than Android. I think there would be strong opposition to that.
The CQ bit was checked by mlamouri@chromium.org
Thanks for the review!

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):

https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:105: // If mediaPlaybackRequiresUserGesture is enabled, cross origin iframes will
On 2016/09/28 at 16:13:47, Raymond Toy wrote:
> On 2016/09/27 14:11:15, mlamouri (slow) wrote:
> > On 2016/09/26 at 17:24:39, Raymond Toy wrote:
> > > On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> > > > On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > > > > Is mediaPlaybackRequiresUserGesture enabled for all platforms? Or is this currently intended only for Android?
> > > >
> > > > Only enabled on Android.
> > >
> > > Can you just add a note that this is for Android? Without digging through lots of code, it's not obvious.
> >
> > I don't think that this would be a good idea. It is a recipe to end up with outdated comments.
>
> Fair enough. But it's only outdated if, in the future, you want the gesture to apply on platforms other than Android. I think there would be strong opposition to that.

Indeed. But I don't want to close this door :)
The patchset sent to the CQ was uploaded after l-g-t-m from isherman@chromium.org, foolip@chromium.org Link to the patchset: https://codereview.chromium.org/2314903002/#ps100001 (title: "review comments")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: mac_chromium_compile_dbg_ng on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_comp...)
On 2016/09/28 16:17:25, mlamouri (slow) wrote:
> Thanks for the review!
>
> https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
> File third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp (right):
>
> https://codereview.chromium.org/2314903002/diff/60001/third_party/WebKit/Sour...
> third_party/WebKit/Source/modules/webaudio/BaseAudioContext.cpp:105: // If mediaPlaybackRequiresUserGesture is enabled, cross origin iframes will
> On 2016/09/28 at 16:13:47, Raymond Toy wrote:
> > On 2016/09/27 14:11:15, mlamouri (slow) wrote:
> > > On 2016/09/26 at 17:24:39, Raymond Toy wrote:
> > > > On 2016/09/26 17:01:24, mlamouri (slow) wrote:
> > > > > On 2016/09/26 at 16:09:54, Raymond Toy wrote:
> > > > > > Is mediaPlaybackRequiresUserGesture enabled for all platforms? Or is this currently intended only for Android?
> > > > >
> > > > > Only enabled on Android.
> > > >
> > > > Can you just add a note that this is for Android? Without digging through lots of code, it's not obvious.
> > >
> > > I don't think that this would be a good idea. It is a recipe to end up with outdated comments.
> >
> > Fair enough. But it's only outdated if, in the future, you want the gesture to apply on platforms other than Android. I think there would be strong opposition to that.
>
> Indeed. But I don't want to close this door :)

I wish my comments in the code were that powerful!
The CQ bit was checked by mlamouri@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from isherman@chromium.org, foolip@chromium.org, rtoy@chromium.org Link to the patchset: https://codereview.chromium.org/2314903002/#ps120001 (title: "rebase")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: chromeos_amd64-generic_chromium_compile_only_ng on master.tryserver.chromium.linux (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.linux/builders/chromeos_amd64-...)
The CQ bit was checked by mlamouri@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from isherman@chromium.org, foolip@chromium.org, rtoy@chromium.org Link to the patchset: https://codereview.chromium.org/2314903002/#ps140001 (title: "rebase for real")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: mac_chromium_rel_ng on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...)
The CQ bit was checked by mlamouri@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from isherman@chromium.org, foolip@chromium.org, rtoy@chromium.org Link to the patchset: https://codereview.chromium.org/2314903002/#ps160001 (title: "re-use click target")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Try jobs failed on following builders: mac_chromium_rel_ng on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...)
The CQ bit was checked by mlamouri@chromium.org to run a CQ dry run
Dry run: CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
The CQ bit was unchecked by commit-bot@chromium.org
Dry run: Try jobs failed on following builders: mac_chromium_rel_ng on master.tryserver.chromium.mac (JOB_FAILED, http://build.chromium.org/p/tryserver.chromium.mac/builders/mac_chromium_rel_...)
The CQ bit was checked by mlamouri@chromium.org
The patchset sent to the CQ was uploaded after l-g-t-m from isherman@chromium.org, foolip@chromium.org, rtoy@chromium.org Link to the patchset: https://codereview.chromium.org/2314903002/#ps200001 (title: "move click simulation to top frame")
CQ is trying da patch. Follow status at https://chromium-cq-status.appspot.com/v2/patch-status/codereview.chromium.or...
Message was sent while issue was closed.
Committed patchset #11 (id:200001)
Message was sent while issue was closed.
Description was changed from

==========
Web Audio: when media playback requires a user gesture, apply rule to cross origin iframes.

This is an intervention: do not allow cross origin iframes to use Web Audio without a user gesture. In practice that means that the AudioContext will start rendering only if it was "unlocked".

Intent to Intervene: https://groups.google.com/a/chromium.org/d/msg/blink-dev/51WbTwn0M_Y/VZuwn8-V...

This is applying the changes discussed at TPAC and summarised here: https://github.com/WebAudio/web-audio-api/issues/836

BUG=614115
==========

to

==========
Web Audio: when media playback requires a user gesture, apply rule to cross origin iframes.

This is an intervention: do not allow cross origin iframes to use Web Audio without a user gesture. In practice that means that the AudioContext will start rendering only if it was "unlocked".

Intent to Intervene: https://groups.google.com/a/chromium.org/d/msg/blink-dev/51WbTwn0M_Y/VZuwn8-V...

This is applying the changes discussed at TPAC and summarised here: https://github.com/WebAudio/web-audio-api/issues/836

BUG=614115

Committed: https://crrev.com/b121dd8b9380574db8d6ebee851c1dfd61369cda
Cr-Commit-Position: refs/heads/master@{#422112}
==========
Message was sent while issue was closed.
Patchset 11 (id:??) landed as https://crrev.com/b121dd8b9380574db8d6ebee851c1dfd61369cda Cr-Commit-Position: refs/heads/master@{#422112} |
