Created: 7 years, 1 month ago by no longer working on chromium
Modified: 7 years ago
Reviewers: Henrik Grunell, Avi (use Gerrit), tommi (sloooow) - chröme, Jói, piman, DaleCurtis, perkj_chrome
CC: chromium-reviews, joi+watch-content_chromium.org, darin-cc_chromium.org, jam, tommi (sloooow) - chröme
Base URL: svn://svn.chromium.org/chrome/trunk/src
Visibility: Public.
Description
Added an "enable-audio-processor" flag and WebRtcAudioProcessor class.
This CL is broken out from https://codereview.chromium.org/37793005. As the first step to move the WebRtc APM to Chrome, it adds an enable-audio-processor command-line flag, the WebRtcAudioProcessor class, and its unittest.
WebRtcAudioProcessor is not hooked up to any of the code in Chrome yet, but it will be exercised and tested by its unittest.
BUG=264611
TEST=content_unittest
Committed: https://src.chromium.org/viewvc/chrome?view=rev&revision=237311
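The 10 ms push/consume contract debated throughout this review can be modeled in isolation. The sketch below is hypothetical — it is not the CL's WebRtcAudioProcessor (which wraps media::AudioFifo and webrtc::AudioProcessing), just a minimal stand-alone illustration of a buffer that accepts arbitrary capture chunks and only hands out fixed 10 ms blocks:

```cpp
// Minimal stand-alone model of the push/consume-10-ms pattern discussed in
// this review. Hypothetical code; the real class wraps media::AudioFifo and
// webrtc::AudioProcessing.
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

class TenMsChunker {
 public:
  explicit TenMsChunker(int sample_rate_hz)
      : frames_per_10ms_(sample_rate_hz / 100) {}

  // Buffers everything in |samples| (mirrors PushCaptureData consuming the
  // whole |audio_bus|, whatever its size).
  void Push(const std::vector<float>& samples) {
    fifo_.insert(fifo_.end(), samples.begin(), samples.end());
  }

  // Returns false and writes nothing unless a full 10 ms block is buffered;
  // otherwise pops exactly one block into |out| and returns true. This is
  // the semantics Henrik asked to have spelled out in the header comment.
  bool Consume10Ms(std::vector<float>* out) {
    if (fifo_.size() < static_cast<size_t>(frames_per_10ms_))
      return false;
    out->assign(fifo_.begin(), fifo_.begin() + frames_per_10ms_);
    fifo_.erase(fifo_.begin(), fifo_.begin() + frames_per_10ms_);
    return true;
  }

 private:
  const int frames_per_10ms_;
  std::deque<float> fifo_;
};
```

A caller at 16 kHz would push capture buffers of any size and drain 160-frame blocks in a loop until Consume10Ms() returns false.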
Patch Set 1
Patch Set 2: added unittest
Total comments: 20
Patch Set 3: addressed Henrik's comments.
Total comments: 10
Patch Set 4: addressed Per's comments.
Total comments: 93
Patch Set 5: addressed most of Dale's comments.
Total comments: 8
Patch Set 6: addressed the rest of Dale's comments.
Total comments: 16
Patch Set 7: addressed the comments.
Total comments: 34
Patch Set 8: changed to base::TimeDelta and addressed Joi's comments.
Total comments: 4
Patch Set 9: switched to Atomic32 and removed the lock; also fixed some nits.
Total comments: 6
Patch Set 10: used a temp int64 and added a check.
Total comments: 38
Patch Set 11: addressed Henrik's and Joi's comments.
Patch Set 12: fixed the comment in content_switches.cc.
Total comments: 11
Patch Set 13: addressed piman's comments.
Patch Set 14: fixed the bots.
Total comments: 12
Patch Set 15: rebased and added an include.
Total comments: 12
Patch Set 16: addressed Henrik's comments.
Total comments: 2
Patch Set 17: rebased with 77803004 and fixed the order of the methods.
Total comments: 85
Patch Set 18: addressed Tommi's and Henrik's comments.
Total comments: 9
Patch Set 19: changed names to media_stream_* from webrtc_*, and fixed two nits.
Patch Set 20: added one comment.
Patch Set 21: added one comment.

Messages
Total messages: 47 (0 generated)
Hello, could you please review this CL? As a first step it adds a flag, the processor class, and its unittest; I think this will move things forward. BR, SX
It seems like the switch and the new class are separate; can they be done in two CLs?

I took a first quick look. To begin with, I just looked at the interface for the audio processor to try to understand what it does from reading that.

https://codereview.chromium.org/54383003/diff/50001/content/public/common/con...
File content/public/common/content_switches.cc (right):

https://codereview.chromium.org/54383003/diff/50001/content/public/common/con...
content/public/common/content_switches.cc:916: // Enable WebRtc audio processor in Chrome.
I think it would be nice to have a name (of both the variable and the flag) that shows that this will run audio processing at a different place than without the flag; the audio processing is always enabled. Though it's hard to find a very good name, I'd like to hear what the other reviewers think of this. One idea (from the previous CL) could be something like "use-audio-processing-per-track". Regardless of the name, write in the comment what this does (enables audio processing on a webrtc track in Chrome, disables it in the webrtc stack).

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
File content/renderer/media/webrtc_audio_processor.h (right):

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:28: // processing components based on the constraints, process the data and output
I would suggest: "...processes the data and outputs it in a unit of 10 ms data chunk." (Language corrections, plus a suggestion to simply write "it" instead of repeating.)

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:30: class CONTENT_EXPORT WebRtcAudioProcessor {
Is this class thread safe? Does it live on one thread only?

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:36: // Pushes in capture data for processing.
How much data (in ms)? 10 ms? Everything in |audio_bus|?

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:39: // Processes a block of 10ms data and output the post-processed data via
"Processes a block of 10 ms data and outputs it via"

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:40: // |out|. The output data format is exposed via |sample_rate|,
I don't understand "The output data format is exposed via...". What are the variables |sample_rate| etc. referring to? They are not in this function.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:42: // Returns true if it has 10ms data for processing, otherwise false.
"10 ms". Also, it's a bit unclear. I assume returning false means that it hasn't processed anything since it doesn't have enough input data, and there is nothing written to the output? Or has a part been processed or written? Please clarify the comment.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:53: void FeedRenderDataToAudioProcessing(const int16* render_audio,
Maybe it should be named as for capture: PushRenderData? "ToAudioProcessing" is redundant since that's what the class is about. How much data (in ms)? Is it determined by |number_of_frames|? Is a frame always 10 ms?

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:62: // Accessor to check if the audio processing is enabled or not.
It can't be set, right? So, in what cases is it enabled and in what cases not? "has_audio_processing()" sounds to me like, if it returns false, it doesn't have the capability to do processing at all (i.e. not possible to enable). Maybe clarify the name and/or the comment.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:92: scoped_ptr<webrtc::AudioProcessing> audio_processing_;
Comment.
https://codereview.chromium.org/54383003/diff/50001/content/public/common/con...
File content/public/common/content_switches.cc (right):

https://codereview.chromium.org/54383003/diff/50001/content/public/common/con...
content/public/common/content_switches.cc:916: // Enable WebRtc audio processor in Chrome.
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> I think it would be nice to have a name (of both the variable and the flag) that
> shows that this will run audio processing at a different place than without the
> flag. The audio processing is always enabled. Though it's hard to find a very
> good name, I'd like to hear what the other reviewers think of this? One idea
> (from the previous CL) could be something like "use-audio-processing-per-track".
>
> Despite the name, write in the comment what this does (enables audio processing
> on a webrtc track in Chrome, disables it in the webrtc stack).

Updated the comment a bit. I am still open on the naming; let's wait for the other reviewers' opinions.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
File content/renderer/media/webrtc_audio_processor.h (right):

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:28: // processing components based on the constraints, process the data and output
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> I would suggest: "...processes the data and outputs it in a unit of 10 ms data
> chunk."
>
> (Language corrections + suggest to simple write "it" instead of repeating.)

Done.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:30: class CONTENT_EXPORT WebRtcAudioProcessor {
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> Is this class thread safe? Does it live on one thread only?

No. I can add a comment to explain on which threads the methods are called.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:36: // Pushes in capture data for processing.
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> How much data? (In ms.) 10 ms? Everything in |audio_bus|?

Everything in the |audio_bus|.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:39: // Processes a block of 10ms data and output the post-processed data via
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> "Processes a block of 10 ms data and outputs it via"

Done.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:40: // |out|. The output data format is exposed via |sample_rate|,
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> I don't understand "The output data format is exposed via...". What are the
> variables sample_rate etc. they referring to? They are not in this function.

Old comment; I removed the code but forgot to update the comment. Removed now.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:42: // Returns true if it has 10ms data for processing, otherwise false.
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> "10 ms"
>
> It's a bit unclear. I assume returning false means that it hasn't processed
> anything since it doesn't have enough input data? And there is nothing written
> to the output? Or has a part been processed or written? Please clarify the
> comment.

Done.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:53: void FeedRenderDataToAudioProcessing(const int16* render_audio,
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> Maybe it should be named as for capture: PushRenderData?
>
> "ToAudioProcessing" is redundant since that's was the class is about.
>
> How much data? (In ms.) Is it determined by |number_of_frames|? Is a frame
> always 10 ms?

Done.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:62: // Accessor to check if the audio processing is enabled or not.
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> It can't be set, right? So, in what cases is it enabled and in what cases not?
> "has_audio_processing()" sounds to me that, if returns false, it doesn't have
> the capabilities to do processing at all (i.e. not possible to enable). Maybe
> clarify name and/or comment.

When the constraints are all set to false, there is no audio processing in this class, and it will work like a FIFO.

https://codereview.chromium.org/54383003/diff/50001/content/renderer/media/we...
content/renderer/media/webrtc_audio_processor.h:92: scoped_ptr<webrtc::AudioProcessing> audio_processing_;
On 2013/11/01 07:22:36, Henrik Grunell wrote:
> Comment.

Done.
I think you will need a better audio-level reviewer than me, but here are a few comments.

https://codereview.chromium.org/54383003/diff/110001/content/browser/renderer...
File content/browser/renderer_host/render_process_host_impl.cc (right):

https://codereview.chromium.org/54383003/diff/110001/content/browser/renderer...
content/browser/renderer_host/render_process_host_impl.cc:1049: switches::kEnableWebRtcAudioProcessor,
kEnableAudioTrackProcessing?

https://codereview.chromium.org/54383003/diff/110001/content/public/common/co...
File content/public/common/content_switches.cc (right):

https://codereview.chromium.org/54383003/diff/110001/content/public/common/co...
content/public/common/content_switches.cc:916: // Enables WebRtc audio processor in Chrome. When this flag is on, it will
Enables audio processing in a MediaStreamTrack? When this flag is on, AEC, NS and AGC will be done per MediaStreamTrack instead of in PeerConnection.

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor.cc (right):

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:37: // Extract all this methods to a helper class.
nit: these

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor.h (right):

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:27: // This class owns an object of webrtc::AudioProcessing, it enables the audio
Explain what type of processing it enables.

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:38: void PushCaptureData(media::AudioBus* audio_source);
Is it really necessary to have both PushCaptureData and PushRenderData? Can't you let the WebRtcAudioProcessor implementation do separate things depending on what processing is enabled and only expose one push function and one consume function?
Tommi is busy; cc'ing Tommi and adding Dale. Dale, could you please review this CL? It adds WebRtc audio processing to the media stream track in Chrome behind a flag. Thanks, SX

https://codereview.chromium.org/54383003/diff/110001/content/browser/renderer...
File content/browser/renderer_host/render_process_host_impl.cc (right):

https://codereview.chromium.org/54383003/diff/110001/content/browser/renderer...
content/browser/renderer_host/render_process_host_impl.cc:1049: switches::kEnableWebRtcAudioProcessor,
On 2013/11/04 12:30:14, perkj wrote:
> kEnableAudioTrackProcessing?

Since Henrik G has a similar concern, I am going to make it kEnableAudioTrackProcessing. Henrik, I hope this sounds OK to you as well.

https://codereview.chromium.org/54383003/diff/110001/content/public/common/co...
File content/public/common/content_switches.cc (right):

https://codereview.chromium.org/54383003/diff/110001/content/public/common/co...
content/public/common/content_switches.cc:916: // Enables WebRtc audio processor in Chrome. When this flag is on, it will
On 2013/11/04 12:30:14, perkj wrote:
> Enables audio processing in a MediaStreamTrack ? When this flag is on, aec, Ns
> and AGC will be done per MediaStreamTrack instead of in PeerConnection.

Done.

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor.cc (right):

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:37: // Extract all this methods to a helper class.
On 2013/11/04 12:30:14, perkj wrote:
> nit; these

Done.

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor.h (right):

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:27: // This class owns an object of webrtc::AudioProcessing, it enables the audio
On 2013/11/04 12:30:14, perkj wrote:
> explain what type of processing it enables

Done.

https://codereview.chromium.org/54383003/diff/110001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:38: void PushCaptureData(media::AudioBus* audio_source);
On 2013/11/04 12:30:14, perkj wrote:
> Is it really necessary to have both PushCaptureData and PushRenderData?
>
> Can't you let the WebRtcAudioProcessor implementation do separate things
> depending on what processing is enabled and only expose one push function and
> one consume function?

It is possible, but the client of the processor would need to pass a flag or type to indicate what the data is. Also, PushCaptureData happens on the capture thread while PushRenderData is called on the render thread, so integrating the two functionalities into one interface would make the threading model of the methods more confusing. I hope it is fine to keep the existing way.
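The threading argument above can be illustrated with a hypothetical stand-alone sketch (not the CL's code): two push entry points, each intended for its own thread and feeding its own queue, so neither call site needs to pass a "which stream is this?" tag:

```cpp
// Hypothetical sketch of why the processor keeps PushCaptureData and
// PushRenderData separate: each is called on its own thread and feeds its
// own queue, so callers never pass a stream-type flag.
#include <cassert>
#include <cstdint>
#include <mutex>
#include <vector>

class DualStreamBuffer {
 public:
  // Called on the capture thread.
  void PushCaptureData(const std::vector<int16_t>& frames) {
    std::lock_guard<std::mutex> lock(capture_lock_);
    capture_fifo_.insert(capture_fifo_.end(), frames.begin(), frames.end());
  }

  // Called on the render thread.
  void PushRenderData(const std::vector<int16_t>& frames) {
    std::lock_guard<std::mutex> lock(render_lock_);
    render_fifo_.insert(render_fifo_.end(), frames.begin(), frames.end());
  }

  size_t capture_frames() {
    std::lock_guard<std::mutex> lock(capture_lock_);
    return capture_fifo_.size();
  }
  size_t render_frames() {
    std::lock_guard<std::mutex> lock(render_lock_);
    return render_fifo_.size();
  }

 private:
  std::mutex capture_lock_;
  std::mutex render_lock_;
  std::vector<int16_t> capture_fifo_;
  std::vector<int16_t> render_fifo_;
};
```

Merging the two into one push method would force every caller to carry a stream-type argument across both threads, which is the confusion the reply is pointing at.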
I'm not sure what this is. Do you have a design doc anywhere?
https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor.cc (right):

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:37: // Extract all these methods to a helper class.
Mark as TODO(xians) or do now.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:39: DCHECK(audio_processing);
No point in DCHECK() if you're just going to deref immediately below. Remove here and elsewhere.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:50: #endif
Remove the return above and use: #else ... #endif

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:101: LOG(ERROR) << "Fail to start AEC debug recording";
Remove debug text? DLOG()? Ditto below.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:112: : public media::AudioConverter::InputCallback {
Add a thread_checker for construction/destruction, or at least document which thread is calling what. It's important that access to audio_converter_ is thread safe, as it doesn't include a lock.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:116: source_params_ = source_params;
Set in constructor syntax: WebRtcAudioConverter(...) : source_params_(source_params) etc. Ditto for all of these, really.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:130: kMaxNumberOfBuffersInFifo * sink_params_.frames_per_buffer());
Consistently use member variables. I.e., use source_params_ if you're going to use sink_params_.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:137: ~WebRtcAudioConverter() {
virtual

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:142: DCHECK(fifo_->frames() + audio_source->frames() <= fifo_->max_frames());
AudioFifo will check this already.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:156: audio_wrapper_->ToInterleaved(audio_wrapper_->frames(), 2,
Instead of 2, maybe sizeof(*(data_[0]))?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:168: webrtc::AudioFrame* audio_frame() { return &audio_frame_; }
const& as well?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:180: // The first Convert() can trigger ProvideInput two times, use SincResampler
Elaborate? The comment should be a TODO() if it's something that needs fixing.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:191: // TODO(xians): consider using SincResampler to save some memcpy.
What does this mean? SincResampler is automatically invoked by AudioConverter if the sample rates differ.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:193: scoped_ptr<media::AudioConverter> audio_converter_;
AudioConverter and AudioFifo can be stack allocated; no need for scoped_ptr.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:201: WebRtcAudioProcessor::WebRtcAudioProcessor(
Again, it'd be nice to have a ThreadChecker or some DCHECKs for which message loop you're on.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:216: int sink_sample_rate = audio_processing_.get() ?
get shouldn't be necessary anymore.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:218: media::ChannelLayout sink_channel_layout = audio_processing_.get() ?
Comments? Why the separate path for when audio_processing_ is available?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:230: DCHECK(capture_converter_.get());
No point in checking; scoped_ptr has an internal check.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:244: *out = capture_converter_->audio_frame()->data_;
Is |out| already allocated? Why not just convert directly into |out| instead of the AudioFrame? It doesn't look like you're actually using anything from the AudioFrame despite setting it all up in the converter. Leaving a stale audio_frame() around across Convert() calls kind of smells too.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:256: if (!audio_processing_.get())
No more .get().

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:281: NOTREACHED() << "ProcessStream() error: " << err;
DLOG(ERROR) instead?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:284: // TODO(xians): Fixed the AGC, typing detectin, audio level calculation,
Comment doesn't make sense?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:288: void WebRtcAudioProcessor::FeedRenderDataToAudioProcessing(
This method isn't defined in the header file?? Is it even being used?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:292: if (!audio_processing_.get() ||
Remove .get()

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:305: DCHECK(render_converter_.get());
Unnecessary

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:308: scoped_ptr<media::AudioBus> data_bus = media::AudioBus::Create(
Avoid allocating this every callback; it's slow to do so.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:319: void WebRtcAudioProcessor::InitializeAudioProcessingModule(
I have no idea about any of these settings below; someone from WebRTC should review this part.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:321: const CommandLine& command_line = *CommandLine::ForCurrentProcess();
Why the const&? Just use ->

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:328: bool enable_aec = GetPropertyFromConstraints(
const all of these?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:341: #if defined(IOS) || defined(ANDROID)
Use #if .. #else, and const these too.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:386: void WebRtcAudioProcessor::InitializeRenderConverterIfNeeded(
Is this necessary? source_params_ and sink_params_ shouldn't change, right?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor.h (right):

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:34: const webrtc::MediaConstraintsInterface* constraints);
Should this be a const& instead?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:46: bool ProcessAndConsume10MsData(int capture_audio_delay_ms,
Are you sure you want 10ms in the name? Kind of a pain if it ever changes. :)

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:62: int render_delay_ms);
How about using base::TimeDelta everywhere instead of xxx_ms?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.h:75: const webrtc::MediaConstraintsInterface* constraints);
const&?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor_unittest.cc (right):

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor_unittest.cc:75: for (size_t i = 0; i < ARRAYSIZE_UNSAFE(kDefaultAudioConstraints); ++i) {
Just arraysize() should be fine here since it's a global value.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor_unittest.cc:85: virtual void SetUp() OVERRIDE {
Why SetUp instead of the constructor?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor_unittest.cc:86: CommandLine::ForCurrentProcess()->AppendSwitch(
Is this the right way to do this? Will this affect other tests? Should you instead have an enable_for_testing() method?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor_unittest.cc:104: // Read the audio data from a file.
There's a lot of common setup here; move it to a class method?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor_unittest.cc:118: audio_processor->FeedRenderDataToAudioProcessing(
Not sure how this compiles??

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor_unittest.cc:123: // Process and consume the data in the processor.
Ditto for all this.
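Dale's ThreadChecker suggestion boils down to a cheap debug-time assertion that a method runs on the thread that owns the object. A minimal stand-alone analogue (hypothetical; Chromium's real base::ThreadChecker is more featureful and DCHECK-based) looks like:

```cpp
// Minimal analogue of base::ThreadChecker: remembers the constructing
// thread and lets methods verify they run on that same thread. Hypothetical
// sketch, not Chromium code.
#include <cassert>
#include <thread>

class SimpleThreadChecker {
 public:
  SimpleThreadChecker() : owner_(std::this_thread::get_id()) {}
  bool CalledOnValidThread() const {
    return std::this_thread::get_id() == owner_;
  }

 private:
  const std::thread::id owner_;
};

// Usage: a class documents and enforces its threading model, which is what
// the review asks of WebRtcAudioConverter.
class Converter {
 public:
  bool Convert() {
    // DCHECK-style guard: converting off the owning thread is a bug.
    return thread_checker_.CalledOnValidThread();
  }

 private:
  SimpleThreadChecker thread_checker_;
};
```

In Chromium the check would be `DCHECK(thread_checker_.CalledOnValidThread());` so it compiles away in release builds.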
Hi Dale, I have addressed most of your comments, but I am missing 3 of the comments in the unittest since I had to leave the office early today. I will address them in the next version; hope that is fine. Meanwhile, would you please take a second look? Thanks, SX

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
File content/renderer/media/webrtc_audio_processor.cc (right):

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:37: // Extract all these methods to a helper class.
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Mark as TODO(xians) or do now.

Done, extracted the methods to a separate util file. The methods will grow substantially as we start supporting more features.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:39: DCHECK(audio_processing);
On 2013/11/06 00:21:18, DaleCurtis wrote:
> No point in DCHECK() if you're just going to deref immediately below. Remove
> here and elsewhere.

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:50: #endif
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Remove return above and use:
>
> #else
> ...
> #endif

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:101: LOG(ERROR) << "Fail to start AEC debug recording";
On 2013/11/06 00:21:18, DaleCurtis wrote:
> remove debug text? DLOG() ? Ditto below.

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:112: : public media::AudioConverter::InputCallback {
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Add a thread_checker for construction/destruction or at least document which
> thread is calling what. It's important that access to audio_converter_ is
> thread safe as it doesn't include a lock.

Done, added a thread check in the destructor. Thanks for the info; WebRtcAudioConverter is thread safe in terms of how it is being used in this class.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:130: kMaxNumberOfBuffersInFifo * sink_params_.frames_per_buffer());
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Consistently use member variables. I.e., use source_params_ if you're going to
> use sink_params_.

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:137: ~WebRtcAudioConverter() {
On 2013/11/06 00:21:18, DaleCurtis wrote:
> virtual

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:142: DCHECK(fifo_->frames() + audio_source->frames() <= fifo_->max_frames());
On 2013/11/06 00:21:18, DaleCurtis wrote:
> AudioFifo will check this already.

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:156: audio_wrapper_->ToInterleaved(audio_wrapper_->frames(), 2,
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Instead of 2 maybe sizeof(*(data_[0])) ?

Done, using sink_params_.bits_per_sample() / 8.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:168: webrtc::AudioFrame* audio_frame() { return &audio_frame_; }
On 2013/11/06 00:21:18, DaleCurtis wrote:
> const& as well?

No, we can't. webrtc::AudioProcessing takes the pointer to the audio frame as input, processes the content, and modifies it directly.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:180: // The first Convert() can trigger ProvideInput two times, use SincResampler
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Elaborate? Comment should be a TODO() if it's something that needs fixing.

I am not sure why, but the first audio_converter_->Convert(audio_wrapper_.get()) can trigger ProvideInput twice; adding a TODO to figure out why.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:191: // TODO(xians): consider using SincResampler to save some memcpy.
On 2013/11/06 00:21:18, DaleCurtis wrote:
> What does this mean? SincResampler is automatically invoked by AudioConverter
> if sample rates differ.

I am only targeting the case where resampling is needed. Since AudioConverter is designed to handle multiple InputCallbacks, it needs to get the data from all the inputs, mix them, downmix (if needed), then resample. But this WebRtcAudioProcessor::WebRtcAudioConverter only takes one InputCallback as input, so we could probably save some memcpy by avoiding the intermediate buffering, though I am not very sure it is worth the effort.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:193: scoped_ptr<media::AudioConverter> audio_converter_;
On 2013/11/06 00:21:18, DaleCurtis wrote:
> AudioConverter and AudioFifo can be stack allocated, no need for scoped_ptr

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:201: WebRtcAudioProcessor::WebRtcAudioProcessor(
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Again, it'd be nice to have a ThreadChecker or some DCHECKs for which message
> loop you're on.

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:216: int sink_sample_rate = audio_processing_.get() ?
On 2013/11/06 00:21:18, DaleCurtis wrote:
> get shouldn't be necessary anymore.

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:218: media::ChannelLayout sink_channel_layout = audio_processing_.get() ?
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Comments? Why the separate path for when audio_processing_ is available?

Done, added the comment. When webrtc::audio_processing is valid, it means some processing components are on; the output format of the post-processed data is 32 kHz mono for desktops and 16 kHz mono for Android. When webrtc::audio_processing is NULL, we do no processing on the data, so the output format is the same as the input format.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:230: DCHECK(capture_converter_.get());
On 2013/11/06 00:21:18, DaleCurtis wrote:
> No point in checking, scoped_ptr has an internal check.

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:244: *out = capture_converter_->audio_frame()->data_;
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Is out already allocated? Why not just convert directly into out instead of the
> AudioFrame? It doesn't look like you're actually using anything from the
> AudioFrame despite setting it all up in the converter.
>
> Leaving a stale audio_Frame() around across Convert() calls kind of smells to.

|out| is a pointer to the data in the AudioFrame. This is because webrtc::AudioProcessing only takes an AudioFrame as input; the AudioFrame is used like:
audio_processing_->ProcessStream(capture_converter_->audio_frame());
audio_processing_->AnalyzeReverseStream(render_converter_->audio_frame());

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:256: if (!audio_processing_.get())
On 2013/11/06 00:21:18, DaleCurtis wrote:
> No more .get().

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:281: NOTREACHED() << "ProcessStream() error: " << err;
On 2013/11/06 00:21:18, DaleCurtis wrote:
> DLOG(ERROR) instead?

We want to DCHECK this as well; is it fine to keep NOTREACHED()?

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:284: // TODO(xians): Fixed the AGC, typing detectin, audio level calculation,
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Comment doesn't make sense?

Done, changed the comment.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:288: void WebRtcAudioProcessor::FeedRenderDataToAudioProcessing(
On 2013/11/06 00:21:18, DaleCurtis wrote:
> This method isn't defined in the header file?? Is it even being used?

I forgot to update the naming. Fixed now.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:292: if (!audio_processing_.get() ||
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Remove .get()

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:305: DCHECK(render_converter_.get());
On 2013/11/06 00:21:18, DaleCurtis wrote:
> Unnecessary

Done.

https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w...
content/renderer/media/webrtc_audio_processor.cc:308: scoped_ptr<media::AudioBus> data_bus = media::AudioBus::Create( On 2013/11/06 00:21:18, DaleCurtis wrote: > Avoid allocating this every callback, it's slow to do so. Done. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:319: void WebRtcAudioProcessor::InitializeAudioProcessingModule( On 2013/11/06 00:21:18, DaleCurtis wrote: > I have no idea about any of these settings below, someone from WebRTC should > review this part. OK, I will ask Henrik G to review this. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:321: const CommandLine& command_line = *CommandLine::ForCurrentProcess(); On 2013/11/06 00:21:18, DaleCurtis wrote: > Why the const&? Just use -> Done. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:328: bool enable_aec = GetPropertyFromConstraints( On 2013/11/06 00:21:18, DaleCurtis wrote: > const all these? Done. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:341: #if defined(IOS) || defined(ANDROID) On 2013/11/06 00:21:18, DaleCurtis wrote: > Use #if .. #else and const these too. Done. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:386: void WebRtcAudioProcessor::InitializeRenderConverterIfNeeded( On 2013/11/06 00:21:18, DaleCurtis wrote: > Is this necessary? source_params_ and sink_params_ shouldn't change right? Yes, it is necessary. 
The first PushRenderData(const int16* render_audio, int sample_rate, int number_of_channels, int number_of_frames, int render_delay_ms) will trigger the initialization of the render_converter_, then the source_params might be changed on the fly (this is not completely clear yet, depending on how we implement the render mixer for webrtc). https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:34: const webrtc::MediaConstraintsInterface* constraints); On 2013/11/06 00:21:18, DaleCurtis wrote: > Should this be a const& instead? It is following the caller WebRtcLocalAudioTrack, which has: scoped_refptr<WebRtcLocalAudioTrack> Create( const std::string& id, const scoped_refptr<WebRtcAudioCapturer>& capturer, WebAudioCapturerSource* webaudio_source, webrtc::AudioSourceInterface* track_source, const webrtc::MediaConstraintsInterface* constraints); I believe it is because the constraints could be NULL, so it is taken as a pointer, but things might have changed and a const reference might be more appropriate. How about keeping it this way in this CL and changing the whole chain of code as a follow-up? https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:46: bool ProcessAndConsume10MsData(int capture_audio_delay_ms, On 2013/11/06 00:21:18, DaleCurtis wrote: > Are you sure you want 10ms in the name. Kind of a pain if it ever changes :) I am glad to remove 10ms. Done https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:62: int render_delay_ms); On 2013/11/06 00:21:18, DaleCurtis wrote: > How about using base::TimeDelta everywhere instead of xxx_ms? May I ask why base::TimeDelta is preferred? 
I think all the renderer code accepts a xxx_ms delay value, starting from RenderCallback and CaptureCallback, and WebRtc also accepts an int; I am not sure why we want to use a base::TimeDelta and switch back and forth. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:75: const webrtc::MediaConstraintsInterface* constraints); On 2013/11/06 00:21:18, DaleCurtis wrote: > const&? a separate refactor CL? https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:75: for (size_t i = 0; i < ARRAYSIZE_UNSAFE(kDefaultAudioConstraints); ++i) { On 2013/11/06 00:21:18, DaleCurtis wrote: > just arraysize() should be fine here since it's a global value. The compiler complains about: no matching function for call to ‘ArraySizeHelper(const content::{anonymous}::<anonymous struct> [5])’ https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:85: virtual void SetUp() OVERRIDE { On 2013/11/06 00:21:18, DaleCurtis wrote: > Why SetUp instead of constructor? Fine with using the constructor. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:86: CommandLine::ForCurrentProcess()->AppendSwitch( On 2013/11/06 00:21:18, DaleCurtis wrote: > Is this the right way to do this? Will this affect other tests? Should you > instead have a enable_for_testing() method? I am going to see if this will affect other content unittests, and might have to switch to enable_for_testing() if this approach turns on the switch for other unittests. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... 
content/renderer/media/webrtc_audio_processor_unittest.cc:104: // Read the audio data from a file. On 2013/11/06 00:21:18, DaleCurtis wrote: > There's a lot of common setup here, move to a class method? OK, I will address this comment in a later version. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:118: audio_processor->FeedRenderDataToAudioProcessing( On 2013/11/06 00:21:18, DaleCurtis wrote: > Not sure how this compiles?? Done.
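The ToInterleaved() discussion earlier (using sink_params_.bits_per_sample() / 8 instead of a hard-coded 2) can be illustrated with a minimal stand-in. This is not Chromium's media::AudioBus implementation, just a sketch of the same idea: interleave planar float channels into int16 samples, with the per-sample byte width of 2 following from 16 bits per sample. The clipping and scaling here are simplified assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of planar-float-to-interleaved-int16 conversion, analogous to the
// media::AudioBus::ToInterleaved() call under review. Each output sample
// occupies bits_per_sample / 8 == 2 bytes (int16_t).
std::vector<int16_t> ToInterleavedInt16(
    const std::vector<std::vector<float>>& planar, int frames) {
  const int channels = static_cast<int>(planar.size());
  std::vector<int16_t> interleaved(frames * channels);
  for (int f = 0; f < frames; ++f) {
    for (int ch = 0; ch < channels; ++ch) {
      float s = planar[ch][f];
      // Clip to [-1, 1] before scaling.
      if (s > 1.0f) s = 1.0f;
      if (s < -1.0f) s = -1.0f;
      // Scale to the asymmetric int16 range.
      interleaved[f * channels + ch] =
          static_cast<int16_t>(s * ((s < 0) ? 32768.0f : 32767.0f));
    }
  }
  return interleaved;
}
```

The key point for the review is that the byte width falls out of the sample type rather than being a magic constant.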
https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:191: // TODO(xians): consider using SincResampler to save some memcpy. On 2013/11/06 16:45:14, xians1 wrote: > On 2013/11/06 00:21:18, DaleCurtis wrote: > > What does this mean? SincResampler is automatically invoked by AudioConverter > > if sample rates differ. > > I am only targeting the case where resampling is needed. > > And since AudioConverter is designed to handle multiple InputCallbacks, it needs > to get the data from all the inputs, mix them, down mixing(if needed), then > resampling. > But for this WebRtcAudioProcessor::WebRtcAudioConverter, it only takes one > InputCallback as input, so we probably can save some memcpy by avoid > intermediate buffering. > > Though I am not very sure if it worths the effort. If you just need resampling you can use MultiChannelResampler or just SincResampler for mono. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:244: *out = capture_converter_->audio_frame()->data_; On 2013/11/06 16:45:14, xians1 wrote: > On 2013/11/06 00:21:18, DaleCurtis wrote: > > Is out already allocated? Why not just convert directly into out instead of > the > > AudioFrame? It doesn't look like you're actually using anything from the > > AudioFrame despite setting it all up in the converter. > > > > Leaving a stale audio_Frame() around across Convert() calls kind of smells to. > > |out| is a pointer to the data in AudioFrame. 
> > It is because webrtc::AudioProcessing only takes AudioFrame as input, the > AudioFrame is used like: > audio_processing_->ProcessStream(capture_converter_->audio_frame()); > audio_processing_->AnalyzeReverseStream(render_converter_->audio_frame()); > > I think the AudioFrame should be owned by WebRtcAudioProcessor and just the out* passed into the converter. It's weird for the Converter to hold onto an AudioFrame for the sole purpose of an external consumer. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:281: NOTREACHED() << "ProcessStream() error: " << err; On 2013/11/06 16:45:14, xians1 wrote: > On 2013/11/06 00:21:18, DaleCurtis wrote: > > DLOG(ERROR) instead? > > We want to DCHECK this as well, is it fine to keep NOTREACHED()? Why not just DCHECK(err) << "ProcessStream() error: " ... https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:34: const webrtc::MediaConstraintsInterface* constraints); On 2013/11/06 16:45:14, xians1 wrote: > On 2013/11/06 00:21:18, DaleCurtis wrote: > > Should this be a const& instead? > > It is following the caller WebRtcLocalAudioTrack, which has: > scoped_refptr<WebRtcLocalAudioTrack> Create( > const std::string& id, > const scoped_refptr<WebRtcAudioCapturer>& capturer, > WebAudioCapturerSource* webaudio_source, > webrtc::AudioSourceInterface* track_source, > const webrtc::MediaConstraintsInterface* constraints); > > I believe it is because the constraints can be NULL before, so it takes it as a > pointer, but things might have been changed and a const reference might be more > appropriate. > > How about keeping this way in this CL and change the whole chain of code as a > follow up? 
sgtm https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:62: int render_delay_ms); On 2013/11/06 16:45:14, xians1 wrote: > On 2013/11/06 00:21:18, DaleCurtis wrote: > > How about using base::TimeDelta everywhere instead of xxx_ms? > > May I ask why base::TimeDelta is preferred? > I think all the renderer code is accepting a xxx_ms as delay value, starting > from RenderCallback and CaptureCallback, and WebRtc also accept a int, I am not > sure why we want to use a base::TimeDelta and switch back and forth. As time goes forward we want to replace all _ms values with TimeDelta. I want to change all Render() calls to use base::TimeDelta too. https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:49: fifo_.reset(new media::AudioFifo(source_params_.channels(), buffer_size)); This could also be stack allocated if you want. Just need to move buffer_size calculation inline. https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:64: bool Convert() { Comment on what thread this runs on? Ditto for Push(). https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:99: // TODO(xians): Figure out why the first Convert() triggers ProvideInput Probably because output frames > input_frames * input_rate/output_rate. https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:141: int sink_sample_rate = audio_processing_ ? const these two.
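Dale's explanation of the double ProvideInput ("output frames > input_frames * input_rate/output_rate") can be made concrete with a small arithmetic sketch. The numbers below are illustrative assumptions (10 ms buffers, a 44.1 kHz source feeding a 16 kHz sink, and a resampler that holds some priming frames of history before emitting output); they are not taken from the CL.

```cpp
#include <cassert>
#include <cmath>

// How many times must the converter pull a source-sized input buffer to
// produce its first sink-sized output buffer? If the frames required
// (including resampler priming history) exceed one source buffer, the
// first Convert() triggers ProvideInput() more than once.
int ProvideInputCallsForFirstConvert(int source_rate, int sink_rate,
                                     int source_frames_per_buffer,
                                     int sink_frames_per_buffer,
                                     int resampler_priming_frames) {
  // Source frames needed for one sink buffer, rounded up, plus priming.
  const int needed =
      static_cast<int>(std::ceil(static_cast<double>(sink_frames_per_buffer) *
                                 source_rate / sink_rate)) +
      resampler_priming_frames;
  // Input is pulled one source buffer at a time.
  return (needed + source_frames_per_buffer - 1) / source_frames_per_buffer;
}
```

With a 44.1 kHz source (441 frames per 10 ms) and a 16 kHz sink (160 frames per 10 ms), one sink buffer already needs a full source buffer, so any priming frames push the first Convert() to a second ProvideInput() call.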
Hi Dale, how about now? I can change the code to base::TimeDelta once you confirm that the whole replacement is planned for the near future. BR, SX https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:191: // TODO(xians): consider using SincResampler to save some memcpy. On 2013/11/07 01:36:00, DaleCurtis wrote: > On 2013/11/06 16:45:14, xians1 wrote: > > On 2013/11/06 00:21:18, DaleCurtis wrote: > > > What does this mean? SincResampler is automatically invoked by > AudioConverter > > > if sample rates differ. > > > > I am only targeting the case where resampling is needed. > > > > And since AudioConverter is designed to handle multiple InputCallbacks, it > needs > > to get the data from all the inputs, mix them, down mixing(if needed), then > > resampling. > > But for this WebRtcAudioProcessor::WebRtcAudioConverter, it only takes one > > InputCallback as input, so we probably can save some memcpy by avoid > > intermediate buffering. > > > > Though I am not very sure if it worths the effort. > > If you just need resampling you can use MultiChannelResampler or just > SincResampler for mono. We need down-mixing (if input is stereo) and resampling (SincResampler), so it is more or less like AudioConverter. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:244: *out = capture_converter_->audio_frame()->data_; On 2013/11/07 01:36:00, DaleCurtis wrote: > On 2013/11/06 16:45:14, xians1 wrote: > > On 2013/11/06 00:21:18, DaleCurtis wrote: > > > Is out already allocated? Why not just convert directly into out instead of > > the > > > AudioFrame? It doesn't look like you're actually using anything from the > > > AudioFrame despite setting it all up in the converter. 
> > > > > > Leaving a stale audio_Frame() around across Convert() calls kind of smells > to. > > > > |out| is a pointer to the data in AudioFrame. > > > > It is because webrtc::AudioProcessing only takes AudioFrame as input, the > > AudioFrame is used like: > > audio_processing_->ProcessStream(capture_converter_->audio_frame()); > > audio_processing_->AnalyzeReverseStream(render_converter_->audio_frame()); > > > > > > I think the AudioFrame should be owned by WebRtcAudioProcessor and just the out* > passed into the converter. It's weird for the Converter to hold onto an > AudioFrame for the sole purpose of an external consumer. Done. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:281: NOTREACHED() << "ProcessStream() error: " << err; On 2013/11/07 01:36:00, DaleCurtis wrote: > On 2013/11/06 16:45:14, xians1 wrote: > > On 2013/11/06 00:21:18, DaleCurtis wrote: > > > DLOG(ERROR) instead? > > > > We want to DCHECK this as well, is it fine to keep NOTREACHED()? > > Why not just DCHECK(err) << "ProcessStream() error: " ... Done. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:62: int render_delay_ms); On 2013/11/07 01:36:00, DaleCurtis wrote: > On 2013/11/06 16:45:14, xians1 wrote: > > On 2013/11/06 00:21:18, DaleCurtis wrote: > > > How about using base::TimeDelta everywhere instead of xxx_ms? > > > > May I ask why base::TimeDelta is preferred? > > I think all the renderer code is accepting a xxx_ms as delay value, starting > > from RenderCallback and CaptureCallback, and WebRtc also accept a int, I am > not > > sure why we want to use a base::TimeDelta and switch back and forth. > > As time goes forward we want to replace all _ms values with TimeDelta. 
> I want to change all Render() calls to use base::TimeDelta too. Could you please explain why you think base::TimeDelta is a better fit? I know webrtc and pepper flash need it to do echo cancellation and video-audio sync (webrtc only). And webrtc takes a xxx_ms int; do some other clients want higher precision? Anyhow, I think it is also fine to make it base::TimeDelta if we have a concrete plan for this. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:86: CommandLine::ForCurrentProcess()->AppendSwitch( On 2013/11/06 16:45:14, xians1 wrote: > On 2013/11/06 00:21:18, DaleCurtis wrote: > > Is this the right way to do this? Will this affect other tests? Should you > > instead have a enable_for_testing() method? > > I am going to see if this will affect other content unittest, and might have to > switch to enable_for_testing() if this approach turns on the switch for other > unittest. It proved to be the correct approach, and setting up the switch here won't affect other content unittests. https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:104: // Read the audio data from a file. On 2013/11/06 16:45:14, xians1 wrote: > On 2013/11/06 00:21:18, DaleCurtis wrote: > > There's a lot of common setup here, move to a class method? > > OK, I will address this comment in a later version. Done https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:123: // Process and consume the data in the processor. On 2013/11/06 00:21:18, DaleCurtis wrote: > Ditto for all this. Done. https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... 
File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:49: fifo_.reset(new media::AudioFifo(source_params_.channels(), buffer_size)); On 2013/11/07 01:36:00, DaleCurtis wrote: > This could also be stack allocated if you want. Just need to move buffer_size > calculation inline. I forgot to explain why I would like to keep it as a scoped_ptr: I want to keep the buffer_size calculation and its comment clear. But I can reconsider if you strongly want it to be stack allocated. https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:64: bool Convert() { On 2013/11/07 01:36:00, DaleCurtis wrote: > Comment on what thread this runs on? Ditto for Push(). Done. https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:99: // TODO(xians): Figure out why the first Convert() triggers ProvideInput On 2013/11/07 01:36:00, DaleCurtis wrote: > Probably because output frames > input_frames * input_rate/output_rate. Acknowledged. https://codereview.chromium.org/54383003/diff/310001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:141: int sink_sample_rate = audio_processing_ ? On 2013/11/07 01:36:00, DaleCurtis wrote: > const these two. Done.
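The base::TimeDelta discussion above can be sketched without Chromium headers by using std::chrono as a stand-in (base::TimeDelta likewise reduces to a single int64): keep the delay as a typed duration internally, and convert to a plain millisecond int only at the boundary with APIs that take one. The helper names here are hypothetical, not from the CL.

```cpp
#include <cassert>
#include <chrono>

// Typed internal representation of an audio delay. std::chrono is used
// here as an analogy for base::TimeDelta.
using Delay = std::chrono::microseconds;

// Hypothetical boundary helper: collapse the typed value into the int
// milliseconds that a WebRTC-style API would accept.
int ToWebRtcDelayMs(Delay delay) {
  return static_cast<int>(
      std::chrono::duration_cast<std::chrono::milliseconds>(delay).count());
}

// Typed arithmetic: two durations cannot silently mix units the way two
// bare ints (one in ms, one in us) can.
Delay TotalDelay(Delay capture, Delay render) {
  return capture + render;
}
```

The design point is that the conversion to int happens exactly once, at the edge, instead of every caller passing raw xxx_ms values around.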
Avi, may I have your stamp for content*.gypi files and content/browser/renderer_host/render_process_host_impl.cc Joi, content/public. Thanks, SX
https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/200001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:62: int render_delay_ms); On 2013/11/07 14:43:12, xians1 wrote: > On 2013/11/07 01:36:00, DaleCurtis wrote: > > On 2013/11/06 16:45:14, xians1 wrote: > > > On 2013/11/06 00:21:18, DaleCurtis wrote: > > > > How about using base::TimeDelta everywhere instead of xxx_ms? > > > > > > May I ask why base::TimeDelta is preferred? > > > I think all the renderer code is accepting a xxx_ms as delay value, starting > > > from RenderCallback and CaptureCallback, and WebRtc also accept a int, I am > > not > > > sure why we want to use a base::TimeDelta and switch back and forth. > > > > As time goes forward we want to replace all _ms values with TimeDelta. I want > to > > change all Render() calls to use base::TimeDelta too. > > Could you please explain why you think base::TimeDelta is a better fit? > I know webrtc and pepper flash need it to do echo cancellation and video-audio > sync (webrtc only). And webrtc is taking a xxx_ms int, does some other clients > want a higher precision? > Anyhow, I think it is also fine to make it base::TimeDelta if we have a concrete > plan for this. You should be using base::TimeDelta (or Time, TimeTicks) for all time storage and calculations nowadays. It's the standard way for handling time in Chrome. Which means it's well tested and well understood by other Chrome developers. It reduces to a single int64 value, so you don't need to worry about any performance issues. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:62: // calling Convert(). 
Isn't this called on the capture thread while Convert() is called on the render thread? Also, please use consistent naming instead of "realtime audio thread". If Push() and Convert() must be called on the same thread, consider another ThreadChecker which is bound on Push() or Convert(). https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:101: // Called on realtime audio thread. Ditto. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:162: int16** out) { Instead of int16**, should this just be a WebRtcAudioFrame? Out is now pointing at an internally owned construct. It'd be better if whoever calls ProcessAndConsumeData owned the data structure. Alternatively you can copy the data out at this point. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:179: void WebRtcAudioProcessor::ProcessData(webrtc::AudioFrame* audio_frame, As with above, maybe having a DCHECK(capture_thread_checker_.CalledOnValidThread()); and DCHECK(render_thread_checker_.CalledOnValidThread()); might improve clarity of this code a lot. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:102: scoped_ptr<char[]> capture_data(new char[length]); You can use media/base/test_data_util.h to simplify this a bit if you want. Up to you. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:155: scoped_ptr<WebRtcAudioProcessor> audio_processor( Doesn't need to be a scoped_ptr. Ditto above. 
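The ThreadChecker pattern requested above (bind lazily to the first calling thread, DCHECK on every later call, DetachFromThread() to rebind) can be sketched with a minimal std-library stand-in. This is not Chromium's base::ThreadChecker, just an illustration of its lazy-binding contract.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Minimal stand-in for base::ThreadChecker: binds to the first thread
// that calls CalledOnValidThread(), and DetachFromThread() allows the
// next caller to rebind (the pattern used for Push()/Convert() above).
class SimpleThreadChecker {
 public:
  bool CalledOnValidThread() {
    std::lock_guard<std::mutex> lock(mutex_);
    if (!bound_) {
      // Lazy bind: the first caller becomes the "valid" thread.
      id_ = std::this_thread::get_id();
      bound_ = true;
    }
    return id_ == std::this_thread::get_id();
  }

  void DetachFromThread() {
    std::lock_guard<std::mutex> lock(mutex_);
    bound_ = false;  // Next CalledOnValidThread() rebinds.
  }

 private:
  std::mutex mutex_;
  bool bound_ = false;
  std::thread::id id_;
};
```

The real base::ThreadChecker is what the CL should use; the sketch only shows why DetachFromThread() in the constructor (as suggested elsewhere in this review) is enough, since the first real audio-thread call rebinds the checker.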
https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_util.cc (right): https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_util.cc:5: #include "content/renderer/media/webrtc_audio_processor_util.h" Instead of _util, maybe _options ? https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_util.cc:45: DCHECK(audio_processing); Remove dcheck here and below.
Thanks Dale. PTAL. And a gentle ping to others. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:62: // calling Convert(). On 2013/11/07 20:44:08, DaleCurtis wrote: > Isn't this called on the capture thread while Convert() is called on the render > thread? Also, Please use consistent naming instead of "realtime audio thread". > There are two Converters used by the processor: capture_converter_ and render_converter_. capture_converter_ is used to convert the capture data, and render_converter_ is used to convert the render data. The Push() and Convert() methods of capture_converter_ are called on the capture audio thread, while render_converter_ is called on the render audio thread. I updated the comment, which hopefully makes it clearer. > If Push() and Convert() must be called on the same thread, consider another > ThreadChecker which is bound on Push() or Convert(). Done; I added another ThreadChecker, and I had to add a flag to avoid unnecessary DetachFromThread() calls. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:101: // Called on realtime audio thread. On 2013/11/07 20:44:08, DaleCurtis wrote: > Ditto. The same. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:162: int16** out) { On 2013/11/07 20:44:08, DaleCurtis wrote: > Instead of int16**, should this just be a WebRtcAudioFrame? Out is now pointing > at an internally owned construct. it'd be better if whoever calls > ProcessAndConsumeData owned the data structure. > out is supposed to be used by all clients of the track, like webaudio, local renderer, webrtc peer connection, so it'd better be a generic type. 
WebRtcAudioFrame is not a good candidate. > Alternatively you can copy the data out at this point. I would like to avoid the memcpy; if this does not look good to you, I can add an accessor in WebRtcAudioProcessor like: int16* output_data() const { return capture_frame_.data_; } And the client of WebRtcAudioProcessor can get the output data via the accessor. WDYT? https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:179: void WebRtcAudioProcessor::ProcessData(webrtc::AudioFrame* audio_frame, On 2013/11/07 20:44:08, DaleCurtis wrote: > As with above, maybe having a > DCHECK(capture_thread_checker_.CalledOnValidThread()); > > and > DCHECK(render_thread_checker_.CalledOnValidThread()); > > might improve clarity of this code a lot. Done. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:102: scoped_ptr<char[]> capture_data(new char[length]); On 2013/11/07 20:44:08, DaleCurtis wrote: > You can use media/base/test_data_util.h to simplify this a bit if you want. Up > to you. Thanks for the tips, but I think I will just keep it as it is now. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_unittest.cc:155: scoped_ptr<WebRtcAudioProcessor> audio_processor( On 2013/11/07 20:44:08, DaleCurtis wrote: > Doesn't need to be a scoped_ptr. Ditto above. Done. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_util.cc (right): https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... 
content/renderer/media/webrtc_audio_processor_util.cc:5: #include "content/renderer/media/webrtc_audio_processor_util.h" On 2013/11/07 20:44:08, DaleCurtis wrote: > Instead of _util, maybe _options ? Done. https://codereview.chromium.org/54383003/diff/450001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_util.cc:45: DCHECK(audio_processing); On 2013/11/07 20:44:08, DaleCurtis wrote: > Remove dcheck here and below. Done.
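The zero-copy accessor debated above (the processor owns the output frame; callers read it through output_data() instead of receiving a copy) can be sketched as follows. All names, the frame size, and the "processing" are hypothetical stand-ins, not the CL's code; the point is the lifetime contract the review asks to document: the returned pointer is only valid until the next ProcessAndConsumeData() call.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the accessor pattern: the processor owns its
// output buffer, ProcessAndConsumeData() fills it, and output_data()
// exposes it without a memcpy.
class FakeProcessor {
 public:
  static const int kFrames = 160;  // e.g. 10 ms of mono audio at 16 kHz.

  void ProcessAndConsumeData(const int16_t* input) {
    // Stand-in "processing": attenuate by half. Overwrites the previous
    // output, which is why earlier output_data() pointers go stale.
    for (int i = 0; i < kFrames; ++i)
      data_[i] = input[i] / 2;
  }

  // Zero-copy accessor into the internally owned frame. Valid only until
  // the next ProcessAndConsumeData() call.
  const int16_t* output_data() const { return data_.data(); }

 private:
  std::array<int16_t, kFrames> data_{};
};
```

This mirrors the trade-off in the thread: no memcpy per 10 ms block, at the cost of a documented validity window for the pointer.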
https://codereview.chromium.org/54383003/diff/600001/content/public/common/co... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/600001/content/public/common/co... content/public/common/content_switches.cc:920: // Enables audio processing in a MediaStreamTrack ? When this flag is on, aec, Was the ? supposed to be a dash or a period perhaps? Like this it looks weird. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:67: worker_thread_checker_.DetachFromThread(); You can simply do this from the constructor I think. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:174: render_thread_checker_.DetachFromThread(); I think you meant capture_thread_checker_.DetachFromThread(), but see my other comments about simply doing this in the constructor. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:238: render_thread_checker_.DetachFromThread(); I think you can simply call this in the constructor and remove the render_thread_detach_ member. Same for the capture thread one. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:338: // TODO, figure out if we need to handle the buffer size change. TODO needs an owner. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:50: int16** out); The semantics of |out| need to be documented. 
https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:59: void PushRenderData(const int16* render_audio, The semantics of |render_audio| need to be documented, in particular how many bytes does it need to have available, who is the owner and what guarantees are made about lifetime (presumably just that it's alive during the method call). https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:96: // TODO(xians): Can we get rid of the lock? It looks like you just need base::subtle::NoBarrier_Load() and base::subtle::NoBarrier_Store(). https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:119: // Used to DCHECK that some methods are called on the correct thread. Document which thread that is, since you do so for the other ThreadChecker objects. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:74: void StartAecDump(AudioProcessing* audio_processin) { audio_processin -> audio_processing https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:75: static const char kAecDumpFilename[] = "/tmp/audio.aecdump"; This won't work on Windows. Can you build a path using PathService? https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:80: void StopAecDump(AudioProcessing* audio_processin) { audio_processin -> audio_processing https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... 
File content/renderer/media/webrtc_audio_processor_options.h (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.h:23: bool GetPropertyFromConstraints( All of these methods need comments. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.h:27: void EnableEchoCancellation(AudioProcessing* audio_processing); I guess I don't understand the motivation here well enough. Why do these act on AudioProcessing instead of WebRtcAudioProcessor? Also, why aren't they just members of WebRtcAudioProcessing?
PTAL. Thanks, SX https://codereview.chromium.org/54383003/diff/600001/content/public/common/co... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/600001/content/public/common/co... content/public/common/content_switches.cc:920: // Enables audio processing in a MediaStreamTrack ? When this flag is on, aec, On 2013/11/08 13:36:40, Jói wrote: > Was the ? supposed to be a dash or a period perhaps? Like this it looks weird. Sorry, it should be a period. Fixed now. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:67: worker_thread_checker_.DetachFromThread(); On 2013/11/08 13:36:40, Jói wrote: > You can simply do this from the constructor I think. Done, and changed the name of destructor_thread_checker_ to create_thread_checker_. Thanks for the tips. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:174: render_thread_checker_.DetachFromThread(); On 2013/11/08 13:36:40, Jói wrote: > I think you meant capture_thread_checker_.DetachFromThread(), but see my other > comments about simply doing this in the constructor. Right, it should be capture_thread_checker_. Sorry for the copy and paste error. Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:238: render_thread_checker_.DetachFromThread(); On 2013/11/08 13:36:40, Jói wrote: > I think you can simply call this in the constructor and remove the > render_thread_detach_ member. Same for the capture thread one. Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... 
content/renderer/media/webrtc_audio_processor.cc:338: // TODO, figure out if we need to handle the buffer size change. On 2013/11/08 13:36:40, Jói wrote: > TODO needs an owner. Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:50: int16** out); On 2013/11/08 13:36:40, Jói wrote: > The semantics of |out| need to be documented. Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:59: void PushRenderData(const int16* render_audio, On 2013/11/08 13:36:40, Jói wrote: > The semantics of |render_audio| need to be documented, in particular how many > bytes does it need to have available, who is the owner and what guarantees are > made about lifetime (presumably just that it's alive during the method call). adding a comment like: // |render_audio| is the pointer to the render audio data, its format // is specified by |sample_rate|, |number_of_channels| and |number_of_frames|. // Called on the render audio thread. Let me know if it is insufficient. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:96: // TODO(xians): Can we get rid of the lock? On 2013/11/08 13:36:40, Jói wrote: > It looks like you just need base::subtle::NoBarrier_Load() and > base::subtle::NoBarrier_Store(). Dale would like to change all the xxx_ms into base::TimeDelta, so NoBarrier_Load() and NoBarrier_Store() might not work. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:119: // Used to DCHECK that some methods are called on the correct thread. 
On 2013/11/08 13:36:40, Jói wrote: > Document which thread that is, since you do so for the other ThreadChecker > objects. Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:74: void StartAecDump(AudioProcessing* audio_processin) { On 2013/11/08 13:36:40, Jói wrote: > audio_processin -> audio_processing Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:75: static const char kAecDumpFilename[] = "/tmp/audio.aecdump"; On 2013/11/08 13:36:40, Jói wrote: > This won't work on Windows. Can you build a path using PathService? Talked to Henrik G about this, and decided to keep exactly the same behaviours as what Libjingle does today: static const char kAecDumpByAudioOptionFilename[] = "/tmp/audio.aecdump"; #elif defined(ANDROID) static const char kAecDumpByAudioOptionFilename[] = "/sdcard/audio.aecdump"; #else static const char kAecDumpByAudioOptionFilename[] = "audio.aecdump"; #endif Joi, I have to admit that I don't know much about the PathService and FilePath, please point it out if what I am doing is not proper. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:80: void StopAecDump(AudioProcessing* audio_processin) { On 2013/11/08 13:36:40, Jói wrote: > audio_processin -> audio_processing Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_options.h (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... 
content/renderer/media/webrtc_audio_processor_options.h:23: bool GetPropertyFromConstraints( On 2013/11/08 13:36:40, Jói wrote: > All of these methods need comments. Done. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.h:27: void EnableEchoCancellation(AudioProcessing* audio_processing); On 2013/11/08 13:36:40, Jói wrote: > I guess I don't understand the motivation here well enough. Why do these act on > AudioProcessing instead of WebRtcAudioProcessor? Also, why aren't they just > members of WebRtcAudioProcessing? AudioProcessing is the webrtc component that does all the processing on the signal, like AEC, NS, AGC, etc. WebRtcAudioProcessor is the class that owns and uses AudioProcessing. The reason I extracted these methods out to a separate file is that the list of methods will grow substantially when we support more features. The list in this CL is just part of it. And the plan is to gradually support all the features behind a flag, then turn on the flag by default.
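The split described here, free helper functions configuring the AudioProcessing object that the processor owns, might look roughly like the sketch below. The AudioProcessing struct, its flags, and the EnableNoiseSuppression helper are hypothetical stand-ins (only EnableEchoCancellation is named in the CL), not the real webrtc API:

```cpp
#include <cassert>

// Hypothetical stand-in for webrtc::AudioProcessing; the real class
// exposes sub-components (echo_cancellation(), noise_suppression(), ...).
struct AudioProcessing {
  bool aec_enabled = false;
  bool ns_enabled = false;
};

// Free helper functions in the spirit of webrtc_audio_processor_options.h.
// Keeping them outside the processor class lets the list grow per feature
// without bloating the processor's interface.
void EnableEchoCancellation(AudioProcessing* audio_processing) {
  audio_processing->aec_enabled = true;
}
void EnableNoiseSuppression(AudioProcessing* audio_processing) {
  audio_processing->ns_enabled = true;
}

// The processor owns the AudioProcessing object and applies
// getUserMedia constraints by delegating to the helpers above.
class WebRtcAudioProcessor {
 public:
  void ApplyConstraints(bool echo_cancellation, bool noise_suppression) {
    if (echo_cancellation)
      EnableEchoCancellation(&audio_processing_);
    if (noise_suppression)
      EnableNoiseSuppression(&audio_processing_);
  }
  const AudioProcessing& audio_processing() const { return audio_processing_; }

 private:
  AudioProcessing audio_processing_;
};
```

The point of the design, as the reply explains, is that the helpers act on the webrtc object directly so new options can be added without touching the processor's public surface.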
https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:59: void PushRenderData(const int16* render_audio, On 2013/11/08 15:39:30, xians1 wrote: > On 2013/11/08 13:36:40, Jói wrote: > > The semantics of |render_audio| need to be documented, in particular how many > > bytes does it need to have available, who is the owner and what guarantees are > > made about lifetime (presumably just that it's alive during the method call). > > adding a comment like: > // |render_audio| is the pointer to the render audio data, its format > // is specified by |sample_rate|, |number_of_channels| and |number_of_frames|. > // Called on the render audio thread. > > Let me know if it is insufficient. Looks good. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:96: // TODO(xians): Can we get rid of the lock? On 2013/11/08 15:39:30, xians1 wrote: > On 2013/11/08 13:36:40, Jói wrote: > > It looks like you just need base::subtle::NoBarrier_Load() and > > base::subtle::NoBarrier_Store(). > > Dale would like to change all the xxx_ms into base::TimeDelta, so > NoBarrier_Load() and NoBarrier_Store() might not work. Your private cache could be an int64 that gets shifted to/from TimeDelta using TimeDelta::ToInternalValue/FromInternalValue. Alternatively, if you are going to keep using the delta as a millisecond value (currently you retrieve it using InMilliseconds()) then maybe your cache should stay as an integer value of milliseconds while the interface to your class uses only TimeDelta. A lock would be OK if contention was minimal, but it seems like there might be a fair bit of contention? https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... 
File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:75: static const char kAecDumpFilename[] = "/tmp/audio.aecdump"; On 2013/11/08 15:39:30, xians1 wrote: > On 2013/11/08 13:36:40, Jói wrote: > > This won't work on Windows. Can you build a path using PathService? > > Talked to Henrik G about this, and decided to keep exactly the same behaviours > as what Libjingle does today: > static const char kAecDumpByAudioOptionFilename[] = "/tmp/audio.aecdump"; > #elif defined(ANDROID) > static const char kAecDumpByAudioOptionFilename[] = "/sdcard/audio.aecdump"; > #else > static const char kAecDumpByAudioOptionFilename[] = "audio.aecdump"; > #endif > > > Joi, I have to admit that I don't know much about the PathService and FilePath, > please point it out if what I am doing is not proper. Looks good. https://codereview.chromium.org/54383003/diff/690001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/690001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:45: // the post-processed data if the method is returning a true. What about ownership? Is the data allocated by this object and ownership passed to the caller?
lgtm % nits. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:96: // TODO(xians): Can we get rid of the lock? On 2013/11/08 16:44:22, Jói wrote: > On 2013/11/08 15:39:30, xians1 wrote: > > On 2013/11/08 13:36:40, Jói wrote: > > > It looks like you just need base::subtle::NoBarrier_Load() and > > > base::subtle::NoBarrier_Store(). > > > > Dale would like to change all the xxx_ms into base::TimeDelta, so > > NoBarrier_Load() and NoBarrier_Store() might not work. > > Your private cache could be an int64 that gets shifted to/from TimeDelta using > TimeDelta::ToInternalValue/FromInternalValue. > > Alternatively, if you are going to keep using the delta as a millisecond value > (currently you retrieve it using InMilliseconds()) then maybe your cache should > stay as an integer value of milliseconds while the interface to your class uses > only TimeDelta. > > A lock would be OK if contention was minimal, but it seems like there might be a > fair bit of contention? I think avoiding lock contention (which may be an issue here given the 3 threads) is more important than keeping everything as a base::TimeDelta. Feel free to change it back to an int or Atomic32 or whatever you need. https://codereview.chromium.org/54383003/diff/690001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/690001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:49: bool ProcessAndConsumeData(const base::TimeDelta& capture_delay, base::TimeDelta can be passed by value efficiently instead. See base/time/time.h
Thanks Dale, I switched to Atomic32 to remove the lock. Joi, is it fine to use Atomic32, NoBarrier_Load() and NoBarrier_Store()? SX https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:96: // TODO(xians): Can we get rid of the lock? On 2013/11/08 16:44:22, Jói wrote: > On 2013/11/08 15:39:30, xians1 wrote: > > On 2013/11/08 13:36:40, Jói wrote: > > > It looks like you just need base::subtle::NoBarrier_Load() and > > > base::subtle::NoBarrier_Store(). > > > > Dale would like to change all the xxx_ms into base::TimeDelta, so > > NoBarrier_Load() and NoBarrier_Store() might not work. > > Your private cache could be an int64 that gets shifted to/from TimeDelta using > TimeDelta::ToInternalValue/FromInternalValue. > > Alternatively, if you are going to keep using the delta as a millisecond value > (currently you retrieve it using InMilliseconds()) then maybe your cache should > stay as an integer value of milliseconds while the interface to your class uses > only TimeDelta. > > A lock would be OK if contention was minimal, but it seems like there might be a > fair bit of contention? Thanks for the tips. I am happy to change it to Atomic32 to remove the lock, please take another look. https://codereview.chromium.org/54383003/diff/600001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:96: // TODO(xians): Can we get rid of the lock? On 2013/11/08 21:00:48, DaleCurtis wrote: > On 2013/11/08 16:44:22, Jói wrote: > > On 2013/11/08 15:39:30, xians1 wrote: > > > On 2013/11/08 13:36:40, Jói wrote: > > > > It looks like you just need base::subtle::NoBarrier_Load() and > > > > base::subtle::NoBarrier_Store(). 
> > > > > > Dale would like to change all the xxx_ms into base::TimeDelta, so > > > NoBarrier_Load() and NoBarrier_Store() might not work. > > > > Your private cache could be an int64 that gets shifted to/from TimeDelta using > > TimeDelta::ToInternalValue/FromInternalValue. > > > > Alternatively, if you are going to keep using the delta as a millisecond value > > (currently you retrieve it using InMilliseconds()) then maybe your cache > should > > stay as an integer value of milliseconds while the interface to your class > uses > > only TimeDelta. > > > > A lock would be OK if contention was minimal, but it seems like there might be > a > > fair bit of contention? > > I think avoiding lock contention (which may be an issue here given the 3 > threads) is more important than keeping everything as a base::TimeDelta. Feel > free to change it back to an int or Atomic32 or whatever you need. Done with changing the internal member to Atomic32 and remove the lock. https://codereview.chromium.org/54383003/diff/690001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/690001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:45: // the post-processed data if the method is returning a true. On 2013/11/08 16:44:22, Jói wrote: > What about ownership? Is the data allocated by this object and ownership passed > to the caller? This processor still owns the pointer, |out| just gets a pointer to the data and the data is guaranteed to outlive the method call. I added a comment to explain this. https://codereview.chromium.org/54383003/diff/690001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:49: bool ProcessAndConsumeData(const base::TimeDelta& capture_delay, On 2013/11/08 21:00:48, DaleCurtis wrote: > base::TimeDelta can be passed by value efficiently instead. See > base/time/time.h Done with passing the value.
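The borrow semantics settled on for |out| (the processor keeps ownership; the caller gets a pointer valid until the next call), together with Dale's pass-by-value point, could be sketched roughly as follows. std::chrono::milliseconds stands in for base::TimeDelta, a plain vector FIFO stands in for the real converter/APM pipeline, and the mono 16 kHz format is an assumption for brevity:

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <vector>

class WebRtcAudioProcessor {
 public:
  // Pushes |frames| samples into the internal FIFO (mono for brevity).
  void PushCaptureData(const int16_t* data, int frames) {
    fifo_.insert(fifo_.end(), data, data + frames);
  }

  // Returns true if at least 10 ms of data was available; |*out| then
  // points at a buffer owned by this object, valid until the next call.
  // capture_delay is passed by value: TimeDelta is a thin wrapper
  // around an int64, so there is no benefit to a const reference.
  bool ProcessAndConsumeData(std::chrono::milliseconds capture_delay,
                             int volume, int16_t** out) {
    const size_t chunk = kSampleRate / 100;  // 10 ms worth of frames.
    if (fifo_.size() < chunk)
      return false;
    out_buffer_.assign(fifo_.begin(), fifo_.begin() + chunk);
    fifo_.erase(fifo_.begin(), fifo_.begin() + chunk);
    (void)capture_delay;  // Forwarded to the real APM in the CL.
    (void)volume;         // Likewise.
    *out = out_buffer_.data();
    return true;
  }

 private:
  static const int kSampleRate = 16000;
  std::vector<int16_t> fifo_;
  std::vector<int16_t> out_buffer_;  // Owned here; callers only borrow.
};
```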
LGTM with a nit. https://codereview.chromium.org/54383003/diff/820001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/820001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:209: int total_delay_ms = capture_delay.InMilliseconds() + render_delay_ms; Should this be an int64 since TimeDelta::InMilliseconds returns an int64? I guess you have an assumption that it's not that large; should that be checked anywhere (i.e. in PushRenderData where you store it as an int32 in new_render_delay_ms)?
Joi, one question. https://codereview.chromium.org/54383003/diff/820001/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/820001/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:209: int total_delay_ms = capture_delay.InMilliseconds() + render_delay_ms; On 2013/11/11 14:45:22, Jói wrote: > Should this be an int64 since TimeDelta::InMilliseconds returns an int64? I > guess you have an assumption that it's not that large; should that be checked > anywhere (i.e. in PushRenderData where you store it as an int32 in > new_render_delay_ms)? Exactly, the normal delay should be around 50 ms, and not larger than 200 ms, so truncation should never happen. And WebRtc is taking an int, so there is not much reason to store an int64 here. I am not sure what to check; we get the delay value from the hardware and just pass it to AEC. In reality we do not care about the value and leave it to AEC to decide what to do with it. Is it fine to just leave this? Otherwise I suggest putting up a DLOG(WARNING) if the value is bigger than 1000. Wdyt?
Since you're storing it as an int32, it would make sense to read it into a temporary int64 variable and CHECK that it's less than MAX_INT (or whatever the std::limits equivalent is) before storing it (or the result of it plus some other 32-bit value) in an int32. You could have the DLOG in there as well, for sure. Cheers, Jói On Mon, Nov 11, 2013 at 3:04 PM, <xians@chromium.org> wrote: > > Joi, one question. > > > > https://codereview.chromium.org/54383003/diff/820001/ > content/renderer/media/webrtc_audio_processor.cc > File content/renderer/media/webrtc_audio_processor.cc (right): > > https://codereview.chromium.org/54383003/diff/820001/ > content/renderer/media/webrtc_audio_processor.cc#newcode209 > content/renderer/media/webrtc_audio_processor.cc:209: int total_delay_ms > = capture_delay.InMilliseconds() + render_delay_ms; > On 2013/11/11 14:45:22, Jói wrote: > >> Should this be an int64 since TimeDelta::InMilliseconds returns an >> > int64? I > >> guess you have an assumption that it's not that large; should that be >> > checked > >> anywhere (i.e. in PushRenderData where you store it as an int32 in >> new_render_delay_ms)? >> > > Exactly, the normal delay should be around 50 ms, and not larger than > 200 ms, so truncation should never happen. And WebRtc is taking an int, > so there is not much reason to store a int64 here. > > I am not sure what to check, we get the delay value from the hardware, > and just pass it to AEC. In reality we do not care about the value and > leave it to AEC to decide how to do with it. > > It is fine to just leave this? Otherwise I suggest put up a > DLOG(WARNING) if the value is bigger than 1000. Wdty? > > https://codereview.chromium.org/54383003/ > To unsubscribe from this group and stop receiving emails from it, send an email to chromium-reviews+unsubscribe@chromium.org.
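The shape of that suggestion, combined with the DLOG(WARNING) idea from the previous message, might look like this sketch. The function name and the 1000 ms threshold are illustrative only, and assert stands in for Chromium's CHECK:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <limits>

// Combine the capture-side and render-side delays. InMilliseconds()
// yields an int64, so do the arithmetic in 64 bits first, warn on
// implausibly large values, and only then narrow to int.
int ComputeTotalDelayMs(int64_t capture_delay_ms, int32_t render_delay_ms) {
  const int64_t total = capture_delay_ms + render_delay_ms;
  if (total > 1000)  // DLOG(WARNING) in the CL; threshold from the review.
    std::fprintf(stderr, "Large audio delay: %lld ms\n",
                 static_cast<long long>(total));
  // CHECK in Chromium: the narrowing below must never truncate.
  assert(total >= 0 && total <= std::numeric_limits<int32_t>::max());
  return static_cast<int>(total);
}
```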
Hi Joi, I have added a temp int64 and a check. Avi and Henrik, please take a look. Thanks, SX
Instead of INT_MAX, can you use std::numeric_limits<base::subtle::Atomic32>::max() ? On Mon, Nov 11, 2013 at 5:38 PM, <xians@chromium.org> wrote: > Hi Joi, I have added temp int64 and a check. > > Avi and Henrik, please take a look. > > Thanks, > SX > > https://codereview.chromium.org/54383003/
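Putting those pieces together, the lock-free delay cache could look roughly like the sketch below, where std::atomic with relaxed ordering stands in for base::subtle::NoBarrier_Store/NoBarrier_Load on an Atomic32, and assert stands in for CHECK:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <cstdint>
#include <limits>

// Lock-free render-delay cache shared between the render thread
// (writer) and the capture thread (reader).
class DelayCache {
 public:
  // Called on the render thread. std::chrono::milliseconds stands in
  // for base::TimeDelta; its count is 64-bit, so check the value fits
  // in 32 bits before the narrowing store.
  void SetRenderDelay(std::chrono::milliseconds delay) {
    const int64_t ms = delay.count();
    assert(ms >= 0 && ms <= std::numeric_limits<int32_t>::max());
    render_delay_ms_.store(static_cast<int32_t>(ms),
                           std::memory_order_relaxed);
  }

  // Called on the capture thread; a relaxed load mirrors NoBarrier_Load.
  int32_t render_delay_ms() const {
    return render_delay_ms_.load(std::memory_order_relaxed);
  }

 private:
  std::atomic<int32_t> render_delay_ms_{0};
};
```

As discussed above, a single 32-bit value needs no lock: the writer and reader never block each other, which avoids the contention Dale was worried about across the three threads.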
https://codereview.chromium.org/54383003/diff/820001/content/public/common/co... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/820001/content/public/common/co... content/public/common/content_switches.cc:923: // Enables audio processing in a MediaStreamTrack. When this flag is on, aec, Use caps for the abbreviations: AEC, NS and AGC. https://codereview.chromium.org/54383003/diff/820001/content/public/common/co... content/public/common/content_switches.cc:924: // Ns and AGC will be done per MediaStreamTrack instead of in PeerConnection. In PeerConnection isn't really true. I think it's more accurate to write something like "in the WebRTC audio stack library". https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:143: void WebRtcAudioProcessor::SetCaptureFormat( So this has to be called before the functions below. Or |capture_converter_| isn't valid. Is this commented in the header file? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:213: LOG(WARNING) << "Large audio delay, capture delay: " << capture_delay_ms Should we really log a warning here? What if it gets stuck in a large delay, lots of warnings. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:224: // TODO(xians): Add support for AGC, typing detectin, audio level calculation, Nit: detection https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:286: // Reset the audio processing to NULL if no audio processing component is I don't see a reset here. Old comment? 
https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:300: Nit: I think you can skip the blank line. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:32: // on the constraints, processes the data and outputs it in a unit of 10 ms What constraints? Please add that to the comment. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:32: // on the constraints, processes the data and outputs it in a unit of 10 ms Nit: This class/object doesn't actually process the data, the webrtc::AudioProcessing object does. I think that should be clarified. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:48: // Returns true if the internal FIFO has at least 10ms data for processing, Nit: 10 ms (add space between) https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:51: bool ProcessAndConsumeData(base::TimeDelta capture_delay, Nit: The parameters are just passed on to the webrtc::AudioProcessing object, right? Maybe comment on that? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:21: // Extract all these methods to a helper class. Comment doesn't seem to make sense. Is this a todo? Old todo? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... 
content/renderer/media/webrtc_audio_processor_options.cc:22: void EnableEchoCancellation(AudioProcessing* audio_processing) { Should we have disable calls? Are all components disabled by default? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:23: #if defined(IOS) || defined(ANDROID) Note: In libjingle, for iOS built-in EC and AGC is used. (See WebRtcVoiceEngine::ApplyOptions.) So when we'll later use the audio processor in the track, it should likely do the same and not enable them. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:25: if (audio_processing->echo_control_mobile()->Enable(true)) Does ordinary EC need to be disabled before enabling the mobile EC? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:26: NOTREACHED(); Can't enabling fail for this and all below calls? If not, should the interface have void return type? If yes, when? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:30: NOTREACHED(); Do we need to set CN for mobile EC to false or is that default? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:32: if (audio_processing->echo_cancellation()->Enable(true)) Does mobile EC need to be disabled before enabling the ordinary EC? https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:35: webrtc::EchoCancellation::kHighSuppression)) For mobile, the level should be set to moderate. See VoEAudioProcessingImpl::SetEcStatus in webrtc and WebRtcVoiceEngine::ApplyOptions in libjingle. 
https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:55: void EnableHighPassFilter(AudioProcessing* audio_processing) { Isn't this on by default? Since there's no disable this function doesn't make much sense. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:77: // TODO(xians): Figure out a more suitable directory for the audio dump data. Did you check with ajm@ about the locations?
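The platform split being discussed might be sketched as follows. The EchoControl and AudioProcessing types here are hypothetical stand-ins for webrtc's echo_control_mobile()/echo_cancellation() components, and __ANDROID__ stands in for Chromium's platform defines (iOS, which per the thread uses the built-in EC, is omitted):

```cpp
#include <cassert>

// Hypothetical stand-in for a webrtc processing sub-component.
struct EchoControl {
  bool enabled = false;
  int Enable(bool on) { enabled = on; return 0; }  // 0 == success, as in webrtc.
};

struct AudioProcessing {
  EchoControl mobile_ec;  // echo_control_mobile() in the real API.
  EchoControl full_ec;    // echo_cancellation() in the real API.
};

void EnableEchoCancellation(AudioProcessing* audio_processing) {
#if defined(__ANDROID__)
  // Mobile: the lighter mobile echo control. The full EC is never
  // enabled on this path, and all components default to off, so no
  // explicit disable is needed.
  if (audio_processing->mobile_ec.Enable(true))
    assert(false);  // NOTREACHED() in the CL: failure means misuse.
#else
  // Desktop: full echo cancellation.
  if (audio_processing->full_ec.Enable(true))
    assert(false);  // NOTREACHED() in the CL.
#endif
}
```

This mirrors the answers in the thread: each platform only ever touches its own EC component, so neither needs to disable the other first.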
Joi, I have already replaced INT_MAX with numeric_limits<base::subtle::Atomic32>::max(). Henrik, PTAL. Avi is busy, piman, can I have your owner stamp for content*gypi ? Thanks, SX https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:143: void WebRtcAudioProcessor::SetCaptureFormat( On 2013/11/12 12:46:37, Henrik Grunell wrote: > So this has to be called before the functions below. Or |capture_converter_| > isn't valid. Is this commented in the header file? Done with adding a comment in the header. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:213: LOG(WARNING) << "Large audio delay, capture delay: " << capture_delay_ms On 2013/11/12 12:46:37, Henrik Grunell wrote: > Should we really log a warning here? What if it gets stuck in a large delay, > lots of warnings. Yes, when the delay is unusually large, we would like to get the warning log. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:224: // TODO(xians): Add support for AGC, typing detectin, audio level calculation, On 2013/11/12 12:46:37, Henrik Grunell wrote: > Nit: detection Done. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:286: // Reset the audio processing to NULL if no audio processing component is On 2013/11/12 12:46:37, Henrik Grunell wrote: > I don't see a reset here. Old comment? Fixed the comment. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.cc:300: On 2013/11/12 12:46:37, Henrik Grunell wrote: > Nit: I think you can skip the blank line. Done. 
https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:32: // on the constraints, processes the data and outputs it in a unit of 10 ms On 2013/11/12 12:46:37, Henrik Grunell wrote: > What constraints? Please add that to the comment. Done with specifying getUserMedia constraints. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:32: // on the constraints, processes the data and outputs it in a unit of 10 ms On 2013/11/12 12:46:37, Henrik Grunell wrote: > Nit: This class/object doesn't actually process the data, the > webrtc::AudioProcessing object does. I think that should be clarified. I think it is an implementation detail that webrtc::AudioProcessing does the processing. To the client of this object, this comment is correct. Also, the comment has explained that the class owns a webrtc::AudioProcessing containing processing components, so I don't think we need further clarification. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:48: // Returns true if the internal FIFO has at least 10ms data for processing, On 2013/11/12 12:46:37, Henrik Grunell wrote: > Nit: 10 ms (add space between) Done. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor.h:51: bool ProcessAndConsumeData(base::TimeDelta capture_delay, On 2013/11/12 12:46:37, Henrik Grunell wrote: > Nit: The parameters are just passed on to the webrtc::AudioProcessing object, > right? Maybe comment on that? Done. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... 
File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:21: // Extract all these methods to a helper class. On 2013/11/12 12:46:37, Henrik Grunell wrote: > Comment doesn't seem to make sense. Is this a todo? Old todo? Done. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:22: void EnableEchoCancellation(AudioProcessing* audio_processing) { On 2013/11/12 12:46:37, Henrik Grunell wrote: > Should we have disable calls? Are all components disabled by default? No; all the components are disabled by default. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:23: #if defined(IOS) || defined(ANDROID) On 2013/11/12 12:46:37, Henrik Grunell wrote: > Note: In libjingle, for iOS built-in EC and AGC is used. (See > WebRtcVoiceEngine::ApplyOptions.) So when we'll later use the audio processor in > the track, it should likely do the same and not enable them. OK, I was not taking this code seriously since Chrome does not support iOS yet. Fixed now. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:25: if (audio_processing->echo_control_mobile()->Enable(true)) On 2013/11/12 12:46:37, Henrik Grunell wrote: > Does ordinary EC need to be disabled before enabling the mobile EC? We never enable the ordinary EC for mobile. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:26: NOTREACHED(); On 2013/11/12 12:46:37, Henrik Grunell wrote: > Can't enabling fail for this and all below calls? If not, should the interface > have void return type? If yes, when?
After the offline discussion, we agreed that a NOTREACHED() is suitable here. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:30: NOTREACHED(); On 2013/11/12 12:46:37, Henrik Grunell wrote: > Do we need to set CN for mobile EC to false or is that default? I think the defaults are all false. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:32: if (audio_processing->echo_cancellation()->Enable(true)) On 2013/11/12 12:46:37, Henrik Grunell wrote: > Does mobile EC need to be disabled before enabling the ordinary EC? We never enable mobile EC for desktops. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:35: webrtc::EchoCancellation::kHighSuppression)) On 2013/11/12 12:46:37, Henrik Grunell wrote: > For mobile, the level should be set to moderate. See > VoEAudioProcessingImpl::SetEcStatus in webrtc and > WebRtcVoiceEngine::ApplyOptions in libjingle. Addressed this offline; no need to change anything. https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:55: void EnableHighPassFilter(AudioProcessing* audio_processing) { On 2013/11/12 12:46:37, Henrik Grunell wrote: > Isn't this on by default? Since there's no disable this function doesn't make > much sense. I think all the processing components are set to false by default: https://code.google.com/p/chromium/codesearch#chromium/src/third_party/webrtc... https://codereview.chromium.org/54383003/diff/890002/content/renderer/media/w... content/renderer/media/webrtc_audio_processor_options.cc:77: // TODO(xians): Figure out a more suitable directory for the audio dump data. On 2013/11/12 12:46:37, Henrik Grunell wrote: > Did you check with ajm@ about the locations?
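[Editor's note] The exchange above (and tommi's later NOTREACHED questions) turns on the fact that the webrtc component setters return int error codes, with 0 meaning success. A hedged sketch of the enable-and-check pattern against a stubbed component; StubEchoCancellation is illustrative only, not the real webrtc::EchoCancellation interface:

```cpp
#include <cassert>

// Stub mimicking the error-code convention discussed in the review:
// 0 means success, non-zero is an error code. This is an illustrative
// stand-in, not the real webrtc class.
struct StubEchoCancellation {
  int Enable(bool enable) { enabled_ = enable; return 0; }
  int set_suppression_level(int level) { level_ = level; return 0; }
  bool is_enabled() const { return enabled_; }
  int suppression_level() const { return level_; }
  bool enabled_ = false;
  int level_ = 0;
};

// Pattern debated above: a non-zero return is treated as a programming
// error. In the CL this is a NOTREACHED(); Chromium style would also
// allow DCHECK_EQ(0, ...). Here we just surface it as a bool.
bool EnableEchoCancellation(StubEchoCancellation* ec, int high_suppression) {
  if (ec->Enable(true) != 0)
    return false;  // Would be NOTREACHED() in the real code.
  if (ec->set_suppression_level(high_suppression) != 0)
    return false;
  return true;
}
```

Comparing against 0 (rather than using `!` on an int return) is exactly the nit raised later in the thread.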
Not yet, but I will. This should not block this CL, though; we can always change it when we get an answer from ajm@. Also, the code is behind a flag, so there is no impact at all.
https://codereview.chromium.org/54383003/diff/820001/content/public/common/co... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/820001/content/public/common/co... content/public/common/content_switches.cc:923: // Enables audio processing in a MediaStreamTrack. When this flag is on, aec, On 2013/11/12 12:46:37, Henrik Grunell wrote: > Use caps for the abbreviations: AEC, NS and AGC. Done. https://codereview.chromium.org/54383003/diff/820001/content/public/common/co... content/public/common/content_switches.cc:924: // Ns and AGC will be done per MediaStreamTrack instead of in PeerConnection. On 2013/11/12 12:46:37, Henrik Grunell wrote: > In PeerConnection isn't really true. I think it's more accurate to write > something like "in the WebRTC audio stack library". Talked offline; agreed to keep it as it is.
LGTM
https://codereview.chromium.org/54383003/diff/1130001/content/browser/rendere... File content/browser/renderer_host/render_process_host_impl.cc (right): https://codereview.chromium.org/54383003/diff/1130001/content/browser/rendere... content/browser/renderer_host/render_process_host_impl.cc:1051: switches::kEnableAudioTrackProcessing, Please add to kForwardSwitches in chrome/browser/chromeos/login/chrome_restart_request.cc so that it also applies to Chrome OS guest mode. https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:208: base::subtle::NoBarrier_Load(&render_delay_ms_); I don't see the point of making render_delay_ms_ an Atomic32 if you only load and store without barrier. What are you trying to achieve? https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:210: CHECK_LT(capture_delay_ms, DCHECK_LT https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:242: CHECK_LT(new_render_delay_ms, DCHECK_LT https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:102: volatile base::subtle::Atomic32 render_delay_ms_; Shouldn't need volatile if you use the proper functions to access. Also, POD field needs initializer.
Thanks Piman, PTAL. SX https://codereview.chromium.org/54383003/diff/1130001/content/browser/rendere... File content/browser/renderer_host/render_process_host_impl.cc (right): https://codereview.chromium.org/54383003/diff/1130001/content/browser/rendere... content/browser/renderer_host/render_process_host_impl.cc:1051: switches::kEnableAudioTrackProcessing, On 2013/11/12 19:51:27, piman wrote: > Please add to kForwardSwitches in > chrome/browser/chromeos/login/chrome_restart_request.cc so that it also applies > to Chrome OS guest mode. Done. https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:208: base::subtle::NoBarrier_Load(&render_delay_ms_); On 2013/11/12 19:51:27, piman wrote: > I don't see the point of making render_delay_ms_ an Atomic32 if you only load > and store without barrier. > What are you trying to achieve? a mistake. changed it to Acquire_Load and Release_Store. is it OK now? https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:210: CHECK_LT(capture_delay_ms, On 2013/11/12 19:51:27, piman wrote: > DCHECK_LT Done. https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:242: CHECK_LT(new_render_delay_ms, On 2013/11/12 19:51:27, piman wrote: > DCHECK_LT Done. https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:102: volatile base::subtle::Atomic32 render_delay_ms_; On 2013/11/12 19:51:27, piman wrote: > Shouldn't need volatile if you use the proper functions to access. 
> > Also, POD field needs initializer. Done.
LGTM for my part, but I don't understand the threading logic, so I'm unable to evaluate. Please make sure this is properly reviewed. https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:208: base::subtle::NoBarrier_Load(&render_delay_ms_); On 2013/11/13 17:19:34, xians1 wrote: > On 2013/11/12 19:51:27, piman wrote: > > I don't see the point of making render_delay_ms_ an Atomic32 if you only load > > and store without barrier. > > What are you trying to achieve? > > a mistake. changed it to Acquire_Load and Release_Store. is it OK now? It looks saner, but I don't understand the threading logic, and why this is passed out-of-band, and what are the ordering consequences, so I'm unable to tell you whether this is correct or not. Has someone familiar with the logic reviewed this code?
On 2013/11/13 19:28:15, piman wrote: > LGTM for my part, but I don't understand the threading logic, so I'm unable to > evaluate. Please make sure this is properly reviewed. > > https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... > File content/renderer/media/webrtc_audio_processor.cc (right): > > https://codereview.chromium.org/54383003/diff/1130001/content/renderer/media/... > content/renderer/media/webrtc_audio_processor.cc:208: > base::subtle::NoBarrier_Load(&render_delay_ms_); > On 2013/11/13 17:19:34, xians1 wrote: > > On 2013/11/12 19:51:27, piman wrote: > > > I don't see the point of making render_delay_ms_ an Atomic32 if you only > load > > > and store without barrier. > > > What are you trying to achieve? > > > > a mistake. changed it to Acquire_Load and Release_Store. is it OK now? > > It looks saner, but I don't understand the threading logic, and why this is > passed out-of-band, and what are the ordering consequences, so I'm unable to > tell you whether this is correct or not. Has someone familiar with the logic > reviewed this code? They are all good questions. The relevant code is written this way to deliver the same behaviour that the existing production code gives to the WebRtc echo cancellation, but it is hard to find someone who can answer why this behaviour is desirable. I need to do a couple of follow-up CLs to hook up the feature of this CL to the webrtc audio track. If you don't mind, I will land this CL now and will come back after I sort out the details of the design.
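[Editor's note] The Acquire_Load/Release_Store fix discussed above maps directly onto acquire/release memory ordering. A small stand-alone sketch using std::atomic in place of base::subtle::Atomic32; RenderDelay is a hypothetical wrapper name (the CL stores the value directly in a render_delay_ms_ field):

```cpp
#include <atomic>
#include <cstdint>

// Sketch of the cross-thread delay handoff debated above. The release
// store on the render thread pairs with the acquire load on the capture
// thread, so writes made before the store are visible after the load.
// This is what base::subtle::Release_Store/Acquire_Load provide.
class RenderDelay {
 public:
  // Called on the render thread (PushRenderData in the CL).
  void Set(int32_t delay_ms) {
    delay_ms_.store(delay_ms, std::memory_order_release);
  }

  // Called on the capture thread (ProcessData in the CL).
  int32_t Get() const {
    return delay_ms_.load(std::memory_order_acquire);
  }

 private:
  std::atomic<int32_t> delay_ms_{0};  // Initialized, per piman's POD nit.
};
```

With a plain (non-atomic) int and no barriers, the load could see a torn or stale value; the acquire/release pair also addresses piman's "load and store without barrier" objection.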
https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:144: void WebRtcAudioProcessor::SetCaptureFormat( Put the function definitions in the same order as the declarations. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:168: capture_thread_checker_.CalledOnValidThread(); I can't see that the thread is actually checked somewhere in the code. Shouldn't there be a DCHECK at at least one place? If not all places. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:247: InitializeRenderConverterIfNeeded(sample_rate, number_of_channels, Is it necessary to check for re-initialization on every call to PushRenderData? Can the source parameters change at any time? Does it make sense to have a public function for this and let the caller take responsibility for that? Depending on when the parameters can change it seems that it could be an option to consider. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:250: // TODO(xians): Avoid this extra interleave/deinterleave. Can you avoid it now in this CL? Follow-up CL? Or is there a bug for fixing this? https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:42: worker_thread_checker_.DetachFromThread(); What does the threading model look like? Push, Convert and destructor must be called on the same thread, but not necessarily the same as the constructor? Why is that?
Why not all on the same thread? https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:67: worker_thread_checker_.CalledOnValidThread(); I may be ignorant regarding ThreadChecker, but don't you need a DCHECK? This will only set the valid thread, right? https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:168: capture_thread_checker_.CalledOnValidThread(); Shouldn't you return if |audio_processor_| == NULL? https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:192: void WebRtcAudioProcessor::ProcessData(webrtc::AudioFrame* audio_frame, Does it make sense to set the delay, volume and key pressed in separate functions? I'm not sure how the system behind this is designed. Or if these values are strongly tied to an audio frame, maybe they should be in the audio frame class?
Henrik, PTAL. SX https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:144: void WebRtcAudioProcessor::SetCaptureFormat( On 2013/11/18 13:19:34, Henrik Grunell wrote: > Put the function definitions in the same order as the declarations. Done. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:168: capture_thread_checker_.CalledOnValidThread(); On 2013/11/18 13:19:34, Henrik Grunell wrote: > I can't see that the thread is actually checked somewhere in the code. Shouldn't > there be a DCHECK at at least one place? If not all places. That is a mistake, fixed now, thanks. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:247: InitializeRenderConverterIfNeeded(sample_rate, number_of_channels, On 2013/11/18 13:19:34, Henrik Grunell wrote: > Is it necessary to check for re-initialization on every call to PushRenderData? Can > the source parameters change at any time? Does it make sense to have a public > function for this and let the caller take responsibility for that? Depending on > when the parameters can change it seems that it could be an option to consider. They might be changed on the fly, for example, changing to another device. There is currently no callback to notify us that things have changed, like the one we have on the capture side. Things are going to change substantially when we start implementing a WebRTC mixer in Chrome and pass the unmixed data from WebRTC to Chrome. Until then, we will figure out if we can add a SetRenderFormat callback. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/...
content/renderer/media/webrtc_audio_processor.cc:250: // TODO(xians): Avoid this extra interleave/deinterleave. On 2013/11/18 13:19:34, Henrik Grunell wrote: > Can you avoid it now in this CL? Follow-up CL? Or is there a bug for fixing > this? Not in this CL or the follow-up. Yes, I created 317710 when Dale asked the same question. This might be addressed when we start implementing the webrtc mixer in Chrome. https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:42: worker_thread_checker_.DetachFromThread(); On 2013/11/18 13:19:34, Henrik Grunell wrote: > What does the threading model look like? Push, Convert and destructor must be > called on the same thread, but not necessarily the same as the constructor? Why > is that? Why not all on the same thread? The capture converter is created in the main render thread but used in the audio capture thread. The render converter is created and used in the audio render thread. The reason why the capture converter is different from the render converter is that on the capture side, we get a SetCaptureFormat() notification when the format has changed; this happens on the main render thread, and the source is responsible for stopping the data flow before this SetCaptureFormat() callback, so that the clients can re-initialize the components. But on the render side we don't have such a callback. If you are asking whether we can do the same thing for the capture converter as we do with the render converter, i.e. checking the format on each callback and re-initializing if the format has changed, then we would have everything on the audio capture thread. 
https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:67: worker_thread_checker_.CalledOnValidThread(); On 2013/11/18 13:19:34, Henrik Grunell wrote: > I may be ignorant regarding ThreadChecker, but don't you need a DCHECK? This > will only set the valid thread, right? Right, it is needed. https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:168: capture_thread_checker_.CalledOnValidThread(); On 2013/11/18 13:19:34, Henrik Grunell wrote: > Shouldn't you return if |audio_processor_| == NULL? No, we handle the case when audio_processor_ is NULL, where this class works as a FIFO. https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:192: void WebRtcAudioProcessor::ProcessData(webrtc::AudioFrame* audio_frame, On 2013/11/18 13:19:34, Henrik Grunell wrote: > Does it make sense to set the delay, volume and key pressed in separate > functions? I'm not sure how the system behind this is designed. Or if these values > are strongly tied to an audio frame, maybe they should be in the audio frame > class? They are tied to the audio_frame. From the code perspective, there won't be any benefit in extracting them out to a separate method.
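[Editor's note] As Henrik points out above, CalledOnValidThread() only *returns* a bool, so a bare call does nothing; the result must be wrapped in a DCHECK. A minimal stand-in (MiniThreadChecker, not the real base::ThreadChecker) showing the bind-on-first-call and DetachFromThread behaviour under discussion:

```cpp
#include <cassert>
#include <thread>

// Minimal stand-in for base::ThreadChecker, illustrating why the bare
// call in the CL was a no-op. The checker lazily binds to the first
// calling thread; subsequent calls report whether the caller is that
// same thread. DetachFromThread() unbinds so the next caller rebinds,
// which is how the CL moves a checker between construction and use.
class MiniThreadChecker {
 public:
  bool CalledOnValidThread() {
    if (thread_id_ == std::thread::id())
      thread_id_ = std::this_thread::get_id();  // Lazily bind.
    return thread_id_ == std::this_thread::get_id();
  }

  void DetachFromThread() { thread_id_ = std::thread::id(); }

 private:
  std::thread::id thread_id_;  // Default id means "not bound yet".
};
```

The fix in the CL was simply to write `DCHECK(capture_thread_checker_.CalledOnValidThread());` instead of discarding the return value.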
https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:144: void WebRtcAudioProcessor::SetCaptureFormat( On 2013/11/18 17:41:46, xians1 wrote: > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > Put the function definitions in the same order as the declarations. > > Done. The definitions are still not in the same order as the declarations. I meant for all functions. Also, I think you should put Push*Data together. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:247: InitializeRenderConverterIfNeeded(sample_rate, number_of_channels, On 2013/11/18 17:41:46, xians1 wrote: > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > Is it necessary to check for re-initialization on every call to PushRenderData? > Can > > the source parameters change at any time? Does it make sense to have a public > > function for this and let the caller take responsibility for that? Depending on > > when the parameters can change it seems that it could be an option to > consider. > > They might be changed on the fly, for example, changing to another device. > There is currently no callback to notify us that things have changed, like the > one we have on the capture side. > Things are going to change substantially when we start implementing a WebRTC > mixer in Chrome and pass the unmixed data from WebRTC to Chrome. Until then, we > will figure out if we can add a SetRenderFormat callback. OK, sounds fine. https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:250: // TODO(xians): Avoid this extra interleave/deinterleave. 
On 2013/11/18 17:41:46, xians1 wrote: > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > Can you avoid it now in this CL? Follow-up CL? Or is there a bug for fixing > > this? > > Not in this CL or the follow-up. > Yes, I created 317710 when Dale asked the same question. This might be addressed > when we start implementing the webrtc mixer in Chrome. OK. https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:42: worker_thread_checker_.DetachFromThread(); On 2013/11/18 17:41:46, xians1 wrote: > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > What does the threading model look like? Push, Convert and destructor must be > > called on the same thread, but not necessarily the same as the constructor? > Why > > is that? Why not all on the same thread? > > The capture converter is created in the main render thread but used in the audio > capture thread. > The render converter is created and used in the audio render thread. Ah, of course: there are two thread checkers. > > The reason why the capture converter is different from the render converter is > that on the capture side, we get a SetCaptureFormat() notification when the > format has changed; this happens on the main render thread, and the source is > responsible for stopping the data flow before this SetCaptureFormat() callback, > so that the clients can re-initialize the components. > > But on the render side we don't have such a callback. > > If you are asking whether we can do the same thing for the capture converter as > we do with the render converter, i.e. checking the format on each callback and > re-initializing if the format has changed, then we would have everything on the audio > capture thread. 
> We may be able to do this, but I don't think it is a good idea to allow changing > resources on the fly. > > https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:168: capture_thread_checker_.CalledOnValidThread(); On 2013/11/18 17:41:46, xians1 wrote: > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > Shouldn't you return if |audio_processor_| == NULL? > > No, we handle the case when audio_processor_ is NULL, where this class works as > a FIFO. OK. https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:192: void WebRtcAudioProcessor::ProcessData(webrtc::AudioFrame* audio_frame, On 2013/11/18 17:41:46, xians1 wrote: > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > Does it make sense to set the delay, volume and key pressed in separate > > functions? I'm not sure how the system behind this is designed. Or if these values > > are strongly tied to an audio frame, maybe they should be in the audio frame > > class? > > They are tied to the audio_frame. From the code perspective, there won't be any > benefit in extracting them out to a separate method. OK, should they be added to AudioFrame? Or are they only used here? https://codereview.chromium.org/54383003/diff/2210001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2210001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:230: void WebRtcAudioProcessor::PushRenderData( Just a step ahead: have you thought about guaranteeing that the user(s) of objects of this class do not call these functions after or during destruction? What will be the lifetime of objects of this class? I just want to make sure this doesn't slip and is forgotten when we start using the class.
Henrik, how is it now? SX https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1760001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:144: void WebRtcAudioProcessor::SetCaptureFormat( On 2013/11/19 08:52:22, Henrik Grunell wrote: > On 2013/11/18 17:41:46, xians1 wrote: > > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > > Put the function definitions in the same order as the declarations. > > > > Done. > > The definitions are still not in the same order as the declarations. I meant for > all functions. Also, I think you should put Push*Data together. OK, done now. https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/1710002/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:192: void WebRtcAudioProcessor::ProcessData(webrtc::AudioFrame* audio_frame, On 2013/11/19 08:52:22, Henrik Grunell wrote: > On 2013/11/18 17:41:46, xians1 wrote: > > On 2013/11/18 13:19:34, Henrik Grunell wrote: > > > Does it make sense to set the delay, volume and key pressed in > separate > > > functions? I'm not sure how the system behind this is designed. Or if these > values > > > are strongly tied to an audio frame, maybe they should be in the audio > frame > > > class? > > > > They are tied to the audio_frame. From the code perspective, there won't be > any > > benefit in extracting them out to a separate method. > > OK, should they be added to AudioFrame? Or are they only used here? They are only used here; we need to pass them to the webrtc::AudioProcessing module, and then most of them will not be used again, except that the delay value needs to be cached for video/audio sync. https://codereview.chromium.org/54383003/diff/2210001/content/renderer/media/... 
File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2210001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:230: void WebRtcAudioProcessor::PushRenderData( On 2013/11/19 08:52:22, Henrik Grunell wrote: > Just a step ahead: have you thought about guaranteeing that the user(s) of > objects of this class do not call these functions after or during destruction? > What will be the lifetime of objects of this class? > > I just want to make sure this doesn't slip and is forgotten when we start using > the class. The owner of this class will guarantee this.
drive-by https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:114: return 1.0; add a note that explains this return value? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:118: media::AudioParameters source_params_; if these parameters (sink and source) do not change after construction, consider making these const. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:154: return; {} https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:171: while (render_converter_->Convert(&render_frame_)) { no {} https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:226: if (!constraints) when will this happen? if it's a supported use case it might be good to document. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:229: const bool enable_aec = GetPropertyFromConstraints( should agc be included as well or is that coming later? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:254: if (!audio_processing_) is there a case where this pointer is not NULL when we get here? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:278: if (audio_processing_->set_sample_rate_hz(kAudioProcessingSampleRate)) should this and the next conditional be |if (!...)|? 
https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:282: NOTREACHED(); How you're using NOTREACHED is a bit confusing. should this perhaps be: CHECK(audio_processing_->set_sample_rate_hz(kAudioProcessingSampleRate)); CHECK(audio_processing_->set_num_channels( kAudioProcessingNumberOfChannel, kAudioProcessingNumberOfChannel)); ? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:286: int sample_rate, int number_of_channels, int frames_per_buffer) { thread check? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:301: media::CHANNEL_LAYOUT_MONO, kAudioProcessingSampleRate, 16, It's probably worth adding some comments here that explain why we always downmix to mono. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:338: NOTREACHED(); same questions as above about NOTREACHED... does set_stream_analog_level return a bool or an int? If int, is 0 considered success? If so, I'd personally prefer comparing against 0 instead of using the ! operator. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:44: // Push the render audio to WebRtc::AudioProcessing for analysis. This is Since AudioTrack isn't strictly speaking WebRTC specific (assuming webrtc means PeerConnection), then I'm not sure that AudioProcessing should be in a webrtc namespace or have WebRTC as part of its name. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor.h:102: base::subtle::Atomic32 render_delay_ms_; can you add some comments on why it's an Atomic32 value? the comments are generally good here btw. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:29: NOTREACHED(); same questions about NOTREACHED in this file and throughout https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:51: webrtc::NoiseSuppression::kHigh)) {} https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:60: NOTREACHED(); strange indent (run lint?) https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:69: webrtc::VoiceDetection::kVeryLowLikelihood)) {} https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:80: // TODO(xians): Figure out a more suitable directory for the audio dump data. check with grunell? (and perhaps even assign the todo to him?) https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:92: const std::string file_name = WideToUTF8(file.value()); strange indent https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_options.h:31: void EnableEchoCancellation(AudioProcessing* audio_processing); why aren't all these methods member functions of AudioProcessing? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:58: } const kDefaultAudioConstraints[] = { make this a static const inside ApplyFixedAudioConstraints? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:91: switches::kEnableAudioTrackProcessing); is this a common thing to do for tests? could we somehow avoid this and still be able to test the code?
Some more comments and questions. https://codereview.chromium.org/54383003/diff/2260001/content/public/common/c... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/public/common/c... content/public/common/content_switches.cc:936: // Ns and AGC will be done per MediaStreamTrack instead of in PeerConnection. NS (capitals) https://codereview.chromium.org/54383003/diff/2260001/content/public/common/c... content/public/common/content_switches.cc:937: const char kEnableAudioTrackProcessing[] = "enable-audio-processor"; Why not "enable-audio-track-processing"? :) I think that's better. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:278: if (audio_processing_->set_sample_rate_hz(kAudioProcessingSampleRate)) On 2013/11/21 19:52:03, tommi wrote: > should this and the next conditional be |if (!...)|? I think this returns an int. Maybe check != 0. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:58: // of the data represeted by |out| is guaranteed to outlive the method call. Are there any guarantees for how long the data pointed to by *|out| will live *after* this function has returned? Is the caller supposed to copy the data? In that case, it seems better to copy the data in this function to memory the caller has allocated. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_options.cc:80: // TODO(xians): Figure out a more suitable directory for the audio dump data. On 2013/11/21 19:52:03, tommi wrote: > check with grunell? (and perhaps even assign the todo to him?) Shijing and I talked about this offline. We agreed to use temp directory for all platforms (PathService::Get(base::DIR_TEMP, &path)) but first check with Andrew (ajm@) if there's possibly a strong reason for storing it in the executable or current directory (e.g. easier for user to find it.) - he's more familiar with the use case. - Have you checked with ajm@ for his input? - Will PathService::Get(base::DIR_TEMP, &path) work on Android? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:45: .Append(FILE_PATH_LITERAL("speech_16b_stereo_48kHz.raw")); Is this a common way of specifying a file used in a test? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:50: DCHECK(data_file_size64 > length); Only do this check in debug builds? (If you change this, also change the one above.) I think the check should be moved to before reading the file. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:61: #if defined(OS_CHROMEOS) || defined(OS_MACOSX) Why do we need to set experimental AEC platform conditionally in the test? I think you can enable or not for all. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_unittest.cc:113: // Feed data as render data to the processor, this does not cost anything "Feed data as render data to the processor" is redundant, remove. I think the rest is slightly off target, this small type of cost isn't really a problem in tests. I'd suggest explain more in line with that for some cases it's not used but we do this always anyway. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:120: // Process and consume the data in the processor. This comment is redundant. Remove. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:140: TEST_F(WebRtcAudioProcessorTest, WithoutAudioProcessing) { I suggest testing function by function in separate tests if possible. (ProcessAndConsume... obviously depends on other function calls prior.) Is it possible to do that? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:153: TEST_F(WebRtcAudioProcessorTest, WithAudioProcessing) { Is there any way we can test different sets of constraints? Test that the right set of components are enabled and actually exercised?
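Context for the return-code discussion above: webrtc::AudioProcessing configuration methods return an int error code where 0 (kNoError) means success, so an implicit bool conversion reads backwards. A minimal sketch of the explicit-comparison pattern Henrik suggests, using a hypothetical FakeApm stand-in rather than the real third_party interface:

```cpp
#include <cassert>

// Stand-in for webrtc::AudioProcessing: configuration methods return an
// int error code, with 0 (kNoError) meaning success. FakeApm is
// illustrative only, not code from the CL.
struct FakeApm {
  static constexpr int kNoError = 0;
  static constexpr int kBadParameterError = -1;
  int sample_rate_hz_ = 0;
  int set_sample_rate_hz(int rate) {
    if (rate <= 0) return kBadParameterError;
    sample_rate_hz_ = rate;
    return kNoError;
  }
};

// Compare the return code against kNoError explicitly (in Chromium this
// would be CHECK_EQ) instead of relying on int-to-bool conversion.
bool InitializeApm(FakeApm* apm, int rate) {
  if (apm->set_sample_rate_hz(rate) != FakeApm::kNoError)
    return false;
  return true;
}
```

The explicit comparison makes it obvious that a non-zero return is the failure case.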
Tommi and Henrik, PTAL. SX https://codereview.chromium.org/54383003/diff/2260001/content/public/common/c... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/public/common/c... content/public/common/content_switches.cc:936: // Ns and AGC will be done per MediaStreamTrack instead of in PeerConnection. On 2013/11/22 09:03:36, Henrik Grunell wrote: > NS (capitals) Done. https://codereview.chromium.org/54383003/diff/2260001/content/public/common/c... content/public/common/content_switches.cc:937: const char kEnableAudioTrackProcessing[] = "enable-audio-processor"; On 2013/11/22 09:03:36, Henrik Grunell wrote: > Why not "enable-audio-track-processing"? :) I think that's better. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:114: return 1.0; On 2013/11/21 19:52:03, tommi wrote: > add a note that explains this return value? Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:118: media::AudioParameters source_params_; On 2013/11/21 19:52:03, tommi wrote: > if these parameters (sink and source) do not change after construction, consider > making these const. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:154: return; On 2013/11/21 19:52:03, tommi wrote: > {} Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:171: while (render_converter_->Convert(&render_frame_)) { On 2013/11/21 19:52:03, tommi wrote: > no {} Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor.cc:226: if (!constraints) On 2013/11/21 19:52:03, tommi wrote: > when will this happen? if it's a supported use case it might be good to > document. Right, in some of the unittests which do not care about the constraint. OK. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:229: const bool enable_aec = GetPropertyFromConstraints( On 2013/11/21 19:52:03, tommi wrote: > should agc be included as well or is that coming later? It will be in the future. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:254: if (!audio_processing_) On 2013/11/21 19:52:03, tommi wrote: > is there a case where this pointer is not NULL when we get here? Added a DCHECK instead. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:278: if (audio_processing_->set_sample_rate_hz(kAudioProcessingSampleRate)) On 2013/11/22 09:03:36, Henrik Grunell wrote: > On 2013/11/21 19:52:03, tommi wrote: > > should this and the next conditional be |if (!...)|? > > I think this returns an int. Maybe check != 0. Use CHECK_EQ https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:278: if (audio_processing_->set_sample_rate_hz(kAudioProcessingSampleRate)) On 2013/11/21 19:52:03, tommi wrote: > should this and the next conditional be |if (!...)|? Use CHECK_EQ https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:282: NOTREACHED(); On 2013/11/21 19:52:03, tommi wrote: > How you're using NOTREACHED is a bit confusing. 
should this perhaps be: > CHECK(audio_processing_->set_sample_rate_hz(kAudioProcessingSampleRate)); > CHECK(audio_processing_->set_num_channels( > kAudioProcessingNumberOfChannel, kAudioProcessingNumberOfChannel)); > ? Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:286: int sample_rate, int number_of_channels, int frames_per_buffer) { On 2013/11/21 19:52:03, tommi wrote: > thread check? Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:301: media::CHANNEL_LAYOUT_MONO, kAudioProcessingSampleRate, 16, On 2013/11/21 19:52:03, tommi wrote: > It's probably worth adding some comments here that explain why we always downmix > to mono. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:338: NOTREACHED(); On 2013/11/21 19:52:03, tommi wrote: > same questions as above about NOTREACHED... does set_stream_analog_level return > a bool or an int? If int, is 0 considered success? If so, I'd personally > prefer comparing against 0 instead of using the ! operator. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:44: // Push the render audio to WebRtc::AudioProcessing for analysis. This is On 2013/11/21 19:52:03, tommi wrote: > Since AudioTrack isn't strictly speaking WebRTC specific (assuming webrtc means > PeerConnection), then I'm not sure that AudioProcessing should be in a webrtc > namespace or have WebRTC as part of its name. Changed it to webrtc::AudioProcessing, which is the name of the class with webrtc namespace. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor.h:58: // of the data represeted by |out| is guaranteed to outlive the method call. On 2013/11/22 09:03:36, Henrik Grunell wrote: > Are there any guarantees for how long the data pointed to by *|out| will live > *after* this function has returned? Is the caller supposed to copy the data? In > that case, it seems better to copy the data in this function to memory the > caller has allocated. As explained in the comment, *|out| guarantees to outlive the method call, so the caller is supposed to get the pointer of the data, then demux it to all the sinks, and the sink will copy the data. We are trying to avoid an extra memcopy here to help performance. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:102: base::subtle::Atomic32 render_delay_ms_; On 2013/11/21 19:52:03, tommi wrote: > can you add some comments on why it's an Atomic32 value? > the comments are generally good here btw. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:29: NOTREACHED(); On 2013/11/21 19:52:03, tommi wrote: > same questions about NOTREACHED in this file and throughout Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:51: webrtc::NoiseSuppression::kHigh)) On 2013/11/21 19:52:03, tommi wrote: > {} removed. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:60: NOTREACHED(); On 2013/11/21 19:52:03, tommi wrote: > strange indent (run lint?) Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_options.cc:69: webrtc::VoiceDetection::kVeryLowLikelihood)) On 2013/11/21 19:52:03, tommi wrote: > {} removed. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:80: // TODO(xians): Figure out a more suitable directory for the audio dump data. On 2013/11/22 09:03:36, Henrik Grunell wrote: > On 2013/11/21 19:52:03, tommi wrote: > > check with grunell? (and perhaps even assign the todo to him?) > > Shijing and I talked about this offline. We agreed to use temp directory for all > platforms (PathService::Get(base::DIR_TEMP, &path)) but first check with Andrew > (ajm@) if there's possibly a strong reason for storing it in the executable or > current directory (e.g. easier for user to find it.) - he's more familiar with > the use case. > > - Have you checked with ajm@ for his input? > - Will PathService::Get(base::DIR_TEMP, &path) work on Android? I don't have time for this, and the purpose of this CL is to build up a framework allowing us to continue working on it. So I will leave the AEC dump relevant question to you. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:80: // TODO(xians): Figure out a more suitable directory for the audio dump data. On 2013/11/21 19:52:03, tommi wrote: > check with grunell? (and perhaps even assign the todo to him?) Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:92: const std::string file_name = WideToUTF8(file.value()); On 2013/11/21 19:52:03, tommi wrote: > strange indent Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_options.h:31: void EnableEchoCancellation(AudioProcessing* audio_processing); On 2013/11/21 19:52:03, tommi wrote: > why aren't all these methods member functions of AudioProcessing? The reason is that this list of methods will grow substantially when we start supporting more and more constraints. It is definitely better to extract them out to a helper class instead of making a super long file. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:45: .Append(FILE_PATH_LITERAL("speech_16b_stereo_48kHz.raw")); On 2013/11/22 09:03:36, Henrik Grunell wrote: > Is this a common way of specifying a file used in a test? Yes. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:50: DCHECK(data_file_size64 > length); On 2013/11/22 09:03:36, Henrik Grunell wrote: > Only do this check in debug builds? (If you change this, also change the one > above.) I think the check should be moved to before reading the file. That is a unittest; a DCHECK is enough. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:58: } const kDefaultAudioConstraints[] = { On 2013/11/21 19:52:03, tommi wrote: > make this a static const inside ApplyFixedAudioConstraints? Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:61: #if defined(OS_CHROMEOS) || defined(OS_MACOSX) On 2013/11/22 09:03:36, Henrik Grunell wrote: > Why do we need to set experimental AEC platform conditionally in the test? I > think you can enable or not for all.
This is the same as the production code. Do you know if experimental AEC is supported on those platforms? But anyhow, if they are not used in production on those platforms, why should we test experimental AEC on those platforms? https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:91: switches::kEnableAudioTrackProcessing); On 2013/11/21 19:52:03, tommi wrote: > is this a common thing to do for tests? could we somehow avoid this and still > be able to test the code? That is what I found from the current unittests. Well, the falg is in the production code, and the unittest wants to test the code behind the falg, so the unittest has to set the falg to true. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:113: // Feed data as render data to the processor, this does not cost anything On 2013/11/22 09:03:36, Henrik Grunell wrote: > "Feed data as render data to the processor" is redundant, remove. > > I think the rest is slightly off target, this small type of cost isn't really a > problem in tests. I'd suggest explain more in line with that for some cases it's > not used but we do this always anyway. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:120: // Process and consume the data in the processor. On 2013/11/22 09:03:36, Henrik Grunell wrote: > This comment is redundant. Remove. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:140: TEST_F(WebRtcAudioProcessorTest, WithoutAudioProcessing) { On 2013/11/22 09:03:36, Henrik Grunell wrote: > I suggest testing function by function in separate tests if possible. > (ProcessAndConsume... obviously depends on other function calls prior.) 
Is it > possible to do that? Yes, PushCaptureData, PushRenderData and ProcessAndConsumeData are called in order, so they have to be tested together. Also, I don't know what to test if just calling one method. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:153: TEST_F(WebRtcAudioProcessorTest, WithAudioProcessing) { On 2013/11/22 09:03:36, Henrik Grunell wrote: > Is there any way we can test different sets of constraints? Test that the right > set of components are enabled and actually exercised? WebRtcAudioProcessor can take one constraint at its lifetime, the owner of WebRtcAudioProcessor need to recreate a new object when the constraint changes. Looks like you want tests on the owner of WebRtcAudioProcessor, this can only be done after this CL and its follow-up are landed.
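On the Atomic32 question answered in the thread above: render_delay_ms_ is written on the render thread and read on the capture thread, and a 32-bit atomic lets the two threads share it without a lock. A hedged sketch of the same idea using std::atomic (the actual CL uses base::subtle::Atomic32; DelayTracker is an illustrative name):

```cpp
#include <atomic>

// Illustrative sketch: the render (playout) thread publishes the current
// playout delay, and the capture thread reads it when feeding the echo
// canceller. A 32-bit atomic makes the cross-thread handoff lock-free.
class DelayTracker {
 public:
  // Called on the render thread.
  void set_render_delay_ms(int delay_ms) {
    render_delay_ms_.store(delay_ms, std::memory_order_relaxed);
  }
  // Called on the capture thread.
  int render_delay_ms() const {
    return render_delay_ms_.load(std::memory_order_relaxed);
  }

 private:
  std::atomic<int> render_delay_ms_{0};
};
```

Relaxed ordering is enough here because only the single delay value is exchanged, not any associated buffer state.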
https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:226: if (!constraints) On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/21 19:52:03, tommi wrote: > > when will this happen? if it's a supported use case it might be good to > > document. > > Right, in some of the unittests which do not care about the constraint. > OK. Can we change those tests to send in empty constraints? I'd like to avoid special casing unit tests in production code. The way it is now we will not get code coverage from those tests. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.h:31: void EnableEchoCancellation(AudioProcessing* audio_processing); On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/21 19:52:03, tommi wrote: > > why aren't all these methods member functions of AudioProcessing? > > The reason is that this list of methods will grow substantially when we start > supporting more and more constraints. It is definitely better to extract them > out to a helper class instead of making a super long file. I realize now that AudioProcessing is an interface from third_party/webrtc. I was thinking that this was a chrome class and therefore it would make sense to have these as member methods. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_unittest.cc:91: switches::kEnableAudioTrackProcessing); On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/21 19:52:03, tommi wrote: > > is this a common thing to do for tests? could we somehow avoid this and still > > be able to test the code? > > That is what I found from the current unittests. > > Well, the falg is in the production code, and the unittest wants to test the > code behind the falg, so the unittest has to set the falg to true. what is this falg you speak of? :) I understand that the production code is currently checking for this, but can we be a little bit more clever about that and remove the check from InitializeAudioProcessingModule method and never construct a WebRtcAudioProcessor object unless the flag is set? That way we don't have to bother about the flag in either the WebRtcAudioProcessor implementation or the test. https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:7: #include "base/command_line.h" you probably don't need this https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:9: #include "content/public/common/content_switches.h" or this https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:224: if (!CommandLine::ForCurrentProcess()->HasSwitch( I don't think this is the right place to do this check. It should be done before creating an instance of WebRtcAudioProcessor.
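tommi's suggestion above amounts to gating construction at the creation site, so that neither the processor implementation nor its unittest has to consult the command-line switch. A hypothetical sketch of that factory shape (MaybeCreateProcessor and AudioProcessorStub are illustrative names, not code from the CL):

```cpp
#include <memory>

// Illustrative stand-in for the processor class under review.
class AudioProcessorStub {};

// Creation-site gating: the flag is checked once, here. Callers that
// receive nullptr simply skip track-level processing, and the processor
// class itself never needs to know the flag exists.
std::unique_ptr<AudioProcessorStub> MaybeCreateProcessor(
    bool audio_track_processing_enabled) {
  if (!audio_track_processing_enabled)
    return nullptr;
  return std::make_unique<AudioProcessorStub>();
}
```

With this shape, the unittest constructs the processor directly and never touches CommandLine at all.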
https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:58: // of the data represeted by |out| is guaranteed to outlive the method call. On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > Are there any guarantees for how long the data pointed to by *|out| will live > > *after* this function has returned? Is the caller supposed to copy the data? > In > > that case, it seems better to copy the data in this function to memory the > > caller has allocated. > > As explained in the comment, *|out| guarantees to outlive the method call, so > the caller is supposed to get the pointer of the data, then demux it to all the > sinks, and the sink will copy the data. > > We are trying to avoid an extra memcopy here to help performance. We talked about this offline too. I don't think the comment is clear, hence my original question. I think it should be clarified that the data pointed to by *|out| won't change until this function is called again. (If that's correct?) https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.cc:80: // TODO(xians): Figure out a more suitable directory for the audio dump data. On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > On 2013/11/21 19:52:03, tommi wrote: > > > check with grunell? (and perhaps even assign the todo to him?) > > > > Shijing and I talked about this offline. 
We agreed to use temp directory for > all > > platforms (PathService::Get(base::DIR_TEMP, &path)) but first check with > Andrew > > (ajm@) if there's possibly a strong reason for storing it in the executable or > > current directory (e.g. easier for user to find it.) - he's more familiar with > > the use case. > > > > - Have you checked with ajm@ for his input? > > - Will PathService::Get(base::DIR_TEMP, &path) work on Android? > > I don't have time for this, and the purpose of this CL is to build up a > framework allowing us to continue working on it. So I will leave the AEC dump > relevant question to you. That's fine, I can deal with this. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:61: #if defined(OS_CHROMEOS) || defined(OS_MACOSX) On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > Why do we need to set experimental AEC platform conditionally in the test? I > > think you can enable or not for all. > > This is the same as the production code. Do you know if experimental AEC is > supported on those platforms? But anyhow, if they are not used in production on > those platforms, why should we test experimental AEC on those platforms? The reason is to not make any assumptions about the implementation of some other class/function that is a consumer of the AudioProcessing class. If that code changes the test will not cover that. I would prefer to test different sets of constraints regardless of platform. See also other comment below. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_unittest.cc:140: TEST_F(WebRtcAudioProcessorTest, WithoutAudioProcessing) { On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > I suggest testing function by function in separate tests if possible. > > (ProcessAndConsume... obviously depends on other function calls prior.) Is it > > possible to do that? > > Yes, PushCaptureData, PushRenderData and ProcessAndConsumeData are called in > order, so they have to be tested together. > Also, I don't know what to test if just calling one method. OK, if it doesn't make sense to test individual functions skip it. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:153: TEST_F(WebRtcAudioProcessorTest, WithAudioProcessing) { On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > Is there any way we can test different sets of constraints? Test that the > right > > set of components are enabled and actually exercised? > > WebRtcAudioProcessor can take one constraint at its lifetime, the owner of > WebRtcAudioProcessor need to recreate a new object when the constraint changes. > > Looks like you want tests on the owner of WebRtcAudioProcessor, this can only be > done after this CL and its follow-up are landed. Well, no, I mean testing this class. You set constraints in ApplyFixedAudioConstraints. I meant set them differently. E.g. with and without experimental AEC and other combinations that could be good to test. https://codereview.chromium.org/54383003/diff/2390001/content/public/common/c... File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/2390001/content/public/common/c... 
content/public/common/content_switches.cc:937: const char kEnableAudioTrackProcessing[] = "enable-audio-track-processor"; On 2013/11/22 13:27:53, xians1 wrote: > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > Why not "enable-audio-track-processing"? :) I think that's better. > > Done. "processing", not "processor". https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:224: if (!CommandLine::ForCurrentProcess()->HasSwitch( On 2013/11/22 14:32:42, tommi wrote: > I don't think this is the right place to do this check. It should be done > before creating an instance of WebRtcAudioProcessor. +1
Hi Tommi and Henrik, I changed the name from webrtc_audio_processor* to media_stream_audio_processor*, and fixed two nits. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:226: if (!constraints) On 2013/11/22 14:32:42, tommi wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/21 19:52:03, tommi wrote: > > > when will this happen? if it's a supported use case it might be good to > > > document. > > > > Right, in some of the unittests which do not care about the constraint. > > OK. > > Can we change those tests to send in empty constraints? I'd like to avoid > special casing unit tests in production code. The way it is now we will not get > code coverage from those tests. OK, I will make sure the constraints won't be NULL in unittests. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_options.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_options.h:31: void EnableEchoCancellation(AudioProcessing* audio_processing); On 2013/11/22 14:32:42, tommi wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/21 19:52:03, tommi wrote: > > > why aren't all these methods member functions of AudioProcessing? > > > > The reason is that this list of methods will grow substantially when we start > > supporting more and more constraints. It is definitely better to extract them > > out to a helper class instead of making a super long file. > > I realize now that AudioProcessing is an interface from third_party/webrtc. I > was thinking that this was a chrome class and therefore it would make sense to > have these as member methods.
:) so it is OK to have these methods in a separate helper class? I really want to keep the WebRtcAudioProcessor (or MediaStreamAudioProcessor) clean and short. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:91: switches::kEnableAudioTrackProcessing); On 2013/11/22 14:32:42, tommi wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/21 19:52:03, tommi wrote: > > > is this a common thing to do for tests? could we somehow avoid this and > still > > > be able to test the code? > > > > That is what I found from the current unittests. > > > > Well, the falg is in the production code, and the unittest wants to test the > > code behind the falg, so the unittest has to set the falg to true. > > what is this falg you speak of? :) > > I understand that the production code is currently checking for this, but can we > be a little bit more clever about that and remove the check from > InitializeAudioProcessingModule method and never construct a > WebRtcAudioProcessor object unless the flag is set? That way we don't have to > bother about the flag in either the WebRtcAudioProcessor implementation or the > test. "flag" means switches::kEnableAudioTrackProcessing. We had an offline discussion and agreed to keep it this way. In short, the reason for doing this is that WebRtcAudioProcessor will be used as a FIFO when the flag is not set; that is why we put the code here. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... 
content/renderer/media/webrtc_audio_processor_unittest.cc:140: TEST_F(WebRtcAudioProcessorTest, WithoutAudioProcessing) { On 2013/11/25 10:52:02, Henrik Grunell wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > > I suggest testing function by function in separate tests if possible. > > > (ProcessAndConsume... obviously depends on other function calls prior.) Is > it > > > possible to do that? > > > > Yes, PushCaptureData, PushRenderData and ProcessAndConsumeData are called in > > order, so they have to be tested together. > > Also, I don't know what to test if just calling one method. > > OK, if it doesn't make sense to test individual functions skip it. Done. https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:153: TEST_F(WebRtcAudioProcessorTest, WithAudioProcessing) { On 2013/11/25 10:52:02, Henrik Grunell wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > > Is there any way we can test different sets of constraints? Test that the > > right > > > set of components are enabled and actually exercised? > > > > WebRtcAudioProcessor can take one constraint at its lifetime, the owner of > > WebRtcAudioProcessor need to recreate a new object when the constraint > changes. > > > > Looks like you want tests on the owner of WebRtcAudioProcessor, this can only > be > > done after this CL and its follow-up are landed. > > Well, no, I mean testing this class. You set constraints in > ApplyFixedAudioConstraints. I meant set them differently. E.g. with and without > experimental AEC and other combinations that could be good to test. As discussed offline, I don't have time to add more tests; this will be done in separate CLs. https://codereview.chromium.org/54383003/diff/2390001/content/public/common/c... 
File content/public/common/content_switches.cc (right): https://codereview.chromium.org/54383003/diff/2390001/content/public/common/c... content/public/common/content_switches.cc:937: const char kEnableAudioTrackProcessing[] = "enable-audio-track-processor"; On 2013/11/25 10:52:02, Henrik Grunell wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > > Why not "enable-audio-track-processing"? :) I think that's better. > > > > Done. > > "processing", not "processor". Done. https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.cc (right): https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:7: #include "base/command_line.h" On 2013/11/22 14:32:42, tommi wrote: > you probably don't need this This will stay https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:9: #include "content/public/common/content_switches.h" On 2013/11/22 14:32:42, tommi wrote: > or this ditto https://codereview.chromium.org/54383003/diff/2390001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.cc:224: if (!CommandLine::ForCurrentProcess()->HasSwitch( On 2013/11/22 14:32:42, tommi wrote: > I don't think this is the right place to do this check. It should be done > before creating an instance of WebRtcAudioProcessor. The comment was addressed offline.
https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor.h (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor.h:58: // of the data represeted by |out| is guaranteed to outlive the method call. On 2013/11/25 10:52:02, Henrik Grunell wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > > Are there any guarantees for how long the data pointed to by *|out| will > live > > > *after* this function has returned? Is the caller supposed to copy the data? > > In > > > that case, it seems better to copy the data in this function to memory the > > > caller has allocated. > > > > As explained in the comment, *|out| guarantees to outlive the method call, so > > the caller is supposed to get the pointer of the data, then demux it to all > the > > sinks, and the sink will copy the data. > > > > We are trying to avoid an extra memcopy here to help performance. > > We talked about this offline too. I don't think the comment is clear, hence my > original question. I think it should be clarified that the data pointed to by > *|out| won't change until this function is called again. (If that's correct?) I missed this one before, now it is done.
Also, FYI, it compiles with "webrtc_enabled=0"
lgtm. thanks!
LGTM Make sure to file a bug for additional testing. Appreciate your patience! :) https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... File content/renderer/media/webrtc_audio_processor_unittest.cc (right): https://codereview.chromium.org/54383003/diff/2260001/content/renderer/media/... content/renderer/media/webrtc_audio_processor_unittest.cc:61: #if defined(OS_CHROMEOS) || defined(OS_MACOSX) On 2013/11/25 10:52:02, Henrik Grunell wrote: > On 2013/11/22 13:27:53, xians1 wrote: > > On 2013/11/22 09:03:36, Henrik Grunell wrote: > > > Why do we need to set experimental AEC platform conditionally in the test? I > > > think you can enable or not for all. > > > > This is the same as the production code. Do you know if experimental AEC is > > supported on those platforms? But anyhow, if they are not used in production > on > > those platforms, why should we test experimental AEC on those platforms? > > The reason is to not make any assumptions about the implementation of some other > class/function that is a consumer of the AudioProcessing class. If that code > changes the test will not cover that. I would prefer to test different sets of > constraints regardless of platform. See also other comment below. We decided to revisit further testing including this comment I have in a later CL. Shijing to create a bug for that.
CQ is trying da patch. Follow status at https://chromium-status.appspot.com/cq/xians@chromium.org/54383003/2590001
Message was sent while issue was closed.
Change committed as 237311