Chromium Code Reviews

Side by Side Diff: third_party/WebKit/Source/modules/webaudio/AudioNode.h

Issue 2389253002: reflow comments in modules/{webaudio,vr} (Closed)
Patch Set: . Created 4 years, 2 months ago
/*
 * Copyright (C) 2010, Google Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
(...skipping 30 matching lines...)
namespace blink {

class BaseAudioContext;
class AudioNode;
class AudioNodeOptions;
class AudioNodeInput;
class AudioNodeOutput;
class AudioParam;
class ExceptionState;

// An AudioNode is the basic building block for handling audio within a
// BaseAudioContext. It may be an audio source, an intermediate processing
// module, or an audio destination. Each AudioNode can have inputs and/or
// outputs. An AudioSourceNode has no inputs and a single output.
// An AudioDestinationNode has one input and no outputs and represents the final
// destination to the audio hardware. Most processing nodes such as filters
// will have one input and one output, although multiple inputs and outputs are
// possible.

// Each AudioNode object owns its dedicated AudioHandler object. AudioNode
// is responsible for providing the IDL-accessible interface, and its lifetime
// is managed by Oilpan GC. AudioHandler is responsible for everything else.
// We must not touch AudioNode objects in an audio rendering thread.

// AudioHandler is created and owned by an AudioNode almost all the time. When
// the AudioNode is about to die, the ownership of its AudioHandler is
// transferred to DeferredTaskHandler, which derefs the AudioHandler on the
// main thread.
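The ownership-handoff pattern described above can be sketched with standard library types. This is a toy model, not Blink's actual classes: `ToyHandler`, `ToyDeferredTaskHandler`, and `ToyNode` are hypothetical stand-ins (using `std::shared_ptr` in place of `RefPtr`), illustrating only the idea that the dying node transfers its handler to a queue that performs the final deref on the main thread.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Stand-in for AudioHandler: must stay alive until the main thread drops it.
struct ToyHandler {
  bool disposed = false;
};

// Stand-in for DeferredTaskHandler: collects handlers whose owning node died,
// then releases them later, on the main thread.
struct ToyDeferredTaskHandler {
  std::vector<std::shared_ptr<ToyHandler>> orphans;
  void adopt(std::shared_ptr<ToyHandler> h) { orphans.push_back(std::move(h)); }
  void drainOnMainThread() { orphans.clear(); }  // final deref happens here
};

// Stand-in for AudioNode: owns its handler until it is about to die.
struct ToyNode {
  std::shared_ptr<ToyHandler> handler = std::make_shared<ToyHandler>();

  // Called when the node is about to die: hand the handler off instead of
  // releasing it on whatever thread the GC happens to run on.
  void dispose(ToyDeferredTaskHandler& dth) {
    handler->disposed = true;
    dth.adopt(std::move(handler));
  }
};
```

The point of the indirection is that the audio rendering thread may still hold a raw reference to the handler when the GC finalizes the node, so the last deref must be deferred to a known-safe thread.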
(...skipping 34 matching lines...)
  // dispose() is called when the owner AudioNode is about to be
  // destructed. This must be called in the main thread, and while the graph
  // lock is held.
  // Do not release resources used by an audio rendering thread in dispose().
  virtual void dispose();

  // node() returns a valid object until dispose() is called. This returns
  // nullptr after dispose(). We must not call node() in an audio rendering
  // thread.
  AudioNode* node() const;
  // context() returns a valid object until the BaseAudioContext dies, and
  // returns nullptr otherwise. This always returns a valid object in an audio
  // rendering thread, and inside dispose(). We must not call context() in the
  // destructor.
  virtual BaseAudioContext* context() const;
  void clearContext() { m_context = nullptr; }

  enum ChannelCountMode { Max, ClampedMax, Explicit };
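These three enum values correspond to the Web Audio `channelCountMode` values `"max"`, `"clamped-max"`, and `"explicit"`, which control how an input's computed channel count is derived. A minimal sketch of that rule (the function name `computedChannelCount` is hypothetical, not Blink's API):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

enum class ChannelCountMode { Max, ClampedMax, Explicit };

// Computed number of channels for a mixing input, following the Web Audio
// rules: "max" takes the widest connection and ignores channelCount,
// "clamped-max" clamps that to channelCount, "explicit" ignores connections.
unsigned computedChannelCount(ChannelCountMode mode,
                              unsigned channelCount,
                              const std::vector<unsigned>& connectionChannels) {
  unsigned maxConnected = 1;  // at least mono, even with no connections
  for (unsigned c : connectionChannels)
    maxConnected = std::max(maxConnected, c);
  switch (mode) {
    case ChannelCountMode::Max:
      return maxConnected;
    case ChannelCountMode::ClampedMax:
      return std::min(maxConnected, channelCount);
    case ChannelCountMode::Explicit:
      return channelCount;
  }
  return channelCount;  // unreachable, silences compiler warnings
}
```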
118 121
119 NodeType getNodeType() const { return m_nodeType; } 122 NodeType getNodeType() const { return m_nodeType; }
120 String nodeTypeName() const; 123 String nodeTypeName() const;
121 124
122 // This object has been connected to another object. This might have 125 // This object has been connected to another object. This might have
123 // existing connections from others. 126 // existing connections from others.
124 // This function must be called after acquiring a connection reference. 127 // This function must be called after acquiring a connection reference.
125 void makeConnection(); 128 void makeConnection();
126 // This object will be disconnected from another object. This might have 129 // This object will be disconnected from another object. This might have
127 // remaining connections from others. 130 // remaining connections from others.
128 // This function must be called before releasing a connection reference. 131 // This function must be called before releasing a connection reference.
129 void breakConnection(); 132 void breakConnection();
130 133
131 // Can be called from main thread or context's audio thread. It must be calle d while the context's graph lock is held. 134 // Can be called from main thread or context's audio thread. It must be
135 // called while the context's graph lock is held.
132 void breakConnectionWithLock(); 136 void breakConnectionWithLock();
133 137
134 // The AudioNodeInput(s) (if any) will already have their input data available when process() is called. 138 // The AudioNodeInput(s) (if any) will already have their input data available
135 // Subclasses will take this input data and put the results in the AudioBus(s) of its AudioNodeOutput(s) (if any). 139 // when process() is called. Subclasses will take this input data and put the
140 // results in the AudioBus(s) of its AudioNodeOutput(s) (if any).
136 // Called from context's audio thread. 141 // Called from context's audio thread.
137 virtual void process(size_t framesToProcess) = 0; 142 virtual void process(size_t framesToProcess) = 0;
138 143
139 // No significant resources should be allocated until initialize() is called. 144 // No significant resources should be allocated until initialize() is called.
140 // Processing may not occur until a node is initialized. 145 // Processing may not occur until a node is initialized.
141 virtual void initialize(); 146 virtual void initialize();
142 virtual void uninitialize(); 147 virtual void uninitialize();
143 148
144 // Clear internal state when the node is disabled. When a node is disabled, 149 // Clear internal state when the node is disabled. When a node is disabled,
145 // it is no longer pulled so any internal state is never updated. But some 150 // it is no longer pulled so any internal state is never updated. But some
(...skipping 11 matching lines...) Expand all
157 // Number of output channels. This only matters for ScriptProcessorNodes. 162 // Number of output channels. This only matters for ScriptProcessorNodes.
158 virtual unsigned numberOfOutputChannels() const; 163 virtual unsigned numberOfOutputChannels() const;
159 164
160 // The argument must be less than numberOfInputs(). 165 // The argument must be less than numberOfInputs().
161 AudioNodeInput& input(unsigned); 166 AudioNodeInput& input(unsigned);
162 // The argument must be less than numberOfOutputs(). 167 // The argument must be less than numberOfOutputs().
163 AudioNodeOutput& output(unsigned); 168 AudioNodeOutput& output(unsigned);
164 169
165 virtual float sampleRate() const { return m_sampleRate; } 170 virtual float sampleRate() const { return m_sampleRate; }
166 171
167 // processIfNecessary() is called by our output(s) when the rendering graph ne eds this AudioNode to process. 172 // processIfNecessary() is called by our output(s) when the rendering graph
168 // This method ensures that the AudioNode will only process once per rendering time quantum even if it's called repeatedly. 173 // needs this AudioNode to process. This method ensures that the AudioNode
169 // This handles the case of "fanout" where an output is connected to multiple AudioNode inputs. 174 // will only process once per rendering time quantum even if it's called
170 // Called from context's audio thread. 175 // repeatedly. This handles the case of "fanout" where an output is connected
176 // to multiple AudioNode inputs. Called from context's audio thread.
171 void processIfNecessary(size_t framesToProcess); 177 void processIfNecessary(size_t framesToProcess);
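The once-per-quantum guard that `processIfNecessary()` describes can be sketched as follows. This is a toy model under assumed names (`ToyProcessor`, a quantum counter), not Blink's implementation: the node records the last rendering quantum it processed, so a second pull in the same quantum (fanout through two inputs) is a no-op.

```cpp
#include <cassert>
#include <cstdint>

// Toy fanout guard: remember the last rendering quantum we processed so
// repeated pulls within one quantum do the expensive work only once.
struct ToyProcessor {
  uint64_t lastProcessedQuantum = UINT64_MAX;  // sentinel: never processed
  int processCount = 0;

  void processIfNecessary(uint64_t currentQuantum) {
    if (lastProcessedQuantum == currentQuantum)
      return;  // already rendered this quantum, e.g. pulled via two inputs
    lastProcessedQuantum = currentQuantum;
    ++processCount;  // real code would pull inputs and fill output buses here
  }
};
```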
172 178
173 // Called when a new connection has been made to one of our inputs or the conn ection number of channels has changed. 179 // Called when a new connection has been made to one of our inputs or the
174 // This potentially gives us enough information to perform a lazy initializati on or, if necessary, a re-initialization. 180 // connection number of channels has changed. This potentially gives us
175 // Called from main thread. 181 // enough information to perform a lazy initialization or, if necessary, a
182 // re-initialization. Called from main thread.
176 virtual void checkNumberOfChannelsForInput(AudioNodeInput*); 183 virtual void checkNumberOfChannelsForInput(AudioNodeInput*);
177 184
178 #if DEBUG_AUDIONODE_REFERENCES 185 #if DEBUG_AUDIONODE_REFERENCES
179 static void printNodeCounts(); 186 static void printNodeCounts();
180 #endif 187 #endif
181 188
182 // tailTime() is the length of time (not counting latency time) where 189 // tailTime() is the length of time (not counting latency time) where
183 // non-zero output may occur after continuous silent input. 190 // non-zero output may occur after continuous silent input.
184 virtual double tailTime() const; 191 virtual double tailTime() const;
185 192
186 // latencyTime() is the length of time it takes for non-zero output to 193 // latencyTime() is the length of time it takes for non-zero output to
187 // appear after non-zero input is provided. This only applies to processing 194 // appear after non-zero input is provided. This only applies to processing
188 // delay which is an artifact of the processing algorithm chosen and is 195 // delay which is an artifact of the processing algorithm chosen and is
189 // *not* part of the intrinsic desired effect. For example, a "delay" effect 196 // *not* part of the intrinsic desired effect. For example, a "delay" effect
190 // is expected to delay the signal, and thus would not be considered 197 // is expected to delay the signal, and thus would not be considered
191 // latency. 198 // latency.
192 virtual double latencyTime() const; 199 virtual double latencyTime() const;
193 200
194 // propagatesSilence() should return true if the node will generate silent out put when given silent input. By default, AudioNode 201 // propagatesSilence() should return true if the node will generate silent
195 // will take tailTime() and latencyTime() into account when determining whethe r the node will propagate silence. 202 // output when given silent input. By default, AudioNode will take tailTime()
203 // and latencyTime() into account when determining whether the node will
204 // propagate silence.
196 virtual bool propagatesSilence() const; 205 virtual bool propagatesSilence() const;
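The interaction of tailTime() and latencyTime() with silence propagation can be sketched as a simple timing rule. This is a hypothetical model (`ToySilence` and its members are invented names): a node's output may be treated as silent once its inputs have been silent for longer than the tail plus latency, because nothing non-zero can still be "in flight" through the node.

```cpp
#include <cassert>

// Toy version of the default silence rule: output is silent once the inputs
// have been silent for longer than tailTime + latencyTime.
struct ToySilence {
  double tailTime;               // seconds of ringing after input goes silent
  double latencyTime;            // seconds of processing delay
  double lastNonSilentTime = 0;  // context time of last non-silent input

  void sawNonSilentInput(double now) { lastNonSilentTime = now; }

  bool propagatesSilence(double now) const {
    return now > lastNonSilentTime + tailTime + latencyTime;
  }
};
```

A reverb, for example, has a long tailTime, so it keeps being pulled well after its input falls silent; a gain node has zero tail and zero latency and goes silent immediately.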
  bool inputsAreSilent();
  void silenceOutputs();
  void unsilenceOutputs();

  void enableOutputsIfNecessary();
  void disableOutputsIfNecessary();

  unsigned long channelCount();
  virtual void setChannelCount(unsigned long, ExceptionState&);
(...skipping 13 matching lines...)

  void updateChannelCountMode();
  void updateChannelInterpretation();

 protected:
  // Inputs and outputs must be created before the AudioHandler is
  // initialized.
  void addInput();
  void addOutput(unsigned numberOfChannels);

  // Called by processIfNecessary() to cause all parts of the rendering graph
  // connected to us to process. Each rendering quantum, the audio data for
  // each of the AudioNode's inputs will be available after this method is
  // called. Called from context's audio thread.
  virtual void pullInputs(size_t framesToProcess);

  // Force all inputs to take any channel interpretation changes into account.
  void updateChannelsForInputs();

 private:
  void setNodeType(NodeType);

  volatile bool m_isInitialized;
  NodeType m_nodeType;
(...skipping 30 matching lines...)
  AudioBus::ChannelInterpretation m_channelInterpretation;

 protected:
  // Set the (internal) channelCountMode and channelInterpretation
  // accordingly. Use this in the node constructors to set the internal state
  // correctly if the node uses values different from the defaults.
  void setInternalChannelCountMode(ChannelCountMode);
  void setInternalChannelInterpretation(AudioBus::ChannelInterpretation);

  unsigned m_channelCount;
  // The new channel count mode that will be used to set the actual mode in the
  // pre or post rendering phase.
  ChannelCountMode m_newChannelCountMode;
  // The new channel interpretation that will be used to set the actual
  // interpretation in the pre or post rendering phase.
  AudioBus::ChannelInterpretation m_newChannelInterpretation;
};

class MODULES_EXPORT AudioNode : public EventTargetWithInlineData {
  DEFINE_WRAPPERTYPEINFO();
  USING_PRE_FINALIZER(AudioNode, dispose);

(...skipping 27 matching lines...)
  void setChannelCountMode(const String&, ExceptionState&);
  String channelInterpretation() const;
  void setChannelInterpretation(const String&, ExceptionState&);

  // EventTarget
  const AtomicString& interfaceName() const final;
  ExecutionContext* getExecutionContext() const final;

  // Called inside AudioHandler constructors.
  void didAddOutput(unsigned numberOfOutputs);
  // Like disconnect, but no exception is thrown if the outputIndex is invalid.
  // Just do nothing in that case.
  void disconnectWithoutException(unsigned outputIndex);

 protected:
  explicit AudioNode(BaseAudioContext&);
  // This should be called in a constructor.
  void setHandler(PassRefPtr<AudioHandler>);

 private:
  void dispose();
  void disconnectAllFromOutput(unsigned outputIndex);
(...skipping 12 matching lines...)
  HeapVector<Member<HeapHashSet<Member<AudioNode>>>> m_connectedNodes;
  // Represents audio node graph with Oilpan references. N-th HeapHashSet
  // represents a set of AudioParam objects connected to this AudioNode's N-th
  // output.
  HeapVector<Member<HeapHashSet<Member<AudioParam>>>> m_connectedParams;
};

}  // namespace blink

#endif  // AudioNode_h
