# JavaAudioDeviceModule and AndroidAudioDeviceModule

## Creating the AudioDeviceModule (Java)

This class is created by the following call:

```java
JavaAudioDeviceModule.builder(ContextUtils.getApplicationContext())
        .createAudioDeviceModule();
```
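
The Builder exposes a number of knobs before createAudioDeviceModule() is called. A minimal configuration sketch (the setter names follow the upstream JavaAudioDeviceModule.Builder as I recall them; treat the exact set as an assumption and check them against your libwebrtc version):

```java
import org.webrtc.ContextUtils;
import org.webrtc.audio.AudioDeviceModule;
import org.webrtc.audio.JavaAudioDeviceModule;

// Sketch: a typically configured JavaAudioDeviceModule. Setter names are from
// my reading of the upstream Builder; verify against your libwebrtc version.
AudioDeviceModule adm =
    JavaAudioDeviceModule.builder(ContextUtils.getApplicationContext())
        .setUseHardwareAcousticEchoCanceler(true) // use the device AEC if available
        .setUseHardwareNoiseSuppressor(true)      // use the device NS if available
        .setSamplesReadyCallback(samples -> {
          // Raw microphone PCM, e.g. for local recording or level metering.
        })
        .createAudioDeviceModule();
```

All of these values end up as the constructor arguments visible in createAudioDeviceModule(), shown next.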

Start with JavaAudioDeviceModule.Builder.createAudioDeviceModule():

```java
public AudioDeviceModule createAudioDeviceModule() {
    ...
    final WebRtcAudioRecord audioInput = new WebRtcAudioRecord(context, audioManager, audioSource,
        audioFormat, audioRecordErrorCallback, audioRecordStateCallback, samplesReadyCallback,
        useHardwareAcousticEchoCanceler, useHardwareNoiseSuppressor);
    final WebRtcAudioTrack audioOutput = new WebRtcAudioTrack(
        context, audioManager, audioTrackErrorCallback, audioTrackStateCallback);
    return new JavaAudioDeviceModule(context, audioManager, audioInput, audioOutput,
        inputSampleRate, outputSampleRate, useStereoInput, useStereoOutput);
}
```

Two audio-related classes show up here:

- WebRtcAudioRecord: sdk/android/src/java/org/webrtc/audio/WebRtcAudioRecord.java
- WebRtcAudioTrack: sdk/android/src/java/org/webrtc/audio/WebRtcAudioTrack.java

In WebRtcAudioRecord.java, the platform AudioRecord is referenced through the audioRecord member:

```java
private @Nullable AudioRecord audioRecord;
```

Its initRecording() method branches on the SDK version, but both paths end up creating an AudioRecord object:

```java
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    // Use the AudioRecord.Builder class on Android M (23) and above.
    // Throws IllegalArgumentException.
    audioRecord = createAudioRecordOnMOrHigher(
        audioSource, sampleRate, channelConfig, audioFormat, bufferSizeInBytes);
    if (preferredDevice != null) {
        setPreferredDevice(preferredDevice);
    }
} else {
    // Use the old AudioRecord constructor for API levels below 23.
    // Throws UnsupportedOperationException.
    audioRecord = createAudioRecordOnLowerThanM(
        audioSource, sampleRate, channelConfig, audioFormat, bufferSizeInBytes);
}
```
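
The two factory methods are not shown in the excerpt; both boil down to the standard android.media API. A sketch under that assumption (illustrative, not the verbatim WebRTC bodies):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;

// Illustrative sketches of the two creation paths (not the verbatim WebRTC code).
static AudioRecord createAudioRecordOnMOrHigher(
    int audioSource, int sampleRate, int channelConfig, int audioFormat, int bufferSizeInBytes) {
  // API 23+: AudioRecord.Builder validates its arguments and throws IllegalArgumentException.
  return new AudioRecord.Builder()
      .setAudioSource(audioSource)
      .setAudioFormat(new AudioFormat.Builder()
                          .setEncoding(audioFormat)
                          .setSampleRate(sampleRate)
                          .setChannelMask(channelConfig)
                          .build())
      .setBufferSizeInBytes(bufferSizeInBytes)
      .build();
}

static AudioRecord createAudioRecordOnLowerThanM(
    int audioSource, int sampleRate, int channelConfig, int audioFormat, int bufferSizeInBytes) {
  // Pre-23: the legacy constructor.
  return new AudioRecord(audioSource, sampleRate, channelConfig, audioFormat, bufferSizeInBytes);
}
```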

Creating the JavaAudioDeviceModule merely stores the WebRtcAudioRecord in JavaAudioDeviceModule.audioInput; no actual initialization happens yet. So when does initRecording() get called? It is invoked from JavaAudioRecord::InitRecording(). And what is JavaAudioRecord? It is created when the AudioRecordJni is constructed, which brings us to PeerConnectionFactory.Builder.createPeerConnectionFactory(). In that method:

```java
public PeerConnectionFactory createPeerConnectionFactory() {
    ...
    return nativeCreatePeerConnectionFactory(ContextUtils.getApplicationContext(), options,
          audioDeviceModule.getNativeAudioDeviceModulePointer(),
          ...
    );
}
```
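
For context, this is how an application typically hands the module to the factory (a sketch following the pattern in the upstream Android examples; PeerConnectionFactory.Builder.setAudioDeviceModule() is the relevant hook):

```java
import org.webrtc.PeerConnectionFactory;

// Sketch: wiring the ADM into the factory. createPeerConnectionFactory() is
// where getNativeAudioDeviceModulePointer() gets pulled, as shown above.
PeerConnectionFactory factory = PeerConnectionFactory.builder()
    .setAudioDeviceModule(adm)
    .createPeerConnectionFactory();
adm.release(); // common in the upstream examples: the native side keeps its own reference
```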

Look at JavaAudioDeviceModule.getNativeAudioDeviceModulePointer():

```java
@Override
public long getNativeAudioDeviceModulePointer() {
    synchronized (nativeLock) {
        if (nativeAudioDeviceModule == 0) {
            nativeAudioDeviceModule = nativeCreateAudioDeviceModule(context, audioManager,
                audioInput, audioOutput, inputSampleRate, outputSampleRate, useStereoInput,
                useStereoOutput);
        }
        return nativeAudioDeviceModule;
    }
}
```
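
The native pointer is created lazily under nativeLock and cached. The matching teardown mirrors this; a sketch of release() (the JniCommon.nativeReleaseRef call reflects my reading of the upstream sources and should be treated as an assumption):

```java
// Sketch of the matching release path (assumed to mirror the upstream code):
// drop the native reference exactly once, under the same lock.
@Override
public void release() {
    synchronized (nativeLock) {
        if (nativeAudioDeviceModule != 0) {
            JniCommon.nativeReleaseRef(nativeAudioDeviceModule); // assumption: ref-counted native object
            nativeAudioDeviceModule = 0;
        }
    }
}
```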

Now look at the native nativeCreateAudioDeviceModule() method:

```cpp
JNI_GENERATOR_EXPORT jlong
    Java_org_webrtc_audio_JavaAudioDeviceModule_nativeCreateAudioDeviceModule(...) {
  return JNI_JavaAudioDeviceModule_CreateAudioDeviceModule(env,
      base::android::JavaParamRef<jobject>(env, context),
      base::android::JavaParamRef<jobject>(env, audioManager),
      base::android::JavaParamRef<jobject>(env, audioInput),
      base::android::JavaParamRef<jobject>(env, audioOutput),
      inputSampleRate, outputSampleRate, useStereoInput, useStereoOutput);
}
```

Continue into JNI_JavaAudioDeviceModule_CreateAudioDeviceModule():

```cpp
static jlong JNI_JavaAudioDeviceModule_CreateAudioDeviceModule(...) {
    auto audio_input = std::make_unique<AudioRecordJni>(
                            env, input_parameters, kHighLatencyModeDelayEstimateInMilliseconds,
                            j_webrtc_audio_record);
    auto audio_output = std::make_unique<AudioTrackJni>(env, output_parameters,
                            j_webrtc_audio_track);
    return jlongFromPointer(CreateAudioDeviceModuleFromInputAndOutput(
                            AudioDeviceModule::kAndroidJavaAudio,
                            j_use_stereo_input, j_use_stereo_output,
                            kHighLatencyModeDelayEstimateInMilliseconds,
                            std::move(audio_input), std::move(audio_output))
                            .release());
}
```

This creates an AudioRecordJni object, tied through j_webrtc_audio_record to the WebRtcAudioRecord stored in JavaAudioDeviceModule.audioInput (and, symmetrically, an AudioTrackJni tied to the WebRtcAudioTrack). Look at AudioRecordJni's constructor:

```cpp
AudioRecordJni::AudioRecordJni(AudioManager* audio_manager)...{
    j_audio_record_.reset(
        new JavaAudioRecord(j_native_registration_.get(),
                            j_native_registration_->NewObject(
                                "<init>", "(J)V", PointerTojlong(this))));
    ...
}
```
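
The "(J)V" constructor call passes PointerTojlong(this), i.e. the C++ object's address, into the Java object. A sketch of the Java side of this handshake (the names here are representative, not the exact WebRTC ones):

```java
// Representative sketch (hypothetical names): the jlong from the "(J)V"
// constructor is cached and handed back on every native call, so the C++
// side can recover the owning AudioRecordJni instance.
class WebRtcAudioRecord {
  private final long nativeAudioRecord; // opaque AudioRecordJni* owned by C++

  WebRtcAudioRecord(long nativeAudioRecord) { // bound to the "(J)V" constructor above
    this.nativeAudioRecord = nativeAudioRecord;
  }

  void reportRecordedData(int bytesRead) {
    // Hypothetical callback: notify the native side that new samples are ready.
    nativeDataIsRecorded(nativeAudioRecord, bytesRead);
  }

  private native void nativeDataIsRecorded(long nativeAudioRecordJni, int bytes);
}
```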

Continue into CreateAudioDeviceModuleFromInputAndOutput():

```cpp
rtc::scoped_refptr<AudioDeviceModule> CreateAudioDeviceModuleFromInputAndOutput(...) {
    return new rtc::RefCountedObject<AndroidAudioDeviceModule>(
        audio_layer, is_stereo_playout_supported, is_stereo_record_supported,
        playout_delay_ms, std::move(audio_input), std::move(audio_output));
}
```

This creates the AndroidAudioDeviceModule (an AudioDeviceModule).

With that, the AudioRecordJni has become the input_ member of the AndroidAudioDeviceModule (AudioDeviceModule); calls such as AndroidAudioDeviceModule::InitRecording() are forwarded to input_, which is what eventually calls back into the Java initRecording() seen earlier:

```cpp
const std::unique_ptr<AudioInput> input_;
```

What the WebRTC native side really needs is this AudioDeviceModule class. The story for WebRtcAudioTrack is analogous. So which native class does the AudioDeviceModule finally get attached to? This one:

```cpp
// media/engine/webrtc_voice_engine.h
class WebRtcVoiceEngine final : public VoiceEngineInterface {
    ...
    rtc::scoped_refptr<webrtc::AudioDeviceModule> adm_;
    ...
};
```

The basic class diagram is as follows:

*(class diagram: JavaAudioDeviceModule)*