[iOS] getUserMedia sometimes doesn't capture from specified microphone
https://bugs.webkit.org/show_bug.cgi?id=228753
rdar://79704226

Reviewed by Youenn Fablet.

Source/WebCore:

The system will always choose the "default" audio input source unless
+[AVAudioSession setPreferredInput:error:] is called first, and that only works
if the audio session category has been set to PlayAndRecord *before* it is called,
so configure the audio session for recording before we choose and configure the
audio capture device.

Tested manually; this only reproduces on hardware.

* platform/audio/PlatformMediaSessionManager.cpp:
(WebCore::PlatformMediaSessionManager::activeAudioSessionRequired const): Audio
capture requires an active audio session.
(WebCore::PlatformMediaSessionManager::removeSession): Move `#if USE(AUDIO_SESSION)`
guard inside of maybeDeactivateAudioSession so it isn't spread throughout the file.
(WebCore::PlatformMediaSessionManager::sessionWillBeginPlayback): Ditto.
(WebCore::PlatformMediaSessionManager::processWillSuspend): Ditto.
(WebCore::PlatformMediaSessionManager::processDidResume): Ditto.
(WebCore::PlatformMediaSessionManager::sessionCanProduceAudioChanged): Add logging,
call `maybeActivateAudioSession()` so we activate the audio session if necessary.
(WebCore::PlatformMediaSessionManager::addAudioCaptureSource): Call updateSessionState
instead of scheduleUpdateSessionState so the audio session category is updated
immediately.
(WebCore::PlatformMediaSessionManager::maybeDeactivateAudioSession): Move
`#if USE(AUDIO_SESSION)` into the function so it doesn't need to be spread
throughout the file.
(WebCore::PlatformMediaSessionManager::maybeActivateAudioSession): Ditto.
* platform/audio/PlatformMediaSessionManager.h:
(WebCore::PlatformMediaSessionManager::isApplicationInBackground const):

* platform/audio/ios/AudioSessionIOS.mm:
(WebCore::AudioSessionIOS::setPreferredBufferSize): Log an error if we are unable
to set the preferred buffer size.

* platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h:
* platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm:
(WebCore::AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID):
New, set the preferred input so capture will select the device we want.
(WebCore::AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices): Remove
m_recomputeDevices; `setAudioCaptureDevices` has been restructured so it is no longer needed.
(WebCore::AVAudioSessionCaptureDeviceManager::computeCaptureDevices): Ditto.
(WebCore::AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices): Don't update
the list of capture devices when the default device changes, only when a device is
added, removed, enabled, or disabled.

* platform/mediastream/mac/CoreAudioCaptureSource.cpp:
(WebCore::CoreAudioSharedUnit::setCaptureDevice): Call `setPreferredAudioSessionDeviceUID`
so the correct device is selected.
(WebCore::CoreAudioSharedUnit::cleanupAudioUnit): Clear m_persistentID.
(WebCore::CoreAudioCaptureSource::create): Return an error with a string so the
web process can detect a failure.
(WebCore::CoreAudioCaptureSource::stopProducingData): Add logging.

Source/WebKit:

* UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp:
(WebKit::UserMediaCaptureManagerProxy::SourceProxy::audioUnitWillStart): Delete,
we don't need it now that the web process configures the audio session before
capture begins.


Canonical link: https://commits.webkit.org/240298@main
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@280702 268f45cc-cd09-0410-ab3c-d52691b4dbfc
eric-carlson committed Aug 5, 2021
1 parent 15286c3 commit 4250b816558d0d8ae4d9a3a126678377ceade2e4
Showing 9 changed files with 159 additions and 53 deletions.
@@ -1,3 +1,62 @@
2021-08-05 Eric Carlson <eric.carlson@apple.com>

[iOS] getUserMedia sometimes doesn't capture from specified microphone
https://bugs.webkit.org/show_bug.cgi?id=228753
rdar://79704226

Reviewed by Youenn Fablet.

The system will always choose the "default" audio input source unless
+[AVAudioSession setPreferredInput:error:] is called first, and that only works
if the audio session category has been set to PlayAndRecord *before* it is called,
so configure the audio session for recording before we choose and configure the
audio capture device.

Tested manually; this only reproduces on hardware.

* platform/audio/PlatformMediaSessionManager.cpp:
(WebCore::PlatformMediaSessionManager::activeAudioSessionRequired const): Audio
capture requires an active audio session.
(WebCore::PlatformMediaSessionManager::removeSession): Move `#if USE(AUDIO_SESSION)`
guard inside of maybeDeactivateAudioSession so it isn't spread throughout the file.
(WebCore::PlatformMediaSessionManager::sessionWillBeginPlayback): Ditto.
(WebCore::PlatformMediaSessionManager::processWillSuspend): Ditto.
(WebCore::PlatformMediaSessionManager::processDidResume): Ditto.
(WebCore::PlatformMediaSessionManager::sessionCanProduceAudioChanged): Add logging,
call `maybeActivateAudioSession()` so we activate the audio session if necessary.
(WebCore::PlatformMediaSessionManager::addAudioCaptureSource): Call updateSessionState
instead of scheduleUpdateSessionState so the audio session category is updated
immediately.
(WebCore::PlatformMediaSessionManager::maybeDeactivateAudioSession): Move
`#if USE(AUDIO_SESSION)` into the function so it doesn't need to be spread
throughout the file.
(WebCore::PlatformMediaSessionManager::maybeActivateAudioSession): Ditto.
* platform/audio/PlatformMediaSessionManager.h:
(WebCore::PlatformMediaSessionManager::isApplicationInBackground const):

* platform/audio/ios/AudioSessionIOS.mm:
(WebCore::AudioSessionIOS::setPreferredBufferSize): Log an error if we are unable
to set the preferred buffer size.

* platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.h:
* platform/mediastream/ios/AVAudioSessionCaptureDeviceManager.mm:
(WebCore::AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID):
New, set the preferred input so capture will select the device we want.
(WebCore::AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices): Remove
m_recomputeDevices; `setAudioCaptureDevices` has been restructured so it is no longer needed.
(WebCore::AVAudioSessionCaptureDeviceManager::computeCaptureDevices): Ditto.
(WebCore::AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices): Don't update
the list of capture devices when the default device changes, only when a device is
added, removed, enabled, or disabled.

* platform/mediastream/mac/CoreAudioCaptureSource.cpp:
(WebCore::CoreAudioSharedUnit::setCaptureDevice): Call `setPreferredAudioSessionDeviceUID`
so the correct device is selected.
(WebCore::CoreAudioSharedUnit::cleanupAudioUnit): Clear m_persistentID.
(WebCore::CoreAudioCaptureSource::create): Return an error with a string so the
web process can detect a failure.
(WebCore::CoreAudioCaptureSource::stopProducingData): Add logging.

2021-08-05 Alan Bujtas <zalan@apple.com>

Document::isLayoutTimerActive should read isLayoutPending
@@ -1,5 +1,5 @@
/*
* Copyright (C) 2013-2020 Apple Inc. All rights reserved.
* Copyright (C) 2013-2021 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -116,8 +116,11 @@ bool PlatformMediaSessionManager::has(PlatformMediaSession::MediaType type) cons

bool PlatformMediaSessionManager::activeAudioSessionRequired() const
{
return anyOfSessions([] (auto& session) {
return session.activeAudioSessionRequired();
if (anyOfSessions([] (auto& session) { return session.activeAudioSessionRequired(); }))
return true;

return WTF::anyOf(m_audioCaptureSources, [](auto& source) {
return source.isCapturingAudio();
});
}

@@ -199,10 +202,8 @@ void PlatformMediaSessionManager::removeSession(PlatformMediaSession& session)

m_sessions.remove(index);

#if USE(AUDIO_SESSION)
if (hasNoSession())
maybeDeactivateAudioSession();
#endif

#if !RELEASE_LOG_DISABLED
m_logger->removeLogger(session.logger());
@@ -237,17 +238,10 @@ bool PlatformMediaSessionManager::sessionWillBeginPlayback(PlatformMediaSession&
return false;
}

#if USE(AUDIO_SESSION)
if (activeAudioSessionRequired()) {
if (!AudioSession::sharedSession().tryToSetActive(true)) {
ALWAYS_LOG(LOGIDENTIFIER, session.logIdentifier(), " returning false failed to set active AudioSession");
return false;
}

ALWAYS_LOG(LOGIDENTIFIER, session.logIdentifier(), " sucessfully activated AudioSession");
m_becameActive = true;
if (!maybeActivateAudioSession()) {
ALWAYS_LOG(LOGIDENTIFIER, session.logIdentifier(), " returning false, failed to activate AudioSession");
return false;
}
#endif

if (m_interrupted)
endInterruption(PlatformMediaSession::NoFlags);
@@ -394,9 +388,7 @@ void PlatformMediaSessionManager::processWillSuspend()
session.client().processIsSuspendedChanged();
});

#if USE(AUDIO_SESSION)
maybeDeactivateAudioSession();
#endif
}

void PlatformMediaSessionManager::processDidResume()
@@ -410,10 +402,8 @@ void PlatformMediaSessionManager::processDidResume()
});

#if USE(AUDIO_SESSION)
if (!m_becameActive && activeAudioSessionRequired()) {
m_becameActive = AudioSession::sharedSession().tryToSetActive(true);
ALWAYS_LOG(LOGIDENTIFIER, "tried to set active AudioSession, ", m_becameActive ? "succeeded" : "failed");
}
if (!m_becameActive)
maybeActivateAudioSession();
#endif
}

@@ -437,6 +427,8 @@ void PlatformMediaSessionManager::sessionIsPlayingToWirelessPlaybackTargetChange

void PlatformMediaSessionManager::sessionCanProduceAudioChanged()
{
ALWAYS_LOG(LOGIDENTIFIER);
maybeActivateAudioSession();
updateSessionState();
}

@@ -581,7 +573,7 @@ void PlatformMediaSessionManager::addAudioCaptureSource(PlatformMediaSession::Au
{
ASSERT(!m_audioCaptureSources.contains(source));
m_audioCaptureSources.add(source);
scheduleUpdateSessionState();
updateSessionState();
}


@@ -603,18 +595,31 @@ void PlatformMediaSessionManager::scheduleUpdateSessionState()
});
}

#if USE(AUDIO_SESSION)
void PlatformMediaSessionManager::maybeDeactivateAudioSession()
{
#if USE(AUDIO_SESSION)
if (!m_becameActive || !shouldDeactivateAudioSession())
return;

ALWAYS_LOG(LOGIDENTIFIER, "tried to set inactive AudioSession");
AudioSession::sharedSession().tryToSetActive(false);
m_becameActive = false;
}
#endif
}

bool PlatformMediaSessionManager::maybeActivateAudioSession()
{
#if USE(AUDIO_SESSION)
if (!activeAudioSessionRequired())
return true;

m_becameActive = AudioSession::sharedSession().tryToSetActive(true);
ALWAYS_LOG(LOGIDENTIFIER, m_becameActive ? "successfully activated" : "failed to activate", " AudioSession");
return m_becameActive;
#else
return true;
#endif
}
static bool& deactivateAudioSession()
{
static bool deactivate;
@@ -181,9 +181,8 @@ class PlatformMediaSessionManager
bool anyOfSessions(const Function<bool(const PlatformMediaSession&)>&) const;

bool isApplicationInBackground() const { return m_isApplicationInBackground; }
#if USE(AUDIO_SESSION)
void maybeDeactivateAudioSession();
#endif
bool maybeActivateAudioSession();

#if !RELEASE_LOG_DISABLED
const Logger& logger() const final { return m_logger; }
@@ -1,5 +1,5 @@
/*
* Copyright (C) 2013-2019 Apple Inc. All rights reserved.
* Copyright (C) 2013-2021 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -301,6 +301,7 @@ static void setEligibleForSmartRouting(bool eligible)
NSError *error = nil;
float duration = bufferSize / sampleRate();
[[PAL::getAVAudioSessionClass() sharedInstance] setPreferredIOBufferDuration:duration error:&error];
RELEASE_LOG_ERROR_IF(error, Media, "failed to set preferred buffer duration to %f with error: %@", duration, error.localizedDescription);
ASSERT(!error);
}

@@ -57,6 +57,8 @@ class AVAudioSessionCaptureDeviceManager final : public CaptureDeviceManager {
void enableAllDevicesQuery();
void disableAllDevicesQuery();

void setPreferredAudioSessionDeviceUID(const String&);

private:
AVAudioSessionCaptureDeviceManager();
~AVAudioSessionCaptureDeviceManager();
@@ -144,12 +144,29 @@ - (void)routeDidChange:(NSNotification *)notification
return std::nullopt;
}

void AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices()
void AVAudioSessionCaptureDeviceManager::setPreferredAudioSessionDeviceUID(const String& deviceUID)
{
if (m_recomputeDevices)
AVAudioSessionPortDescription *preferredPort = nil;
NSString *nsDeviceUID = deviceUID;
for (AVAudioSessionPortDescription *portDescription in [m_audioSession availableInputs]) {
if ([portDescription.UID isEqualToString:nsDeviceUID]) {
preferredPort = portDescription;
break;
}
}

if (!preferredPort) {
RELEASE_LOG_ERROR(WebRTC, "failed to find preferred input '%{public}s'", deviceUID.ascii().data());
return;
}

NSError *error = nil;
if (![[PAL::getAVAudioSessionClass() sharedInstance] setPreferredInput:preferredPort error:&error])
RELEASE_LOG_ERROR(WebRTC, "failed to set preferred input to '%{public}s' with error: %@", deviceUID.ascii().data(), error.localizedDescription);
}

m_recomputeDevices = true;
void AVAudioSessionCaptureDeviceManager::scheduleUpdateCaptureDevices()
{
computeCaptureDevices([] { });
}

@@ -180,13 +197,9 @@ - (void)routeDidChange:(NSNotification *)notification
});
}

if (!m_recomputeDevices)
return;

m_dispatchQueue->dispatch([this, completion = WTFMove(completion)] () mutable {
auto newAudioDevices = retrieveAudioSessionCaptureDevices();
callOnWebThreadOrDispatchAsyncOnMainThread(makeBlockPtr([this, completion = WTFMove(completion), newAudioDevices = WTFMove(newAudioDevices).isolatedCopy()] () mutable {
m_recomputeDevices = false;
setAudioCaptureDevices(WTFMove(newAudioDevices));
completion();
}).get());
@@ -220,17 +233,34 @@ - (void)routeDidChange:(NSNotification *)notification
void AVAudioSessionCaptureDeviceManager::setAudioCaptureDevices(Vector<AVAudioSessionCaptureDevice>&& newAudioDevices)
{
bool firstTime = !m_devices;
bool haveDeviceChanges = !m_devices || newAudioDevices.size() != m_devices->size();
if (!haveDeviceChanges) {
for (size_t i = 0; i < newAudioDevices.size(); ++i) {
auto& oldState = (*m_devices)[i];
auto& newState = newAudioDevices[i];
if (newState.type() != oldState.type() || newState.persistentId() != oldState.persistentId() || newState.enabled() != oldState.enabled() || newState.isDefault() != oldState.isDefault())
haveDeviceChanges = true;
bool deviceListChanged = newAudioDevices.size() != m_devices->size();
bool defaultDeviceChanged = false;
if (!deviceListChanged && !firstTime) {
for (auto& newState : newAudioDevices) {

std::optional<CaptureDevice> oldState;
for (const auto& device : m_devices.value()) {
if (device.type() == newState.type() && device.persistentId() == newState.persistentId()) {
oldState = device;
break;
}
}

if (!oldState.has_value()) {
deviceListChanged = true;
break;
}
if (newState.isDefault() != oldState.value().isDefault())
defaultDeviceChanged = true;

if (newState.enabled() != oldState.value().enabled()) {
deviceListChanged = true;
break;
}
}
}

if (!haveDeviceChanges && !firstTime)
if (!deviceListChanged && !firstTime && !defaultDeviceChanged)
return;

auto newDevices = copyToVectorOf<CaptureDevice>(newAudioDevices);
@@ -240,7 +270,7 @@ - (void)routeDidChange:(NSNotification *)notification
});
m_devices = WTFMove(newDevices);

if (!firstTime)
if (deviceListChanged && !firstTime)
deviceChanged();
}

@@ -163,6 +163,9 @@ CoreAudioSharedUnit::CoreAudioSharedUnit()

void CoreAudioSharedUnit::setCaptureDevice(String&& persistentID, uint32_t captureDeviceID)
{
if (m_persistentID == persistentID)
return;

m_persistentID = WTFMove(persistentID);

#if PLATFORM(MAC)
@@ -173,6 +176,7 @@ void CoreAudioSharedUnit::setCaptureDevice(String&& persistentID, uint32_t captu
reconfigureAudioUnit();
#else
UNUSED_PARAM(captureDeviceID);
AVAudioSessionCaptureDeviceManager::singleton().setPreferredAudioSessionDeviceUID(m_persistentID);
#endif
}

@@ -466,6 +470,7 @@ void CoreAudioSharedUnit::cleanupAudioUnit()

m_microphoneSampleBuffer = nullptr;
m_speakerSampleBuffer = nullptr;
m_persistentID = emptyString();
#if !LOG_DISABLED
m_ioUnitName = emptyString();
#endif
@@ -619,7 +624,7 @@ CaptureSourceOrError CoreAudioCaptureSource::create(String&& deviceID, String&&
#elif PLATFORM(IOS_FAMILY)
auto device = AVAudioSessionCaptureDeviceManager::singleton().audioSessionDeviceWithUID(WTFMove(deviceID));
if (!device)
return { };
return { "No AVAudioSessionCaptureDevice device"_s };

auto source = adoptRef(*new CoreAudioCaptureSource(WTFMove(deviceID), String { device->label() }, WTFMove(hashSalt), 0));
#endif
@@ -760,6 +765,7 @@ void CoreAudioCaptureSource::startProducingData()

void CoreAudioCaptureSource::stopProducingData()
{
ALWAYS_LOG_IF(loggerPtr(), LOGIDENTIFIER);
unit().stopProducingData();
}

@@ -1,3 +1,16 @@
2021-08-05 Eric Carlson <eric.carlson@apple.com>

[iOS] getUserMedia sometimes doesn't capture from specified microphone
https://bugs.webkit.org/show_bug.cgi?id=228753
rdar://79704226

Reviewed by Youenn Fablet.

* UIProcess/Cocoa/UserMediaCaptureManagerProxy.cpp:
(WebKit::UserMediaCaptureManagerProxy::SourceProxy::audioUnitWillStart): Delete,
we don't need it now that the web process configures the audio session before
capture begins.

2021-08-05 Youenn Fablet <youenn@apple.com>

GPUProcessProxy should send tccd mach lookup sandbox extension
