[WebRTC] Rework device handling sequence so that we can handle unplugging/re-plugging devices #4593
Merged
Parent:
Release/2025.07
Conversation
akleshchev approved these changes on Aug 21, 2025
Nitpick: a typo in the commit's name: siSgning instead of signing.
Force-pushed from 904ea4c to c6dde7d
Force-pushed from 6c24097 to db89690
…ging/re-plugging devices

The device handling was not processing device updates in the proper sequence, as things like AEC use both input and output devices. Devices like headsets are both, so unplugging them resulted in various mute conditions and sometimes even a crash. Now, we update both capture and render devices at once in the proper sequence.

Test Guidance:
* Bring two users to the same place in webrtc regions.
* The 'listening' one should have a headset or something set as 'Default'.
* Press 'talk' on one, and verify the other can hear.
* Unplug the headset from the listening one.
* Validate that audio changes from the headset to the speakers.
* Plug the headset back in.
* Validate that audio changes from the speakers to the headset.
* Do the same type of test with the headset viewer talking.
* The microphone used should switch from the headset to the computer's (it should have one).
* Do other various device tests, such as setting devices explicitly, messing with the device selector, etc.
The primary feature of this commit is to update libwebrtc from m114 to m137. This is needed to make webrtc buildable, as m114 is not buildable by the current toolset. m137 made some changes to the API, which required renaming or changing the namespace of some calls. Additionally, this PR moves from a callback mechanism for gathering the energy levels for tuning to a wrapper AudioDeviceModule, which gives us more control over the audio stream. Finally, the new m137-based webrtc has been updated to allow for 192 kHz audio streams.
This change updates to m137 from m114, which required a few API changes.
Additionally, this fixes the hiss that happens shortly after someone unmutes: secondlife/server#2094
There was also an issue with a slight amount of repeated audio after unmuting if there was audio right before unmuting. This is because the audio processing and buffering still held audio from the previous speaking session. Now, we inject nearly a half second of silence into the audio buffers/processor after unmuting to flush things.
m137 improved the AGC pipeline, and the existing analog style is going away, so we moved to the new digital pipeline. We also did some tweaking of audio levels so that we don't see in-world bars when tuning, so one's own bars seem a reasonable size, etc.
… pile up

Also, mute when leaving webrtc-enabled regions or parcels, and unmute when voice comes back.
Force-pushed from db89690 to a9a18a9
Issues:
Maybe #3919
Maybe #3225
#3085
#2509
#4004
Maybe #4596
#4627
#4648
#4652
#4653
#4642
The device handling was not processing device updates in the proper sequence, as things like AEC use both input and output devices. Devices like headsets are both, so unplugging them resulted in various mute conditions and sometimes even a crash. Now, we update both capture and render devices at once in the proper sequence.
Additionally, this included an update from m114 to m137 of the webrtc library, which allowed us to add support for 192 kHz.
Test Guidance:
Device Handling
Do other various device tests, such as setting devices explicitly, messing with the device selector, etc.
m137 Update
Additionally, as this is an upgrade from m114 to m137, we'll need to do a fairly thorough general voice pass.
Multi-Channel Devices
Support was added for surround (4-channel, 8-channel, and more) output devices, as well as multi-channel input devices (mixers). To test:
You may be able to test with virtual multi-channel devices.
AGC
We now support AGC2 in webrtc (digital AGC), which is the recommended approach; AGC1 may be deprecated in that version. To test:
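For reference while testing, switching from the analog-style AGC1 to the digital AGC2 pipeline is controlled through `webrtc::AudioProcessing::Config`. The fragment below is a hedged sketch based on recent libwebrtc releases, not copied from this PR; field availability may differ in m137.

```cpp
// Config fragment (assumes a webrtc::AudioProcessing instance `apm`):
// disable the legacy analog-style AGC1 and enable digital AGC2.
webrtc::AudioProcessing::Config apm_config = apm->GetConfig();
apm_config.gain_controller1.enabled = false;              // legacy AGC off
apm_config.gain_controller2.enabled = true;               // digital AGC2 on
apm_config.gain_controller2.adaptive_digital.enabled = true;
apm->ApplyConfig(apm_config);
```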
Self Audio Level Handling
Audio level handling was tweaked for both tuning and in-world for self. Validate it looks reasonable.