
Conversation

@roxanneskelly
Contributor

@roxanneskelly roxanneskelly commented Aug 21, 2025

Issues:
Maybe #3919
Maybe #3225
#3085
#2509
#4004
Maybe #4596
#4627
#4648
#4652
#4653
#4642

The device handling was not processing device updates in the proper sequence, which matters because
things like AEC use both input and output devices. Devices like headsets are both, so unplugging
them resulted in various mute conditions and sometimes even a crash.

Now, we update both capture and render devices at once in the proper sequence.
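
As a rough illustration of that idea (a hedged sketch with hypothetical helper names, not the viewer's actual code), a device change is treated as a single event that touches both directions in a fixed order:

```cpp
#include <string>

// All functions other than onAudioDeviceChange are hypothetical stand-ins for
// the viewer's device layer; declarations are included only so the sketch is
// self-contained.
void stopCapture();
void stopRender();
void startCapture();
void startRender();
void selectCaptureDevice(const std::string& id);
void selectRenderDevice(const std::string& id);

// Hypothetical sketch: react to a device-change notification by updating
// capture and render together rather than handling each device class
// independently.
void onAudioDeviceChange(const std::string& captureId, const std::string& renderId)
{
    // 1. Stop both streams so AEC, which spans input and output, never sees
    //    a half-updated device state.
    stopCapture();
    stopRender();

    // 2. Re-select devices; a headset that just disappeared falls back to the
    //    system default on both sides at the same time.
    selectCaptureDevice(captureId.empty() ? "Default" : captureId);
    selectRenderDevice(renderId.empty() ? "Default" : renderId);

    // 3. Restart render before capture so echo cancellation has a far-end
    //    reference before microphone data starts flowing again.
    startRender();
    startCapture();
}
```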

Additionally, this includes an update of the webrtc library from m114 to m137, which allowed
us to add support for 192 kHz audio.

Test Guidance:

Device Handling

  • Bring two users in the same place in webrtc regions.
  • The 'listening' one should have a headset or something similar set as 'Default'
  • Press 'talk' on one, and verify the other can hear.
  • Unplug the headset from the listening one.
  • Validate that audio changes from the headset to the speakers.
  • Plug the headset back in.
  • Validate that audio changes from speakers to headset.
  • Do the same type of test with the headset viewer talking.
  • The microphone used should switch from the headset to the computer's built-in microphone (the computer should have one)

Do various other device tests, such as setting devices explicitly, messing with the device selector, etc.

m137 Update
Additionally, as this is an upgrade from m114 to m137, we'll need to do a fairly thorough general voice pass.

  • general voice
  • cross-region voice
  • parcel voice in various combinations.
  • calls of various types (this exercises bringing up and tearing down connections).

Multi-Channel Devices
Support was added for surround output devices (4-channel, 8-channel, and more), as well as multi-channel input devices (mixers). To test (an illustrative downmix sketch follows the steps):

  • Attach a surround output device to your computer (if you don't have one, see the note about virtual devices below)
  • Attach a multi-channel input device to your computer
  • Select them in the viewer
  • Test audio in both directions.
    You may be able to test with virtual multi-channel devices.
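
For intuition only (a generic technique, not necessarily how the viewer handles surround or mixer devices), folding a multi-channel capture frame down to mono can be as simple as averaging the interleaved channels:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: average interleaved N-channel 16-bit PCM down to mono.
std::vector<int16_t> downmixToMono(const std::vector<int16_t>& interleaved, int channels)
{
    std::vector<int16_t> mono(interleaved.size() / channels);
    for (std::size_t frame = 0; frame < mono.size(); ++frame)
    {
        int32_t sum = 0;
        for (int ch = 0; ch < channels; ++ch)
        {
            sum += interleaved[frame * channels + ch];
        }
        mono[frame] = static_cast<int16_t>(sum / channels);
    }
    return mono;
}
```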

AGC
We now support AGC2 in webrtc (digital AGC), which is the recommended approach; AGC1 may be deprecated in that version. To test (a configuration sketch follows the steps):

  • send audio with AGC turned off
  • send audio with AGC turned on
  • Validate they sound different.
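
For reference, switching WebRTC's audio processing module from the legacy AGC1 to the digital AGC2 path looks roughly like the following. Treat it as a sketch: the include path and config field names vary between WebRTC milestones and may differ in m137.

```cpp
#include "modules/audio_processing/include/audio_processing.h"

// Sketch: disable the legacy analog/hybrid AGC1 and enable digital AGC2.
void enableDigitalAgc(webrtc::AudioProcessing* apm)
{
    webrtc::AudioProcessing::Config cfg = apm->GetConfig();
    cfg.gain_controller1.enabled = false;              // legacy AGC1 off
    cfg.gain_controller2.enabled = true;               // digital AGC2 on
    cfg.gain_controller2.adaptive_digital.enabled = true;
    apm->ApplyConfig(cfg);
}
```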

Self Audio Level Handling
Audio level handling for one's own voice was tweaked both in tuning and in-world. Validate that the levels look reasonable.

@github-actions github-actions bot added the c/cpp label Aug 21, 2025
@roxanneskelly roxanneskelly marked this pull request as ready for review August 22, 2025 16:31
Base automatically changed from release/2025.06 to main August 29, 2025 16:48
@akleshchev akleshchev changed the base branch from main to develop September 1, 2025 07:18
@akleshchev
Contributor

akleshchev commented Sep 9, 2025

Install NSIS during windows sisgning and package build step

Nitpick: A typo in commit's name: siSgning instead of signing

…ging/re-plugging devices

The device handling was not processing device updates in the proper sequence, which matters
because things like AEC use both input and output devices.  Devices like headsets are both,
so unplugging them resulted in various mute conditions and sometimes even a crash.

Now, we update both capture and render devices at once in the proper sequence.

Test Guidance:
* Bring two users in the same place in webrtc regions.
* The 'listening' one should have a headset or something similar set as 'Default'
* Press 'talk' on one, and verify the other can hear.
* Unplug the headset from the listening one.
* Validate that audio changes from the headset to the speakers.
* Plug the headset back in.
* Validate that audio changes from speakers to headset.
* Do the same type of test with the headset viewer talking.
* The microphone used should switch from the headset to the computer's built-in microphone (the computer should have one)

Do various other device tests, such as setting devices explicitly, messing with the device selector, etc.

The primary feature of this commit is to update libwebrtc from m114
to m137.  This is needed because m114 is no longer buildable with the
current toolset.

m137 had some changes to the API, which required renaming or changing namespace
of some of the calls.

Additionally, this PR moves from a callback mechanism for gathering the energy
levels for tuning to a wrapper AudioDeviceModule, which gives us more control
over the audio stream.
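
The level measurement itself is simple; below is a minimal sketch of the kind of calculation such a wrapper can run on the samples passing through its record callback (illustrative, not the actual wrapper module):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>

// Illustrative: RMS energy of one frame of 16-bit PCM, normalized to 0..1,
// suitable for driving a tuning level meter.
float frameRmsLevel(const int16_t* samples, std::size_t count)
{
    if (count == 0)
    {
        return 0.0f;
    }
    double sumSquares = 0.0;
    for (std::size_t i = 0; i < count; ++i)
    {
        const double s = samples[i] / 32768.0;
        sumSquares += s * s;
    }
    return static_cast<float>(std::sqrt(sumSquares / count));
}
```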

Finally, the new m137-based webrtc has been updated to allow for 192 kHz audio
streams.
    This change updates to m137 from m114, which required a few API changes.

    Additionally, this fixes the hiss that happens shortly after someone unmutes: secondlife/server#2094

    There was also an issue with a slight amount of repeated audio after unmuting if there was audio right before unmuting.  This is
    because the audio processing and buffering still had audio from the previous speaking session.  Now, we inject nearly a half second
    of silence into the audio buffers/processor after unmuting to flush things.
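
As a rough illustration of the silence flush described in the commit above (the sink function is hypothetical, not the viewer's code), this pushes close to half a second of zeroed 10 ms frames through the pipeline after unmute:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical sketch: feed ~480 ms of silence through the audio pipeline
// right after unmuting so stale buffered audio from the previous speaking
// session is flushed before live microphone data resumes.
void flushWithSilence(int sampleRateHz, int channels,
                      const std::function<void(const int16_t*, std::size_t)>& processFrame)
{
    const int frameMs = 10;  // WebRTC-style 10 ms frames
    const std::size_t samplesPerFrame =
        static_cast<std::size_t>(sampleRateHz / 1000) * frameMs * channels;
    const std::vector<int16_t> silence(samplesPerFrame, 0);

    const int flushMs = 480;  // "nearly a half second"
    for (int elapsed = 0; elapsed < flushMs; elapsed += frameMs)
    {
        processFrame(silence.data(), silence.size());
    }
}
```
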
m137 improved the AGC pipeline, and the existing analog style is going away,
so move to the new digital pipeline.

Also, some tweaking of audio levels so that we don't see in-world bars when tuning,
so one's own bars seem a reasonable size, etc.
… pile up

Also, mute when leaving webrtc-enabled regions or parcels,
and unmute when voice comes back.
@Geenz Geenz merged commit a6d4c1d into release/2025.07 Sep 13, 2025
12 checks passed
@Geenz Geenz deleted the roxie/fix-devices branch September 13, 2025 00:07
@github-actions github-actions bot locked and limited conversation to collaborators Sep 13, 2025