MaverickAl edited this page Apr 23, 2018 · 5 revisions

Overview

Audio in Usagi is fairly simple. Like most of the engine's subsystems it assumes very little in the way of hardware capabilities beyond the ability to stream, play back and dynamically alter pitch. This was our first time producing audio code ourselves; we were keen to do so because our previous title had not even updated 3D positional audio in real time, and we wanted much more dynamic audio this time around.

Positional audio updates, per-speaker scaling, the Doppler effect and so on are all handled by the Audio class. The advantage of this is that behaviour is consistent between platforms and will work in any scenario.

Positional Audio

The current audio code assumes stereo sound; however, during development it was tested with calculations for surround-sound panning. To increase the number of speakers you would need to define them in the SoundChannel enum in AudioDefs.h and then set up wrapping speaker info in Audio.cpp (to allow the calculations to work for any number of speakers, the rear-most left and right speakers are defined twice).

Sound banks

Sound files are not loaded individually; entire banks are loaded and unloaded at once. These banks are described by a yml file in the project, though it is expected that you will edit them with the Usagi Audio Tool.

The yml files you create with the audio tool will be turned into VPB files during the build process.

In code you should load a sound bank as follows:

usg::Audio::Inst()->LoadSoundArchive("VPB/Audio/**BANK NAME**.vpb");  

You can load and unload multiple banks so long as they don't contain effects with the same name, as that would result in a clash. This can be useful for avoiding loading audio files you know you won't need in a given mode. To unload a bank:

usg::Audio::Inst()->UnloadArchive("VPB/Audio/**BANK NAME**.vpb");  

A .proto file will also be created in Project Name/audio_gen; the build processes it and generates a corresponding .pb.h file containing an enum with the CRC values of the effect IDs. Effects can be referenced in code or in yml either via this enum or by constructing a CRC32 of the name.
The generated enum name will be SoundBankNameAudio

Components

There are three primary audio components:

  • SoundActorComponent - Tracks an entity's position and velocity; should be added to any movable sound-emitting entity.
  • SoundComponent - Holds a handle to the last spawned sound; the presence of this component allows an entity to react to sound events.
  • AudioListenerComponent - A listener should be attached to wherever the player is notionally located in virtual space. It tracks that entity's position and velocity for use when calculating positional audio.

Playing sounds

The audio tool will generate a .proto file to match your new VPB file in ProjectName/audio_gen, and each proto file in turn results in a corresponding .pb.h file which you can include from C++. The enum values in this file are the signed CRC32 values of the effects in your sound bank and can be used to request that an effect be played.

To play from C++ you can either create a one-shot isolated sound using the Audio singleton's play functions (e.g. Audio::Inst()->Play3DSound(pos, crcValue, volume)), or you can use the component system.

To play a sound with the component system your entity will need a SoundComponent; you can then send the PlaySound and StopSound events to your entity (the SoundComponent will track the sound handle in its runtime data).

PlaySound sound = { LASER_FIRE, true };  
inputs.eventManager->handle->RegisterEventWithEntity(**ENTITY**, sound);  

If you also have a SoundActorComponent attached to that entity the sound will automatically move with that entity.

Remember that initializer events let you fire off events when spawning. It's common to want to play a sound when an actor is spawned; this can be done by adding the following lines to the end of an entity file:

InitializerEvents:
  - PlaySound:  
      uAudioID: <%= Zlib::crc32('LASER_FIRE') %>  
      bPositional: true    

Music

Music is controlled through the MusicManager singleton. To conform to the design pattern, however (and to avoid problems if the singleton is refactored out, as is the aim with most of the singletons), you should add a MusicComponent to your root entity and use the PlayMusic, PauseMusic, RestartMusic and StopMusic events.