Update docs with Airplay2 feature
maggie44 committed Jan 25, 2023
1 parent 2158f38 commit 286480d
Showing 6 changed files with 18 additions and 16 deletions.
18 changes: 10 additions & 8 deletions ARCHITECTURE.md
@@ -7,23 +7,23 @@ Community contributions have been a staple of this open source project since its
![](https://raw.githubusercontent.com/balena-labs-projects/balena-sound/master/docs/images/arch-overview.png)

balenaSound services can be divided into three groups:

- Sound core: `sound-supervisor` and `audio`.
- Multiroom: `multiroom-server` and `multiroom-client`.
- Plugins: `spotify`, `airplay2`, `bluetooth`, `upnp`, etc.

### Sound core

This is the heart of balenaSound as it contains the most important services: `sound-supervisor` and `audio`.

**audio**
The `audio` block is a [balena block](https://www.balena.io/blog/introducing-balenablocks-jumpstart-your-iot-app-development/) that provides an easy way to work with audio in containerized environments such as balenaOS. You can read more about it [here](https://github.com/balenablocks/audio). In a nutshell, the `audio` block is the main "audio router". It connects to all audio sources and sinks and handles audio routing, which changes depending on the mode of operation (multi-room vs standalone), the output interface selected (onboard audio, HDMI, DAC, USB soundcard), etc. The `audio` block allows you to build complex audio apps such as balenaSound without having to dive deep into ALSA or PulseAudio configuration. One of its key features for balenaSound is that it lets us define input and output audio layers and then perform all the complex audio routing without knowing or caring about where the audio is generated or where it should go. The `audio routing` section below covers this process in detail.


**sound-supervisor**
The `sound-supervisor`, as its name suggests, is the service that orchestrates all the others. It's not directly involved in audio routing, but it does a few key things that let the other services stay simple. Here are some of the most important features of the `sound-supervisor`:

- **Multi-room events**: through the use of the [cotejs](https://github.com/dashersw/cote) library and interfacing with the `audio` block, the `sound-supervisor` ensures that all devices on the same local network agree on which is the `master` device. To achieve this, `sound-supervisor` services on different devices exchange event messages constantly.
- **API**: exposes a REST API on port 80. The API gives other services access to the current balenaSound configuration, which lets us update the configuration dynamically and have services react accordingly. As a general rule of thumb, if we want a service's configuration to be dynamically updatable, that service should rely on the configuration reported by `sound-supervisor` and not on environment variables. At the moment all of the services support this behavior, but their configuration is mostly static: you set it at startup via environment variables and that's it. However, there are _experimental_ endpoints in the API for updating configuration values, and all of the services already support them. There's even a _secret_ UI that allows some configuration changes at runtime; it's located at `http://<DEVICE_IP>`.

### Multi-room

@@ -50,27 +50,29 @@ Audio routing is the most crucial part of balenaSound, and it also changes signi

### Input and output layers

One of the advantages of using the `audio` block is that, since it's based on PulseAudio, we can use all the widely available audio processing tools and tricks; in this particular case, `virtual sinks`. PulseAudio clients can send audio to sinks; usually soundcards have a sink that represents them, so sending audio to the audio jack sink results in that audio coming out of the audio jack. Virtual sinks are virtual nodes that can be used to route audio in and out of them.

For balenaSound we use two virtual sinks in order to simplify how audio is being routed:

- `balena-sound.input`
- `balena-sound.output`

Creation and configuration scripts for these virtual sinks are located at `core/audio/balena-sound.pa` and `core/audio/start.sh`.
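Conceptually, the creation step boils down to loading two PulseAudio null sinks. The following is an illustrative sketch only, not the actual contents of `core/audio/balena-sound.pa` (sink properties and descriptions here are assumptions):

```
# Hedged sketch: create the two virtual (null) sinks used as the
# input and output layers. Illustrative only; see
# core/audio/balena-sound.pa for the real configuration.
load-module module-null-sink sink_name=balena-sound.input sink_properties="device.description='balenaSound input'"
load-module module-null-sink sink_name=balena-sound.output sink_properties="device.description='balenaSound output'"
```

Each null sink also exposes a monitor source (e.g. `balena-sound.input.monitor`), which is what makes it possible to read back whatever audio was sent into the sink and route it onward.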

**balena-sound.input**
`balena-sound.input` acts as an input audio multiplexer/mixer. It's the default sink on balenaSound, so all plugins that send audio to the `audio` block will send it to this sink by default. This allows us to route audio internally without worrying where it came from: any audio generated by a plugin passes through the `balena-sound.input` sink, so by controlling where it sends its audio we are effectively controlling all plugins at the same time.

**balena-sound.output**
`balena-sound.output` on the other hand is the output audio multiplexer/mixer. This one is pretty useful in scenarios where there are multiple soundcards available (onboard, DAC, USB, etc). `balena-sound.output` is always wired to whatever the desired soundcard sink is. So even if we dynamically change the output selection, sending audio to `balena-sound.output` will always result in audio going to the current selection. Again, this is useful to route audio internally without worrying about user selection at runtime.

### Standalone

![](https://raw.githubusercontent.com/balena-labs-projects/balena-sound/master/docs/images/arch-standalone.png)

Standalone mode is easy to understand: you just pipe `balena-sound.input` to `balena-sound.output` and that's it. Audio coming in from any plugin will find its way to the selected output. If this were the only mode, we could simplify the setup and use a single sink. Having the two layers, however, is important for the more complicated multiroom mode.
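In PulseAudio terms, that piping can be sketched as a loopback from the input sink's monitor source into the output sink. This is an illustrative sketch of the idea, not the actual routing code (the real wiring is handled by the `audio` block):

```
# Hedged sketch of standalone-mode routing: everything that lands in
# balena-sound.input is looped straight into balena-sound.output.
# Illustrative only.
load-module module-loopback source=balena-sound.input.monitor sink=balena-sound.output
```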


### Multiroom

![](https://raw.githubusercontent.com/balena-labs-projects/balena-sound/master/docs/images/arch-multiroom.png)

The multiroom feature relies on `snapcast` to broadcast the audio to multiple devices. Snapcast has two binaries that work together: a server and a client.
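The shape of a Snapcast deployment is: the server reads a PCM stream from a source (commonly a FIFO pipe) and streams it, time-synchronized, to every connected client. A hedged sketch of what a server source definition can look like (the stream name and sample format here are assumptions, not balenaSound's actual configuration):

```
# Hedged sketch of a snapserver source configuration: read raw PCM
# from a named pipe and broadcast it to all snapclients in sync.
# Illustrative only.
[stream]
source = pipe:///tmp/snapfifo?name=balenaSound&sampleformat=48000:16:2
```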
4 changes: 2 additions & 2 deletions README.md
@@ -1,10 +1,10 @@
![logo](https://raw.githubusercontent.com/balena-io-projects/balena-sound/master/docs/images/balenaSound-logo.png)

**Starter project enabling you to add multi-room audio streaming via Bluetooth, Airplay2, Spotify Connect and others to any old speakers or Hi-Fi using just a Raspberry Pi.**

## Highlights

- **Audio source plugins**: Stream audio from your favourite music services: Bluetooth, Airplay2, Spotify Connect, UPnP and more!
- **Multi-room synchronous playing**: Play perfectly synchronized audio on multiple devices all over your place.
- **Extended DAC support**: Upgrade your audio quality with one of our supported DACs

2 changes: 1 addition & 1 deletion balena.yml
@@ -2,7 +2,7 @@ name: balenaSound
type: sw.application
description: >-
Build a single or multi-room streamer for an existing audio device
using a Raspberry Pi! Supports Bluetooth, Airplay2 and Spotify Connect
fleetcta: Sounds good
post-provisioning: >-
## Usage instructions
4 changes: 2 additions & 2 deletions docs/01-getting-started.md
@@ -13,8 +13,8 @@ breadcrumbs: false
Getting started with balenaSound is as simple as deploying it to a [balenaCloud](https://balena.io/cloud) fleet; no additional configuration is required (unless you're using a DAC HAT).
We've outlined the installation steps below. If you want a step-by-step tutorial on how to get balenaSound up and running, feel free to check these blog posts:

- [Turn your old speakers or Hi-Fi into Bluetooth, Airplay2 and Spotify receivers with a Raspberry Pi and this step-by-step guide](https://www.balena.io/blog/turn-your-old-speakers-or-hi-fi-into-bluetooth-receivers-using-only-a-raspberry-pi/)
- [Build your own multi-room audio system with Bluetooth, Airplay2, and Spotify using Raspberry Pis](https://www.balena.io/blog/diy-raspberry-pi-multi-room-audio-system/)

## Hardware required

4 changes: 2 additions & 2 deletions docs/02-usage.md
Expand Up @@ -45,7 +45,7 @@ It's a good idea to use the most powerful device on your fleet as the designated

When a device is in multi-room client mode it can only be used as a multi-room `client`. The only audio the device will play is audio coming from a `master` device, so you'll need at least another device in your fleet.

This mode is great for performance-constrained devices, as plugin services (Spotify, AirPlay2, etc.) won't be running and consuming CPU cycles. It's also a great choice if you usually stream to the same `master` device and don't want every device to show up when pairing Bluetooth, for example.

### Standalone

@@ -60,7 +60,7 @@ balenaSound has been re-designed to easily allow integration with audio streamin
| Plugin | Library/Project |
| --------------- | --------------- |
| Spotify | [raspotify](https://github.com/dtcooper/raspotify/) Spotify Connect only works with Spotify Premium accounts. Zeroconf authentication via your phone/device Spotify client is supported, as well as providing user and password; see the [customization](customization#plugins) section for details. |
| AirPlay2 | [shairport-sync](https://github.com/mikebrady/shairport-sync/) |
| UPnP | [gmrender-resurrect](https://github.com/hzeller/gmrender-resurrect) |
| Bluetooth | balena [bluetooth](https://github.com/balenablocks/bluetooth/) and [audio](https://github.com/balenablocks/audio) blocks |
| Soundcard input | Experimental support through the balena [audio](https://github.com/balenablocks/audio) block. Check the [customization](customization#plugins) section to learn how to enable it. |
2 changes: 1 addition & 1 deletion docs/04-plugins.md
@@ -11,7 +11,7 @@ The following plugins ship with balenaSound out of the box:

- Spotify Connect
- Bluetooth
- AirPlay2
- Soundcard input (Requires setting `SOUND_ENABLE_SOUNDCARD_INPUT`, see [details](customization#plugins))

Default plugins can be disabled at runtime via variables. For more details see [here](customization#plugins).
