
Commit

slightly less latency
bjerrep committed Oct 30, 2020
1 parent 29b83e0 commit 8f54088
Showing 23 changed files with 658 additions and 283 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -1,6 +1,8 @@
*/__pycache__/*
*.idea*
*.pyc
remote_server_and_clients.txt

src/venv

doc
2 changes: 1 addition & 1 deletion README.md
@@ -11,7 +11,7 @@ The initial reason for the Ludit project was to see if it was possible to go wir
The expectation is that nobody else will actually construct an audio system like this. It requires a lot of work to get going, both with hardware and software, and there is no guarantee that it will be possible to reproduce a system like this in good working order.

## Hard time synchronization
The 2 wireless raspberry pi computers are located in 2 separate right and left speakers in a stereo setup. Both rpi's are hardware modified and have their normal 19.2 MHz X1 xtal replaced with a VCTCXO, a voltage controlled crystal oscillator. These VCTCXO's are controlled with the [twitse](https://github.com/bjerrep/twitse) client/server software. With this running both wireless rpi's are synchronized to each other typically within some +/- 20-30 microseconds. And synchronized means exactly that since the processors, buses and what not on the rpi's are now running at the same speed and are continuously tracking each other. This is a non-compromise solution to the problem of crystal drift over time between to separate computers if they should be in sync down to the actual hardware. As a consequence this audio player therefore have no concept of sample skipping, package dropping, re-sampling or anything that would be needed for correcting drift between speakers. Which in turn is why this player is useless to most for playing wireless stereo.
The 2 wireless raspberry pi computers are located in 2 separate right and left speakers in a stereo setup. Both rpi's are hardware modified and have their normal 19.2 MHz X1 xtal replaced with a VCTCXO, a voltage controlled crystal oscillator. These VCTCXO's are controlled with the [twitse](https://github.com/bjerrep/twitse) client/server software. With this running both wireless rpi's are synchronized to each other typically within some +/- 20-30 microseconds. And synchronized means exactly that, since the processors, buses and what not on the rpi's are now running at the same speed and are continuously tracking each other. This is a non-compromise solution to the problem of crystal drift over time between two separate computers if they should be in sync down to the actual hardware. As a consequence this audio player has no concept of sample skipping, packet dropping, re-sampling or anything else that would be needed for correcting drift between speakers, which in turn is why this player is useless to most people for playing wireless stereo.

## Bluetooth A2DP
Ludit is intended to be invisible and out of the way for normal users. It can currently play one thing only, bluetooth A2DP, most likely driven by e.g. Spotify on a mobile. (It does a lot of buffering and cannot be used for realtime audio). The server hosts a bluetooth dongle, and a [fork](https://github.com/bjerrep/bluez-alsa) of [BlueALSA](https://github.com/Arkq/bluez-alsa) delivers encoded audio (sbc/aac) to the ludit server.
17 changes: 14 additions & 3 deletions doc/index.rst
@@ -14,6 +14,7 @@ Ludit
server_audio_source_setup
audio_processing
rpi_setup
client_audio_debugging

Introduction
============
@@ -27,9 +28,19 @@ Ludit is intended to get the best out of size constrained loudspeakers in everyda
The following image shows one of two speakers made for the kitchen. It is very much in Ludit's DNA to play on two-way systems where an electronic crossover can use the right and left channels on a soundcard for tweeter and woofer. The carrier board underneath the raspberry pi is part of the twitse project, although it sports a PCM5102A audio dac.

.. image:: ../artwork/kitchen_speaker.jpg
:alt: kitchen_speaker.jpg
:width: 300px

:alt: kitchen_speaker.jpg
:width: 300px


Links
===================

The Raspberry Pi: Audio out through I2S. An analysis of the native I2S output from a raspberry pi, which happens to be rather jittery as it struggles to produce a 44.1 kHz samplerate. Be aware of this when using synchronous DACs:

http://www.dimdim.gr/2014/12/the-rasberry-pi-audio-out-through-i2s/






2 changes: 1 addition & 1 deletion doc/quick_start.rst
@@ -42,7 +42,7 @@ Audio

In the third and last terminal launch the following gstreamer pipeline. Note that the volume is turned way down to prevent audio shock. Increase it to actually hear anything::

gst-launch-1.0 audiotestsrc wave=pink-noise volume=0.01 is-live=true ! audioconvert ! audio/x-raw, channels=2 ! faac ! aacparse ! avmux_adts ! tcpclientsink host=<hostname or ip> port=4666
gst-launch-1.0 audiotestsrc wave=pink-noise volume=0.01 is-live=true ! audioconvert ! audio/x-raw, channels=2 ! faac ! aacparse ! avmux_adts ! tcpclientsink host=<hostname or ip> port=4665

On the PC audio output the woofer signal will be in one channel and the tweeter signal in the other. It will sound horrible. The audiotestsrc source can be replaced with a gstreamer source playing a local file or streaming web radio if the noise gets too much.
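
A sketch only, in case a local file is preferred over the test noise: the pipeline below swaps the audiotestsrc for a filesrc followed by decodebin. The file path is just a placeholder, and the volume element is an added standard gstreamer element used here to keep the level down::

    gst-launch-1.0 filesrc location=/tmp/example.mp3 ! decodebin ! audioconvert ! volume volume=0.1 ! audio/x-raw, channels=2 ! faac ! aacparse ! avmux_adts ! tcpclientsink host=<hostname or ip> port=4665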

21 changes: 20 additions & 1 deletion doc/server_audio_source_setup.rst
@@ -87,7 +87,26 @@ For reference see https://github.com/mopidy/mopidy/pull/1712/commits/6e9ed9e8a9d
Audio source : gstreamer
*************************

An example of a gstreamer audio source can be found in :ref:`quick_start`.
There can be any number of gstreamer inputs configured in the server configuration file. The "gstreamer" value is a list of inputs to be initialized, and it is located under "sources". The only constraint is that all enabled gstreamer inputs need to have a unique port assigned.

An example of a single PCM input listening on port 4777::

"gstreamer": [
{
"enabled": "true",
"format": "pcm",
"port": "4777",
"samplerate": "48000"
}
],

From any LAN computer it is now possible to test if the above works with the following gstreamer pipeline::

gst-launch-1.0 audiotestsrc volume=0.01 ! audioconvert ! audio/x-raw, channels=2 ! \
audioresample ! audio/x-raw, format=S16LE, rate=48000 ! tcpclientsink host=<server IP> port=4777
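
As a sketch of the "any number of inputs" point above, a configuration with two enabled PCM inputs could look as follows. The second port (4778) and its samplerate are arbitrary example values; the only real requirement is that every enabled input gets its own port::

    "gstreamer": [
        {
            "enabled": "true",
            "format": "pcm",
            "port": "4777",
            "samplerate": "48000"
        },
        {
            "enabled": "true",
            "format": "pcm",
            "port": "4778",
            "samplerate": "44100"
        }
    ],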


The :ref:`quick_start` also uses a gstreamer input for testing.

Audio source : alsa
*******************
34 changes: 23 additions & 11 deletions src/client/client.py
@@ -30,6 +30,7 @@ def __init__(self, configuration):
self.hwctrl = hwctl.HwCtl()
self.player = player.Player(configuration, self.hwctrl)
self.player.connect('status', self.slot_player_status)
self.player.connect('message', self.slot_player_message)

self.socket = None
self.server_offline_counter = 10
@@ -55,7 +56,7 @@ def terminate(self):
self.socket.terminate()
self.socket.join()
self.player.terminate()
if self.player.isAlive():
if self.player.is_alive():
self.player.join()
self.hwctrl.terminate()
super().terminate()
@@ -72,26 +73,33 @@ def multicast_rx(self, message):
log.critical("server refused the connection (check the group and device name)")
time.sleep(1)
util.die('exiting..', 1, True)
log.info('server found, connecting to %s' % endpoint)
log.info(f'server found, connecting to {endpoint}')
self.server_endpoint = util.split_tcp(endpoint)
self.start_socket()

def slot_player_status(self, message):
if message in ('buffered', 'starved'):
def slot_player_status(self, status):
if status in ('buffered', 'starved'):
if self.socket:
self.socket.send({'command': 'status',
'clientname': self.devicename,
'state': message})
elif message in ('rt_stop', 'rt_play'):
log.info(f'realtime status: {message}')
'state': status})
elif status in ('rt_stop', 'rt_play'):
log.info(f'realtime status: {status}')
else:
log.error(f'got unknown message {str(message)}')
log.error(f'got unknown status {str(status)}')

def slot_player_message(self, message):
if self.socket:
header = {'command': 'message',
'clientname': self.devicename}
header.update(message)
self.socket.send(header)

def slot_new_message(self, message):
try:
self.player.process_server_message(message)
except Exception as e:
log.debug(traceback.format_exc())
log.critical(traceback.format_exc())
log.critical('client slot_new_message "%s" caught "%s"' % (str(message), str(e)))

def slot_socket(self, state):
@@ -210,6 +218,8 @@ def load_configuration(configuration_file):
log.info('loaded configuration %s' % configuration_file)
if version != util.CONFIG_VERSION:
util.die('expected configuration version %s but found version %s' % (util.CONFIG_VERSION, version))
except json.JSONDecodeError as e:
util.die(f'got fatal error loading configuration file "{e}"')
except Exception:
log.warning('no configuration file specified (--cfg), using defaults')
configuration = generate_config()
@@ -247,16 +257,18 @@ def start():
if args.verbose:
log.setLevel(logging.DEBUG)

try:
if args.cfg:
configuration = load_configuration(args.cfg)
except:
else:
log.info('no configuration file, using defaults')
configuration = generate_config(template=False)

try:
groupname, devicename = args.id.split(':')
configuration['group'] = groupname
configuration['device'] = devicename
except:
log.debug(configuration)
if not configuration.get('group'):
raise Exception('need a group:device name from --id argument or configuration file')

