Commit: Run spellcheck

HEnquist committed Jun 4, 2021
1 parent 399f88b commit f9646ae
Showing 6 changed files with 22 additions and 19 deletions.
FAQ.md: 2 changes (1 addition & 1 deletion)
@@ -58,7 +58,7 @@
- When do I need to use an asynchronous resampler?

The asynchronous resampler must be used when the ratio between the input and output sample rates cannot be expressed as a fixed ratio.
-This is only the case when resampling to adaptively match the rate of two devices with independant clocks, where the ratio drifts a little all the time.
+This is only the case when resampling to adaptively match the rate of two devices with independent clocks, where the ratio drifts a little all the time.
Note that resampling between the fixed rates 44.1 kHz and 48 kHz corresponds to a ratio of 160/147, and can be handled by the synchronous resampler.
This works for any fixed resampling between the standard rates: 44.1 <-> 96 kHz, 88.2 <-> 192 kHz, 88.2 <-> 48 kHz, etc.
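As a worked example: 48000/44100 reduces by the greatest common divisor 300 to 160/147, so the synchronous resampler can stay exactly in sync by consuming 147 input samples for every 160 output samples it produces.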

README.md: 24 changes (12 additions & 12 deletions)
@@ -79,7 +79,7 @@ This chart shows the most important parts:
![Overview](overview.png)

### Capture
-The capture thread reads a chunk samples from the audio device in the selected format. It then converts the samples to 64-bit floats (or optionally 32-bit). If resampling is enabled, the audio data is sent to the resampler. At the end, the chunk of samples is packed as a message that is then posted to the input queue of the processing thread. After this the capture thread returns to reading he next shunk of samples from the device.
+The capture thread reads a chunk of samples from the audio device in the selected format. It then converts the samples to 64-bit floats (or optionally 32-bit). If resampling is enabled, the audio data is sent to the resampler. At the end, the chunk of samples is packed as a message that is then posted to the input queue of the processing thread. After this the capture thread returns to reading the next chunk of samples from the device.

### Processing
The processing thread waits for audio chunk messages to arrive in the input queue. Once a message arrives, it's passed through all the defined filters and mixers of the pipeline. Once all processing is done, the audio data is posted to the input queue of the playback device.
@@ -91,7 +91,7 @@ The playback thread simply waits for audio messages to appear in the queue. Once
The supervisor monitors all threads by listening to their status messages. Requests for capture rate adjustment are passed on to the capture thread. The supervisor is also responsible for updating the configuration when requested to do so via the websocket server or a SIGHUP signal.

### Websocket server
-The websocket server lauches a separate thread to handle each connected client. All commands to change the config are send to the supervisor thread.
+The websocket server launches a separate thread to handle each connected client. All commands to change the config are sent to the supervisor thread.

## System requirements
CamillaDSP runs on Linux, macOS and Windows. The exact system requirements are determined by the amount of processing the application requires, but even relatively weak CPUs like Intel Atom have much more processing power than most will need.
@@ -107,11 +107,11 @@ A few examples, done with CamillaDSP v0.5.0:
CPU usage just under 100%.

### Linux
-Both 64 and 32 bit architechtures are supported. All platforms supported by the Rustc compiler should work.
+Both 64- and 32-bit architectures are supported. All platforms supported by the Rustc compiler should work.

Pre-built binaries are provided for:
- x86_64 (almost all PCs)
-- armv7 (32-bit arm, for example a Raspeberry Pi 2,3,4 with a 32-bit OS)
+- armv7 (32-bit arm, for example a Raspberry Pi 2,3,4 with a 32-bit OS)
- aarch64 (64-bit arm, for example Raspberry Pis running a 64-bit OS)

### Windows
@@ -131,7 +131,7 @@ A crossover must filter all sound being played on the system. This is possible w
See the [tutorial](./stepbystep.md) for a step-by-step guide.

## Dependencies
-These are the key dependencies for CamillDSP.
+These are the key dependencies for CamillaDSP.
* https://crates.io/crates/alsa - Alsa audio backend
* https://crates.io/crates/clap - Command line argument parsing
* https://crates.io/crates/cpal - Jack and CoreAudio audio backends
@@ -175,7 +175,7 @@ The following configurations are provided:

All builds include the Websocket server.

-The `.tar.gz`-files can be uncompressed with the `tar` comand:
+The `.tar.gz` files can be unpacked with the `tar` command:

```sh
tar -xvf camilladsp-linux-amd64.tar.gz
@@ -451,7 +451,7 @@ In most cases exclusive mode is preferred since it gives a direct connection wit

### Exclusive mode

-In exclusice mode CamillaDSP is able to control the sample rate of the devices. The settings for "Default format" in the Windows control panel isn't used. The sample format must be one that the device driver can accept. This usually matches the hardware capabilities of the device. For example a 24-bit USB device is likely to accept `S16LE` and `S24LE3`. Other formats may be supported depending on driver support.
+In exclusive mode CamillaDSP is able to control the sample rate of the devices. The "Default format" setting in the Windows control panel isn't used. The sample format must be one that the device driver can accept. This usually matches the hardware capabilities of the device. For example, a 24-bit USB device is likely to accept `S16LE` and `S24LE3`. Other formats may be supported depending on driver support.

### Shared mode

@@ -523,7 +523,7 @@ devices:
- 88.2 or 96 kHz: 2048
- 176.4 or 192 kHz: 4096

-The duration in seconds of a chunk is `chunksize/samplerate`, so the suggested values corresponds to about 22 ms per chunk. This is a resonable value, and making it shorter can increase the cpu usage and make buffer underruns more likely.
+The duration in seconds of a chunk is `chunksize/samplerate`, so the suggested values correspond to about 22 ms per chunk. This is a reasonable value, and making it shorter can increase the CPU usage and make buffer underruns more likely.

If you have long FIR filters you can reduce CPU usage by making the chunksize larger.
When increasing, try increasing in factors of two, like 1024 -> 2048 or 4096 -> 8192.
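As a quick sanity check of the arithmetic: 1024 samples at 48 kHz last 1024/48000 ≈ 21.3 ms, and doubling the chunksize to 2048 doubles that to roughly 43 ms.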
@@ -581,7 +581,7 @@ devices:

Set this to `true` to enable resampling of the input signal.
In addition to resampling the input to a different sample rate,
-this can be useful for rate-matching capture and playback devices with independant clocks.
+this can be useful for rate-matching capture and playback devices with independent clocks.

* `resampler_type` (optional, defaults to "BalancedAsync")

@@ -804,7 +804,7 @@ mixers:
```

### Skip processing of unused channels
-Some audio interfaces bundle all their inputs togehter, meaning that it might be necessary to capture a large number of channels to get access to a particular input.
+Some audio interfaces bundle all their inputs together, meaning that it might be necessary to capture a large number of channels to get access to a particular input.
To reduce the CPU load, CamillaDSP will try to avoid processing of any channel that is captured but not used in the pipeline.

Let's say we have an interface with one analog input, and one SPDIF. These are presented as a single 4-channel input where channels 0 and 1 are analog, 2 and 3 SPDIF. Setting the number of capture channels to 4 then enables both inputs. If we are only interested in the SPDIF input, we add a mixer that reduces the number of channels to 2, as sketched below. In this mixer, input channels 0 and 1 are not mapped to anything. This is detected, and no format conversion, resampling or processing will be done on these two channels.
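A sketch of such a mixer could look like this (the mixer name is made up; the full mixer syntax is described in the mixer section above). Channels 0 and 1 are simply never listed as sources, so they get skipped:
```
mixers:
  spdifselect:
    channels:
      in: 4
      out: 2
    mapping:
      - dest: 0
        sources:
          - channel: 2
            gain: 0
            inverted: false
      - dest: 1
        sources:
          - channel: 3
            gain: 0
            inverted: false
```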
@@ -828,7 +828,7 @@ filters:
```

### Volume
-The Volume filter is intended to be used as a volume control. The inital volume and muting state can be set with the `gain` and `mute` command line parameters. The volume can then be changed via the websocket. A request to set the volume will be applied to all Volume filters. When the volume or mute state is changed, the gain is ramped smoothly to the new value. The duration of this ramp is set by the `ramp_time` parameter (unit milliseconds). The value must not be negative. If left out, it defaults to 200 ms. The value will be rounded to the nearest number of chunks. To use this filter, insert a Volume filter somewhere in the pipeline for each channel. It's possible to use this to make a dithered volume control by placing the Volume filter somewhere in the pipeline, and having a Dither filter as the last step.
+The Volume filter is intended to be used as a volume control. The initial volume and muting state can be set with the `gain` and `mute` command line parameters. The volume can then be changed via the websocket. A request to set the volume will be applied to all Volume filters. When the volume or mute state is changed, the gain is ramped smoothly to the new value. The duration of this ramp is set by the `ramp_time` parameter (unit milliseconds). The value must not be negative; if left out, it defaults to 200 ms, and it is rounded to the nearest whole number of chunks. To use this filter, insert a Volume filter somewhere in the pipeline for each channel. A dithered volume control can be made by also placing a Dither filter as the last step of the pipeline.

Example Volume filter:
```
filters:
  volume:
    type: Volume
    parameters:
      ramp_time: 200  # milliseconds; sketch reconstructed from the description above
```
@@ -927,7 +927,7 @@ For testing purposes the entire "parameters" block can be left out (or commented

#### Coefficients from Wav-file

-Supplying the coefficients as `.wav` file is the most conveient method.
+Supplying the coefficients as a `.wav` file is the most convenient method.
The `Wav` type takes only one parameter `channel`. This is used to select which channel of a multi-channel file to load. For a standard stereo file, the left track is channel 0, and the right is channel 1. This parameter is optional and defaults to 0 if left out.
The sample rate of the file is ignored.
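As a minimal sketch (the filter name and file path are placeholders), a convolution filter loading its coefficients from the left channel of a stereo wav could look like:
```
filters:
  fir_left:
    type: Conv
    parameters:
      type: Wav
      filename: path/to/coeffs.wav
      channel: 0
```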

devices.md: 9 changes (6 additions & 3 deletions)
@@ -80,9 +80,12 @@ Just download the binary and run it in a terminal. It will list all devices with
## WASAPI (Windows)
The device name is the same as seen in the Windows volume control. For example, the VB-CABLE device name is "CABLE Output (VB-Audio Virtual Cable)". The device name is built from the input/output name and card name, and the format is "{input/output name} ({card name})".

-The sample format is always 32-bit float (FLOAT32LE) even if the device is configured to use another format.
-
-The sample rate must match the default format of the device. To change this, open "Sound" in the Control panel, select the sound card, and click "Properties". Then open the "Advanced" tab and select the desired format under "Default Format".
+Wasapi offers both shared and exclusive mode.
+- In shared mode, the sample format is always 32-bit float (FLOAT32LE).
+  The sample rate must match the default format of the device.
+  To change this, open "Sound" in the Control panel, select the sound card, and click "Properties".
+  Then open the "Advanced" tab and select the desired format under "Default Format".
+- In exclusive mode, the sample format and sample rate can be any combination supported natively by the device.
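As a rough sketch, a Wasapi capture device entry for the VB-CABLE example above might look like this (the `exclusive` flag name is an assumption here; see the main README for the exact set of options):
```
devices:
  capture:
    type: Wasapi
    channels: 2
    device: "CABLE Output (VB-Audio Virtual Cable)"
    format: FLOAT32LE
    exclusive: false  # assumed flag: true for exclusive mode, false for shared
```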

To help with finding the name of playback and capture devices, use the Windows version of the "cpal-listdevices" program from here: https://github.com/HEnquist/cpal-listdevices/releases

stepbystep.md: 2 changes (1 addition & 1 deletion)
@@ -101,7 +101,7 @@ Put everything together, and run it. It should work and give unfiltered output o


## Filters
-The poor tweeters don't like the full range signel so we need lowpass filters for them. Left and right should be filtered with the same settings, so a single definition is enough.
+The poor tweeters don't like the full range signal so we need highpass filters for them. Left and right should be filtered with the same settings, so a single definition is enough.
Let's use a simple 2nd order Butterworth at 2 kHz and name it "highpass2k". Create a "filters" section like this:
```yaml
filters:
  highpass2k:  # sketch reconstructed from the description above
    type: BiquadCombo
    parameters:
      type: ButterworthHighpass
      freq: 2000
      order: 2
```
troubleshooting.md: 2 changes (1 addition & 1 deletion)
@@ -69,7 +69,7 @@
- Unstable filter specified

This means that a Biquad filter definition was found to give an unstable filter,
-meaning that the output signal can grow uncontrolled. Check that the coeffients were entered correctly.
+meaning that the output signal can grow uncontrollably. Check that the coefficients were entered correctly.
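As a rule of thumb from standard biquad theory (not specific to CamillaDSP): with a denominator of 1 + a1*z^-1 + a2*z^-2, the filter is stable only if |a2| < 1 and |a1| < 1 + a2, so coefficients outside that range point to a typo or a sign error.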

- Negative delay specified

websocket.md: 2 changes (1 addition & 1 deletion)
@@ -204,7 +204,7 @@ This is enabled by providing the two optional parameters "cert" and "pass", wher
How to properly generate the identity is outside the scope of this readme, but for simple tests a self-signed certificate can be used.

### Generate self-signed identity
-First geneate rsa keys:
+First generate an RSA key and a self-signed certificate:
```sh
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
```
