Exposing the output device stream directly. #30
Comments
For the moment the user can retrieve the format that is expected by the backend with the methods on the … If you call … Is that a bad system?
No, I think that code is great! My only issue with this is that it's not immediately obvious to the reader of the client code that using a custom format will be more expensive than the device's format. If we add an explicit method for this, the cost becomes visible at the call site. So

```rust
let mut buffer = voice.append_data(channels, sample_rate, max_elements);
```

could become

```rust
let mut buffer = voice.append_data().custom(channels, sample_rate, max_elements);
```

?
What about …?
Yeah that sounds good 👍
No longer relevant with the new design. |
At the moment, the way CPAL interfaces with the audio stream is via `.append_data`. `.append_data` takes channels, sample rate and maximum buffer size as arguments while allowing the user to use any sample format they wish. This is a useful, high-level approach: the user doesn't have to worry about the underlying stream format if they don't want to, and the stream format can change dynamically.

This can, however, require a conversion between the given stream format and the output device's current stream format every time a buffer is requested (whenever any part of the two formats differs). From a user's perspective it is not immediately obvious that this conversion takes place, or that it is even necessary.
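To make the hidden cost concrete, here is a self-contained sketch (not CPAL's actual code; the function names are hypothetical) modelling what must happen behind the scenes when the user writes `f32` samples while the device runs in, say, `i16`: every buffer is converted before it reaches the device.

```rust
// Hypothetical illustration of the conversion implied by a mismatched
// stream format. This is NOT CPAL code; it only models the per-buffer cost.

/// Convert user-supplied f32 samples to an assumed native device format
/// (i16 here). In the dynamic-format API, work like this runs for every
/// buffer whose format differs from the device's.
fn convert_to_device_format(samples: &[f32]) -> Vec<i16> {
    samples
        .iter()
        .map(|&s| (s.clamp(-1.0, 1.0) * i16::MAX as f32) as i16)
        .collect()
}

fn main() {
    // Samples the user writes in their requested format (f32).
    let user_buffer = [0.0_f32, 0.5, -0.5, 1.0];
    let converted = convert_to_device_format(&user_buffer);
    println!("{:?}", converted); // device-native i16 samples
}
```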
CPAL does not currently offer direct access to the raw device stream. Before exposing the sort of dynamic interface that is currently implemented, it could be a good idea to first provide the raw audio device stream itself. As CPAL aims to be a cross-platform audio library, it could be beneficial to first provide the direct stream for users who want the lowest-level access possible, and then build the dynamic abstraction on top of it.
I think there are a couple of ways we could do this; the following is the most satisfying I could think of:
We could change `append_data` to provide direct access to the device's stream.
The dynamic stream format style that is currently in use could then be implemented on top of this:
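For instance, as a rough self-contained sketch (all names and types here are illustrative stand-ins, not CPAL's actual API): a thin wrapper could own the direct stream and perform format conversion only at the dynamic layer, so users of the direct stream pay nothing.

```rust
// Hypothetical sketch of layering the dynamic-format API on top of a
// direct device stream. Names and types are illustrative, not CPAL's.

#[derive(Clone, Copy, PartialEq, Debug)]
struct Format {
    channels: u16,
    sample_rate: u32,
}

/// Stand-in for the raw device stream exposed directly to low-level users.
struct DirectStream {
    native: Format,
    samples: Vec<i16>,
}

impl DirectStream {
    /// Append samples already in the device's native format: no conversion.
    fn append_native(&mut self, data: &[i16]) {
        self.samples.extend_from_slice(data);
    }
}

/// Dynamic-format layer built on top of the direct stream.
struct DynamicVoice {
    stream: DirectStream,
}

impl DynamicVoice {
    /// Accept f32 samples in an arbitrary format and convert them to the
    /// device's native format before forwarding to the direct stream.
    /// (Real resampling and channel mapping are elided; only the
    /// sample-format conversion is modelled here.)
    fn append_data(&mut self, _requested: Format, data: &[f32]) {
        let converted: Vec<i16> = data
            .iter()
            .map(|&s| (s.clamp(-1.0, 1.0) * i16::MAX as f32) as i16)
            .collect();
        self.stream.append_native(&converted);
    }
}

fn main() {
    let mut voice = DynamicVoice {
        stream: DirectStream {
            native: Format { channels: 2, sample_rate: 44_100 },
            samples: Vec::new(),
        },
    };
    voice.append_data(Format { channels: 2, sample_rate: 44_100 }, &[0.25, -0.25]);
    println!("{:?}", voice.stream.samples);
}
```

The design point is that the conversion cost lives entirely in `DynamicVoice`; anyone writing through `DirectStream::append_native` bypasses it.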
@tomaka what are your thoughts? I'd be happy to implement this.