Direct `--player pipewire` output support for snapclient? #821
PipeWire is an audio server, just like PulseAudio. Under the hood both of them are using ALSA (on Linux), because ALSA provides an interface to the Linux kernel audio driver. If you aim for low latency, you should use the alsa backend.
If you use `--player alsa`, Snapclient plays directly on the ALSA device. This will yield lower latency, but on your desktop system with PA you will not hear any other audio, or Snapclient will complain that it cannot connect to the device if audio is already playing through PA (PA releases the ALSA device, just as Snapclient does when being idle). If you use `--player pulse`, Snapclient plays through the PulseAudio server, which mixes it with other audio streams and plays the result on the ALSA device. All this will also apply for PipeWire. So, on systems running PipeWire instead of PulseAudio, it would make sense to have native support for it. For headless clients, there is the strong recommendation to use ALSA. Edit: I would guess that PipeWire's PulseAudio server emulation should be close to PipeWire's native client library (performance and feature wise). |
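For concreteness, a minimal sketch of how the two existing backends are selected (the server address and sound card index below are placeholders, and exact option spellings can vary between Snapclient versions):

```
# Direct ALSA output: lowest latency, but grabs the device exclusively
snapclient --host 192.168.1.10 --player alsa --soundcard 1

# PulseAudio output: mixed with other desktop audio by the PA (or pipewire-pulse) server
snapclient --host 192.168.1.10 --player pulse
```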
Very nice explanation, thanks. I understand ALSA's exclusive access limitation (thus the reason for PA's existence). Just out of curiosity, is PipeWire's PA emulation similar to Pulse's ALSA? What I mean is if it also creates a "fake device" (in Pulse's case I believe that would be called an output sink?) that programs can talk to, like Pulse does with ALSA?
Or is it like this: if it's a re-implementation of the PulseAudio protocol, is there any point in making programs interface directly with PipeWire, using its protocol, instead of talking to it like they would with PulseAudio? |
Wow, this turned into a QnA session... You don't really have to answer if you don't want to. TLDR: For |
Well, I have no idea about PipeWire, this is just my 5min party knowledge, I've gathered from reading their GitLab Readme. |
PipeWire is intended to combine the features of both pulseaudio and Jack. It is supported by Poettering, there's no competition involved. It is a drop-in replacement for pulseaudio, but in general, using native interfaces is better than using the compatibility ones. |
@ssieb thanks for the info. Do you have an idea how widespread its use is, i.e. how many distributions use it as their default audio server?
I don't think any do yet. I just know that Fedora (that I use), is intending it to be the default for the release that is being worked on right now. |
Seems like PipeWire's devs actually recommend sticking with ALSA/PulseAudio libraries for now. |
Fedora will switch to it with release 34, but so far that is the only distribution as far as I know. A lot of users are switching already however due to PulseEffects having dropped support for PulseAudio and moving fully to Pipewire. It's hard to give an exact number, or even an estimate, of people using Pipewire right now but it's steadily increasing. I can say I would very much like direct Pipewire support 🤗 |
Is there a way to merge regular desktop audio with SnapClient? I'm running popOS with pipewire-alsa.
By now Fedora (>34) and Ubuntu (>22.10) use pipewire by default. As a rule of thumb, if the distro uses Wayland, chances are it is using pipewire as well. In any case it's becoming increasingly relevant. |
Hi all. Here is what I've done:

```
context.modules = [
{ name = libpipewire-module-pipe-tunnel
args = {
tunnel.mode = sink
# Set the pipe name to tunnel to
pipe.filename = "/tmp/snapcast"
node.name = "snapcast"
}
}
]
```

Restart your pipewire service: `systemctl restart --user pipewire`. You should discover a snapcast sink, and a snapcast pipe in /tmp. Now, in the snapserver configuration, use that pipe as a stream source. |
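To make the server side explicit, here is a hedged sketch of a matching stream entry (assuming the standard pipe source syntax in snapserver.conf; the stream name "Desktop" is just an example, and the path must match pipe.filename above):

```
# snapserver.conf
[stream]
# Read the audio that PipeWire writes into the pipe-tunnel sink
source = pipe:///tmp/snapcast?name=Desktop
```

Anything played to the snapcast sink on the desktop should then appear as that stream on the Snapcast server and can be assigned to clients.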
Good stuff, this works for me. Thanks, @marcpaulchand ! The only thing is that it's out of sync with video, unsurprisingly. Is there a way to mitigate that? Apparently, Pipewire has a latency offset, but it looks like it's not supported for this type of sink. At any rate, Pavucontrol shows it as a virtual output device and doesn't display an advanced section for it, wherein latency offset would be. Is there, perhaps, a way to tell Pipewire to (tell its clients to) delay video by a second via the configuration file? I tried setting stream.props -> node.latency for the Snapcast sink, and it shows up in pw-top, but it doesn't change the relative timing between audio and video. |
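In case it helps others reproduce the attempt: a hedged sketch of setting node.latency on the pipe-tunnel sink from the config above, assuming the property belongs under stream.props. The value is samples/rate, so 48000/48000 asks for roughly one second at 48 kHz; as noted, it shows up in pw-top but does not shift A/V sync.

```
{ name = libpipewire-module-pipe-tunnel
  args = {
      tunnel.mode = sink
      pipe.filename = "/tmp/snapcast"
      stream.props = {
          node.name    = "snapcast"
          # samples/rate: 48000/48000 is roughly 1 second at 48 kHz
          node.latency = "48000/48000"
      }
  }
}
```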
@muusik in the rare cases I wish to watch a movie this way, I use vlc or mplayer, both of which allow shifting audio back and forth relative to video. |
From the latest Pipewire release: a new Snapcast-related module has been added. |
I've seen this in the release notes, but I don't really know what this module exactly does. |
It automatically searches for Snapcast servers on the networks and adds them as outputs to the local system. When playing audio over it PipeWire will create a new stream on the Snapcast side (which you then have to assign to Snapcast clients). This I understood from the documentation added alongside the module. |
Also, snapserver needs to be advertised via Avahi. |
This came about from a conversation in #pipewire on IRC. If it doesn't yet do exactly what it would need to do, I'm sure the author would be happy to get a visit there! |
Are there any instructions somewhere on how to do that? |
The server will always advertise its IP and port, see server.cpp:

```
#if defined(HAS_AVAHI) || defined(HAS_BONJOUR)
auto publishZeroConfg = std::make_unique<PublishZeroConf>("Snapcast", io_context);
vector<mDNSService> dns_services;
dns_services.emplace_back("_snapcast._tcp", settings.stream.port);
dns_services.emplace_back("_snapcast-stream._tcp", settings.stream.port);
if (settings.tcp.enabled)
{
dns_services.emplace_back("_snapcast-jsonrpc._tcp", settings.tcp.port);
dns_services.emplace_back("_snapcast-tcp._tcp", settings.tcp.port);
}
if (settings.http.enabled)
{
dns_services.emplace_back("_snapcast-http._tcp", settings.http.port);
}
publishZeroConfg->publish(dns_services);
#endif
```
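To verify from another machine that these services are actually being announced, one option (assuming avahi-utils is installed) is:

```
# List and resolve advertised Snapcast servers on the local network
avahi-browse -r _snapcast._tcp
```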
That's so sweet! I need to try that out ;) |
`--player pipewire` output support for snapclient?
I renamed the issue so it more closely matches the original comment since the discussion has gone a bit off-topic. |
So does this work? If so how do I make it work? |
The PipeWire Snapcast module is documented here. |
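For anyone wanting to try it: a minimal sketch of loading the discovery module via the same context.modules mechanism as the pipe-tunnel example above. This assumes your PipeWire build ships the module under the name used here; check your version's documentation for the exact name and supported arguments.

```
context.modules = [
    # Discover Snapcast servers on the network via mDNS and expose them as local sinks
    { name = libpipewire-module-snapcast-discover }
]
```

Streams played to those sinks then show up on the Snapcast server side, as described above.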
So PipeWire seems to be slowly getting more popular. And I get why, I mean it can interface with ALSA and PulseAudio applications while promising to be a low-level and low-latency API as well.
I've never had to deal with programming any kind of audio-related program, so I have only a very limited understanding of audio protocols, so this might sound stupid.
Would adding a `pipewire` option to `--player` decrease latency as opposed to running `snapclient` with `--player alsa/pulse` and having PipeWire "translate" it? If there was a latency decrease, would it be significant enough to justify the effort needed to code support for another audio API?