
Direct --player pipewire output support for snapclient? #821

Open
k8ieone opened this issue Mar 3, 2021 · 25 comments

@k8ieone

k8ieone commented Mar 3, 2021

So PipeWire seems to be slowly getting more popular, and I get why: it can interface with ALSA and PulseAudio applications while promising to be a low-level, low-latency API as well.

I've never had to write any kind of audio-related program and have only a very limited understanding of audio protocols, so this might sound stupid.
Would adding a pipewire option to --player decrease latency as opposed to running snapclient with --player alsa/pulse and having PipeWire "translate" it? If there was a latency decrease, would it be significant enough to justify the effort needed to code support for another audio API?

@badaix
Owner

badaix commented Mar 3, 2021

PipeWire is an audio server, just like PulseAudio. Under the hood both of them use ALSA (on Linux), because ALSA provides the interface to the Linux kernel audio driver. If you aim for low latency, you should use the alsa backend.
At its core, ALSA provides exclusive access to the audio device, meaning that only one process can play audio at a time. ALSA also offers some higher-level virtual mixing interfaces, but usually something like PulseAudio is used for non-exclusive access.
Because many applications do not natively support PulseAudio or PipeWire, PA/PW provide a virtual default ALSA device that does not talk to the hardware directly, but forwards the audio stream to PA/PW, where it gets processed (effects, volume, etc.) and mixed into the output stream that is fed to the "physical" ALSA device.
So on a system with PulseAudio you have several ALSA devices (snapclient -l):

0: default
Playback/recording through the PulseAudio sound server
...
31: hw:CARD=Generic_1,DEV=0
HD-Audio Generic, ALC256 Analog
Direct hardware device without any conversions

If you use --player alsa -s 0 the audio will go through this pipeline:

Snapclient => ALSA => "virtual PA device" => PA => ALSA => device 31

If you use --player alsa -s 31 it is

Snapclient => ALSA => device 31

This will yield lower latency, but on a desktop system with PA you will not hear any other audio, or Snapclient will complain that it cannot connect to the device if audio is already playing through PA (PA releases the ALSA device when idle, just as Snapclient does).

If you use --player pulse:

Snapclient => PA => ALSA => device 31

This will yield lower latency than --player alsa -s 0, with better error handling, control and jitter, and other processes will still be able to play audio. It has higher latency and more jitter than --player alsa -s 31, but is still preferred for a desktop system (it also provides its own volume control through PA).

All of this also applies to PipeWire. So on systems running PipeWire instead of PulseAudio, it would make sense to have native support for it. For headless clients, the strong recommendation is to use ALSA.
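
For reference, the three variants map roughly to invocations like the following (host name and device index are placeholders, not taken from a real setup):

# through the virtual PA/PW device (highest latency)
snapclient --host <server-ip> --player alsa --soundcard 0

# directly to the hardware device (lowest latency, exclusive access)
snapclient --host <server-ip> --player alsa --soundcard 31

# through the PulseAudio client lib (or PipeWire's PA emulation)
snapclient --host <server-ip> --player pulse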

Edit: I would guess that PipeWire's PulseAudio server emulation should be close to PipeWire's native client library (performance and feature wise).

@k8ieone
Author

k8ieone commented Mar 3, 2021

Very nice explanation, thanks. I understand ALSA's exclusive access limitation (thus the reason for PA's existence).

Just out of curiosity, is PipeWire's PA emulation similar to Pulse's ALSA emulation? What I mean is: does it also create a "fake device" (in Pulse's case I believe that would be called an output sink?) that programs can talk to, like Pulse does with ALSA?
Or is PipeWire's Pulse emulation just an alternative implementation of the PA protocol?

snapclient => PA => PW => ALSA => device

or like this:

snapclient => PW (PA emu mode) => ALSA => device

If it's a re-implementation of the PulseAudio protocol, is there any point in making programs interface directly with PipeWire, using its protocol instead of talking to it like they would with PulseAudio?
I'd guess that for a program to work universally, it should use ALSA, because it can then either work with devices directly or go through either PipeWire or PulseAudio.

@k8ieone
Author

k8ieone commented Mar 3, 2021

Wow, this turned into a Q&A session... You don't really have to answer if you don't want to.

TL;DR: For snapclient running on a PipeWire system, it's best to use --player pulse, and it's not really worth it to implement PipeWire's native libraries. Correct?

@badaix
Owner

badaix commented Mar 10, 2021

Well, I have no idea about PipeWire; this is just my five-minute party knowledge, gathered from reading their GitLab readme.
I think the idea behind PipeWire is that it focuses not only on audio but also on video, and maybe it's better or more modern in some regards than PulseAudio, or the authors just don't like Poettering 😉
PulseAudio and PipeWire run as servers and offer client libs to connect to them and play audio through them. If I were to invent a new audio server (well, actually Snapcast is an audio server), I would probably also make the server compatible with the PulseAudio client libs, so that it can be a drop-in replacement, while the PipeWire client lib might give better control or more features.
Bottom line: I don't think that a --player pipewire using the native PipeWire client lib would give better results, since Snapclient does quite basic audio playback, but it depends on reliable playback latency estimation, which would hopefully work.

@ssieb

ssieb commented Mar 10, 2021

PipeWire is intended to combine the features of both PulseAudio and JACK. It is supported by Poettering; there's no competition involved. It is a drop-in replacement for PulseAudio, but in general, using native interfaces is better than using the compatibility ones.

@badaix
Owner

badaix commented Mar 12, 2021

@ssieb thanks for the info. Do you have an idea of how widespread its use is, i.e. how many distributions use it as their default audio server?
@satcom886 I've created a "Feature requests" project to keep this issue tracker clean and not drop this feature request. I have linked it there and will close it here.

badaix closed this as completed Mar 12, 2021
@ssieb

ssieb commented Mar 12, 2021

I don't think any do yet. I just know that Fedora (which I use) intends to make it the default in the release currently being worked on.
https://fedoraproject.org/wiki/Changes/DefaultPipeWire

@k8ieone
Author

k8ieone commented Mar 25, 2021

Seems like PipeWire's devs actually recommend sticking with ALSA/PulseAudio libraries for now.
https://gitlab.freedesktop.org/pipewire/pipewire/-/wikis/FAQ#what-audio-api-do-you-recommend-to-use

@PureTryOut
Contributor

PureTryOut commented Mar 29, 2021

Fedora will switch to it with release 34, but so far that is the only distribution as far as I know. A lot of users are already switching, however, because PulseEffects has dropped support for PulseAudio and moved fully to PipeWire. It's hard to give an exact number, or even an estimate, of people using PipeWire right now, but it's steadily increasing.

I can say I would very much like direct Pipewire support 🤗

@rwjack

rwjack commented Feb 18, 2023

Is there a way to merge regular desktop audio with Snapclient?

I'm running Pop!_OS with pipewire-alsa

@deisi

deisi commented May 12, 2023

@ssieb thanks for the info. Do you have an idea of how widespread its use is, i.e. how many distributions use it as their default audio server?

By now Fedora (>34) and Ubuntu (>22.10) use PipeWire by default.

As a rule of thumb, if the distro uses Wayland, chances are it is using PipeWire as well. In any case it's becoming increasingly relevant.

@marcpaulchand

Hi all

Here is what I've done.
Add /etc/pipewire/pipewire.conf.d/snapcast.conf

context.modules = [
    {
        name = libpipewire-module-pipe-tunnel
        args = {
            tunnel.mode = sink
            # Set the pipe name to tunnel to
            pipe.filename = "/tmp/snapcast"
            node.name = "snapcast"
        }
    }
]

Restart your pipewire service

systemctl restart --user pipewire

You should discover a snapcast sink with wpctl status (install WirePlumber if you do not have the command)

$ wpctl status
PipeWire 'pipewire-0'
 └─ Clients:

Audio
 ├─ Devices:
 │
 ├─ Sinks:
 │  *   36. snapcast                            [vol: 1.00]

and a snapcast pipe in /tmp

Now in snapserver.conf, I just reference the pipe:

source = pipe:///tmp/snapcast?name=pipewire_pipe
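
To actually send audio there, route a stream (or your default output) to the new sink. The commands below are just one way to do it (assuming WirePlumber's wpctl and pw-cat/pw-play are installed); the sink id will differ on your system:

# make the snapcast sink the default output (use the id shown by wpctl status)
wpctl set-default 36

# or play a single stream to it by node name
pw-play --target snapcast some-file.wav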

@muusik

muusik commented May 31, 2023

Good stuff, this works for me. Thanks, @marcpaulchand ! The only thing is that it's out of sync with video, unsurprisingly. Is there a way to mitigate that? Apparently, Pipewire has a latency offset, but it looks like it's not supported for this type of sink. At any rate, Pavucontrol shows it as a virtual output device and doesn't display an advanced section for it, wherein latency offset would be.

Is there, perhaps, a way to tell Pipewire to (tell its clients to) delay video by a second via the configuration file? I tried setting stream.props -> node.latency for the Snapcast sink, and it shows up in pw-top, but it doesn't change the relative timing between audio and video.

@sixtyfive

@muusik In the rare cases I wish to watch a movie this way, I use VLC or MPlayer, both of which allow shifting audio back and forth relative to video.
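
For example (option names from the respective man pages; which direction a positive value shifts things is best checked there):

mplayer -delay 1.0 movie.mkv        # audio/video delay in seconds
vlc --audio-desync 1000 movie.mkv   # audio desynchronization compensation in milliseconds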

@thomas-mc-work

From the latest PipeWire release notes:

Add snapcast-discover module to stream to snapcast servers.

@badaix
Owner

badaix commented May 28, 2024

I've seen this in the release notes, but I don't really know what exactly this module does.
In any case it's an honor to see that PipeWire is adding Snapcast support.

@PureTryOut
Contributor

It automatically searches for Snapcast servers on the network and adds them as outputs to the local system. When playing audio over it, PipeWire will create a new stream on the Snapcast side (which you then have to assign to Snapcast clients).

This I understood from the documentation added alongside the module.
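
If it follows PipeWire's usual module-loading convention, enabling it should look roughly like this (the module name and config path here are my assumption based on the release note; check the module's documentation for the exact name and supported args):

# ~/.config/pipewire/pipewire.conf.d/snapcast-discover.conf
context.modules = [
    {
        name = libpipewire-module-snapcast-discover
        args = { }
    }
]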

@marcpaulchand

Source:
https://gitlab.freedesktop.org/pipewire/pipewire/-/blob/master/src/modules/module-protocol-simple.c#L151

On the snapcast server, add the following to the snapserver.conf file:

[stream]
sampleformat = 48000:16:2
source = tcp://127.0.0.1:4711?name=PipeWireSnapcast&mode=client

Snapcast will try to connect to the protocol-simple server and fetch the samples from it. Snapcast tries to reconnect when the connection is somehow broken.

Also, snapserver needs to be advertised via Avahi.

@sixtyfive

Add snapcast-discover module to stream to snapcast servers.

This came about from a conversation in #pipewire on IRC. If it doesn't yet do exactly what it would need to do, I'm sure the author would be happy to get a visit there!

@PureTryOut
Contributor

Also, snapserver need to be advertised by avahi.

Are there any instructions somewhere on how to do that?

@badaix
Owner

badaix commented May 28, 2024

The server will always advertise its IP and port:

server.cpp:

#if defined(HAS_AVAHI) || defined(HAS_BONJOUR)
        auto publishZeroConfg = std::make_unique<PublishZeroConf>("Snapcast", io_context);
        vector<mDNSService> dns_services;
        dns_services.emplace_back("_snapcast._tcp", settings.stream.port);
        dns_services.emplace_back("_snapcast-stream._tcp", settings.stream.port);
        if (settings.tcp.enabled)
        {
            dns_services.emplace_back("_snapcast-jsonrpc._tcp", settings.tcp.port);
            dns_services.emplace_back("_snapcast-tcp._tcp", settings.tcp.port);
        }
        if (settings.http.enabled)
        {
            dns_services.emplace_back("_snapcast-http._tcp", settings.http.port);
        }
        publishZeroConfg->publish(dns_services);
#endif
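
So with Avahi running on the server, the services above should show up without any extra configuration. One way to verify from another machine on the network (assuming avahi-utils is installed) is:

# list and resolve advertised Snapcast stream servers
avahi-browse -r _snapcast._tcp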

@k8ieone
Author

k8ieone commented Jun 3, 2024

It automatically searches for Snapcast servers on the network and adds them as outputs to the local system. When playing audio over it, PipeWire will create a new stream on the Snapcast side (which you then have to assign to Snapcast clients).

That's so sweet! I need to try that out ;)

k8ieone changed the title from "Direct PipeWire support?" to "Direct --player pipewire output support for snapclient?" Jun 3, 2024
@k8ieone
Author

k8ieone commented Jun 3, 2024

I renamed the issue so it more closely matches the original comment since the discussion has gone a bit off-topic.

@gerroon

gerroon commented Aug 3, 2024

So does this work? If so how do I make it work?

@jwillikers
Contributor

So does this work? If so how do I make it work?

The PipeWire Snapcast module is documented here.
