ALSA shouldn't be default in LMMS #1600

Closed
crank123 opened this Issue Jan 11, 2015 · 101 comments

@crank123

When using LMMS on Linux with the default settings, one large problem is apparent: at any moment the sound can become very distorted. Sometimes restarting PulseAudio will fix it; other times you have to restart the computer. However, switching to SDL will ALWAYS solve this problem. SDL does take up a bit more resources, but it is manageable. I ran LMMS with SDL on a Raspberry Pi without overclocking, while running Chromium, Scratch, and a terminal window, without a hitch.

@crank123

#1600 whoot whoot!

@diizy
Contributor
diizy commented Jan 11, 2015

On 01/11/2015 10:52 PM, Gabe Bauer wrote:

When using LMMS on Linux using the default settings one large problem
is apparent. At any moment the sound can become very very distorted.
Sometimes restarting pulseaudio will fix it.

Don't use PulseAudio.

@crank123

By default LMMS uses ALSA, which is the cause of this problem. The problem is not apparent when using SDL, but SDL is not the default.

@diizy
Contributor
diizy commented Jan 11, 2015

On 01/12/2015 12:01 AM, Gabe Bauer wrote:

By default LMMS uses ALSA. Which is the cause of this problem.
However, this problem is not apparent when using SDL. But, SDL is not
the default.

ALSA works just fine. PulseAudio is the problem. Set up your ALSA backend
to use ALSA directly, instead of letting PulseAudio intercept the output.

@tresf
Member
tresf commented Jan 11, 2015

IIRC it already is the default on Windows, for similar reasons (the DirectSound drivers had bugs that made the startup experience bad for the majority of users).

I see no problem with changing this if it is the general consensus. I'd like to hear from more people that compose on Linux @Umcaruje @unfa @mikobuntu and see if this would appease the masses.

The solution "don't use PulseAudio" is like telling Windows users not to use DirectSound, or telling Apple users not to use CoreAudio. Sometimes we have to ship with sane defaults, and we changed it on one platform for this exact reason. Reasons against it should offer some good supporting arguments. I've been using Linux for too long to still see "don't use PulseAudio" offered as a solution to these things.

@diizy
Contributor
diizy commented Jan 11, 2015

On 01/12/2015 12:55 AM, Tres Finocchiaro wrote:

IIRC it already is default on Windows, for similar reasons (the direct
sound drivers had bugs that made the startup experience bad for the
majority of users)

I see no problem with changing this if it is the general consensus.
I'd like to hear from more people that compose on Linux and see if
this would appease the masses.

The solution "don't use PulseAudio" is like telling Windows users not
to use direct sound, or telling Apple users not to use CoreAudio.
Sometimes we have to ship with sane defaults, and we changed it on one
platform for this exact reason. Reasons against it should offer some
good reasoning. I've been using Linux for too long to still see "Don't
use pulse audio" still as solutions to these things.

We can expect a bit more from Linux users than we can from Windows
users. Linux users are generally more tech-savvy and know how to set up
their computers; it's a much more DIY environment.

ALSA works just fine for most people. SDL causes more overhead and is an
additional dependency which not everyone may want to install. Everyone
has ALSA installed, which makes it a safe default.

It's simply a fact that if you want to do audio work, PulseAudio is a
piece of crap. You don't need to get rid of it, just configure your
backend to use ALSA directly so that PA can't intercept it. Our user
wiki has step-by-step instructions for doing this.
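
For reference, the gist of that setup (labels are approximate and device names vary per machine, so treat `hw:0,0` as the common first-card example rather than a universal value; `aplay -l` from alsa-utils lists the real ones):

```
# In LMMS's audio settings:
AUDIO INTERFACE: ALSA (Advanced Linux Sound Architecture)
DEVICE:          hw:0,0    <- card 0, device 0; leaving this at "default"
                              usually lets PulseAudio intercept the stream
```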

@tresf
Member
tresf commented Jan 11, 2015

So the argument is a dependency and performance argument?

The performance problems exist identically on all platforms, but on Windows PortAudio just doesn't work 90% of the time. Fortunately for the proprietary OSes, the SDL libraries are bundled there, so the dependency argument is out the window for those platforms.

It is unfortunate that the very platform our software is built on can't use it OOTB. 😢

-Tres

@tresf
Member
tresf commented Jan 11, 2015

... and from a performance perspective, from what I observe, SDL uses a lot of resources at idle but performs quite well otherwise. This seems to be the consensus on SDL in general, due to the way its engine is written.

@crank123

I completely agree. It performed well on a Raspberry Pi using SDL, so we know it doesn't take up THAT many resources.

@Sti2nd
Contributor
Sti2nd commented Jan 11, 2015

diizy is right in that because PortAudio is the default on Ubuntu (right?) it interferes with ALSA, which is the default for LMMS. At least that sounds familiar; look at the last Q/A: https://lmms.io/documentation/User_FAQ

We can expect a bit more from Linux users than we can from Windows
users.

It is evening out though; Linux is starting to become so simple, what a shame...

@crank123

Not every Linux user is advanced. If you are new and just want to install LMMS and have it work out of the box, then it would make sense to have SDL by default. You want this project to get better, right? If your answer was yes, then you should do everything in your power to do so, which would include making the audio work universally out of the box. If your answer was no, then you should have nothing to do with the project.

@tresf
Member
tresf commented Jan 11, 2015

diizy is right in that because PortAudio is default on Ubuntu (right?) it interferes with Alsa which is default for LMMS.

Is this a statement or a question? Have you ever composed on Linux? If you're saying he's right about the wiki, we can deduce that ourselves, thank you. The peanut gallery is over to the right.

It is evening out though, linux is starting to become so simple, what a shame...

It has been this way for years and years. So has PulseAudio. PulseAudio pisses off many Linux users, but for all the anger it causes, it has brought some of the most basic multi-tasking capabilities out of cheap sound cards that previously seemed to fight over sound devices (and one of them would lose, making life even sadder for most end-users).

If we want to fight a battle against PulseAudio, this is the wrong place to do it. If SDL fixes this, we can always offer lmms-nosdl to the purists while giving sane defaults to the masses. I personally prefer stuff that works out of the box. When I install a DAW, often I don't even know if I'm going to use it for more than a day. If the audio doesn't work OOTB, it does more damage to that software than it does to PulseAudio.

@crank123

Also, switching to SDL doesn't mean you can't switch back; there is a reason the settings exist. That being said, I do like the idea of the simple lmms-nosdl.

@tresf
Member
tresf commented Jan 11, 2015

... but that said, I do agree with Vesa that the average Linux user is used to this sort of crap. That's why I'd like to sample the Linux users a bit. I'd like to know how the community usually configures their system and if the majority of them are already doing this, or if they are utilizing our workarounds in our wiki.

@Sti2nd
Contributor
Sti2nd commented Jan 11, 2015

Is this a statement or a question? Have you ever composed on Linux?

Statement, but it has been a long time since I used Ubuntu, and if I recall correctly they changed sound drivers sometime in the last five years? That is why I was unsure which sound driver they use now. Talking about Ubuntu.

PulseAudio pisses off many Linux users,

Oh, so it isn't just an LMMS problem? I agree that out of the box is best, so yes, if SDL works on more distros/computers it should be the default on Linux too.

@tresf tresf added the enhancement label Jan 11, 2015
@tresf tresf added this to the 1.2.0 milestone Jan 11, 2015
@crank123

Linux users who have used it for more than a year are generally used to crap like this. But that is exactly what it is... crap. Users shouldn't be forced to go through useless steps to get a single application to work.

@diizy
Contributor
diizy commented Jan 11, 2015

On 01/12/2015 01:20 AM, Gabe Bauer wrote:

Not every linux user is advanced. If you are new and just want to
install LMMS and have it work out of the box than it would make sense
to have SDL by default. You want this project to get better right? If
your answer was yes than you should do everything in your power to do
so. Which, would include making the audio work universally out of the
box. If your answer was no than you should have nothing to do with the
project.

Opinions are like arseholes: everyone has one.

It's obvious that this is going nowhere. We've heard your view,
repeating it over and over is not going to change anything. I consider
this issue closed.

@tresf
Member
tresf commented Jan 11, 2015

If your answer was no than you should have nothing to do with the project.

Careful... These are fighting words and Vesa's opinion matters here regardless of whether or not you agree with it. Furthermore, he does belong on this project. He's our 2nd most active coder in the history of ever. (both 2nd and 3rd according to our about dialog). Let's not bite the hand that feeds us now. :)

@diizy
Contributor
diizy commented Jan 11, 2015

An audio software/DAW is not like a notepad or a browser flash game. It requires some investment and RTFM to get going. That's how it should be. LMMS does not need to be "a DAW for the dummies" and we really need to be careful not to head towards that direction. I've written about the reasons why we should avoid that before on the mailing list, and don't really care to repeat it here...

ALSA is currently the backend that works the most reliably and provides the best performance on Linux. I see no reason to change the default.

However, there has been talk about making the configuration of the ALSA backend easier. The idea was to replace the "device" textbox with a dropdown menu, similar to how Audacity handles it. This way configuring your ALSA backend properly becomes much easier and possibly less of a hassle for a beginner.

@tresf
Member
tresf commented Jan 11, 2015

It requires some investment and RTFM to get going. That's how it should be. LMMS does not need to be "a DAW for the dummies" and we really need to be careful not to head towards that direction.

I think the creative process appeals differently to all types of artists. I also feel there is a difference between an "Easy Button" and saner defaults. This is why I think opinions matter here. Hopefully we'll get some more feedback about this....

What happens if SDL is not available? Does it fall back to Dummy Audio, or does it fall back to ALSA?

@Umcaruje
Member

I'd like to sample the Linux users a bit. I'd like to know how the community usually configures their system and if the majority of them are already doing this, or if they are utilizing our workarounds in our wiki.

I use ALSA, but I always specify my device. If I leave it at default, PulseAudio adopts it and the sound becomes crap. As a Linux user, I don't see why SDL shouldn't be the default, though. It's the only audio backend that has worked out of the box every time, on any platform.

Also, SDL does not take over your soundcard like ALSA does. Sure, it is more CPU-heavy, but it's a relief for people with only one soundcard, which I guess is most of our users (I do not fall into that category, though).

@softrabbit
Member

Could be that the LMMS ALSA output isn't 100% compatible with PulseAudio. Did a quick test, and on this system it went like this:

  • ALSA "default" -> PA: distorted sound
  • ALSA -> hw:0,0: OK
  • SDL -> PA (not sure if this goes through some ALSA interface as well): OK
  • straight to PA: OK

I won't make any guesses at this point whether it's LMMS or PA that's buggy.

@tresf
Member
tresf commented Jan 12, 2015

So should PulseAudio be default then? That way we are not relying on optional packages?

I started looking around for the SDL packages last night on Ubuntu and wasn't able to find them. Are they installed by default now? Is the dependency argument moot for the *buntus?

@tresf
Member
tresf commented Jan 12, 2015

Edit... PA says "Bad latency!", so I assume that isn't a good option.

@diizy
Contributor
diizy commented Jan 12, 2015

On 01/12/2015 12:14 PM, Raine M. Ekman wrote:

Could be that the LMMS ALSA output isn't 100% compatible with
PulseAudio. Did a quick test, and on this system it went like this:

  • ALSA "default" -> PA: distorted sound
  • ALSA -> hw:0,0: OK
  • SDL -> PA (not sure if this goes through some ALSA interface as
    well): OK
  • straight to PA: OK

I won't make any guesses at this point whether it's LMMS or PA that's
buggy.

Yeah, what we need is to get the JACK backend working. It's the only
real option for pro-quality audio on Linux, and it also allows other
sound sources to work at the same time.

And to do that we need to make the engine RT-safe...

@diizy
Contributor
diizy commented Jan 12, 2015

On 01/12/2015 03:33 PM, Tres Finocchiaro wrote:

So should PulseAudio be default then?

PA is the worst possible choice for audio work. Latencies are horrible,
etc. If SDL uses PA as a backend, then SDL will probably have problems
with latencies as well.

I see absolutely no point in encouraging users to use inferior backends
just to make it "easier" for them - it'll only hurt them in the long
run; they'll have to learn how to set up their system anyway. I
personally have no interest whatsoever in creating a "toy DAW for
dummies". The whole mentality where everything has to "just work" - even
at the cost of functionality - is the problem with software these days,
everything getting dumbed down... we can expect more from our users.

@tresf
Member
tresf commented Jan 12, 2015

If SDL uses PA as a backend then SDL probably will have problems with latencies as well.

That is a good point. @Umcaruje can you confirm this?

The whole mentality where everything has to "just work" - even at the cost of functionality - is the problem with software these days, everything getting dumbed down... we can expect more from our users.

Perhaps what you are describing is a growing problem, but the problem in bug #1600 is not about things being dumbed down. It's not about coddling our users. This is about a default setting that breaks our software for most users, and whether we have the ability to improve this out of the box.

I still fail to see what this setting hurts. If your argument has merit, why don't we force "DummyAudio" by default so that the masses on Linux are forced to learn and investigate their back-end?

@diizy
Contributor
diizy commented Jan 12, 2015

On 01/12/2015 04:47 PM, Tres Finocchiaro wrote:

I still fail to see what this setting hurts.

Performance.

What we should do (as I already mentioned) is make configuring the ALSA
backend easier. Replace textbox with dropdown. Instead of offering an
inferior backend to make things "easier", offer the users the tools they
need to more easily setup their system for good quality audio playback.

@tresf
Member
tresf commented Jan 12, 2015

What we should do (as I already mentioned) is make configuring the ALSA backend easier. Replace textbox with dropdown. Instead of offering an inferior backend to make things "easier", offer the users the tools they need to more easily setup their system for good quality audio playback.

We should fix DirectSound support too, but instead we ship with SDL enabled on Windows until we can fix it (assuming it CAN be fixed). That's the point here. Saner defaults.

But we really need some test results to make any performance arguments. @crank123 @Umcaruje, can you please run some use-case tests? We need to know if SDL suffers from the "Bad latency!" issues documented with PulseAudio's menu selection. unfa-Spoken is a pretty resource-intensive benchmark IMO; if it's unplayable with a certain backend, we should document that and make our decision based on that.

I'd also like to know if SDL is something that comes pre-installed with most Linux distros -- and if not -- the average dependency file size it carries.

@tresf tresf referenced this issue in LMMS/lmms.io Jan 12, 2015
Closed

User FAQ, troubleshooting section #135

@mikobuntu
Contributor

@diizy @tresf I did a quick test on my system (Ubuntu 12.04), which has standard PulseAudio installed and realtime enabled at the user level, i.e. I'm a member of the @audio group and have pulse running at a higher priority. Anyway, my test was disabling PulseAudio temporarily (turning off autospawn) and running LMMS with the SDL backend, which gives audio, proving SDL does not require PulseAudio.

My 2 cents: personally, like most here, I would like to see the JACK engine 'fixed' in LMMS, but I believe using ALSA directly is the closest we are going to get to realtime latency at this moment in time.

I have yet to test further how SDL compares to ALSA, but my guess, going by 99% of all other audio apps on Linux, is that ALSA will win hands down...

@tresf I'm almost sure I had to install libsdl to compile LMMS on my box, so I don't think it comes pre-installed. I can possibly check my install log if need be.

@tresf
Member
tresf commented Jan 12, 2015

@mikobuntu Thanks for the feedback. I'm going to fire up dedicated hardware on vanilla 14.04 to see for myself. Had to pull an old laptop from the archives. It is a bit slow. Hopefully I can get some good test results from it.

@mikobuntu
Contributor

@tresf no problem ;) And it is actually a good idea to test on an older machine, as not every user will be running state-of-the-art hardware. You will find that unless you have LMMS running realtime, all the system calls etc. will grind it to a halt, and if your soundcard is sharing a port or IRQ with something else, expect lots of jitter and latency. Check with the command: cat /proc/interrupts

@tresf
Member
tresf commented Jan 13, 2015

FYI - On Ubuntu 14.04.1 32-bit, stable-1.0 is 127MB download.

(Qt dependencies probably weigh in pretty large there...)

@Sti2nd
Contributor
Sti2nd commented Jan 13, 2015

On Ubuntu 14.04.1 32-bit, stable-1.0 is 127MB download

And maybe a soundfont? I don't remember where I read it, but I recall Israel telling about having added a 50 MB, or was it 100 MB, soundfont as optional on Ubuntu.

@tresf
Member
tresf commented Jan 13, 2015

Test results:

  • Hardware: Dell Inspiron 6000
  • Audio: Onboard (Intel ICH6)
  • CPU: Intel Pentium M 760 "Centrino", 2000.0 MHz
  • RAM: 2.0 GB
  • OS: Ubuntu 14.04.1 x86, with updates applied
  • LMMS: 1.0.0 (non-Frankenstein build) from Canonical

UNFA SPOKEN TEST

ALSA

  • unfa-Spoken.mmpz - Unplayable with any device settings

SDL

  • unfa-Spoken.mmpz - Unplayable with any device settings

PulseAudio

  • unfa-Spoken.mmpz - Unplayable with any device settings

MOMO64 TEST

ALSA

  • Momo64-esp.mmpz - 42% CPU (Visual average for playback)
    • Used with pasuspender command, hw:0,0
    • When LMMS is minimized, track playback skips - playback becomes unbearable

SDL

  • Momo64-esp.mmpz - 49% CPU (Visual average for playback)
  • When LMMS is minimized, track playback skips - playback corrects itself and continues

PulseAudio

  • Momo64-esp.mmpz - 49% CPU (Visual average for playback)
  • When LMMS is minimized, track playback skips - playback corrects itself and continues
  • Results seem to be identical to SDL
  • Did not experience Bad latency! as described in dropdown

Live playback latency on all three settings was not noticeable to me (under 100ms)

@diizy
Contributor
diizy commented Jan 13, 2015

On 01/13/2015 07:24 PM, Tres Finocchiaro wrote:

  • Did not experience |Bad latency!| as described in dropdown

Live playback latency on all three settings was not noticeable to me
(under 100ms)

What period size were you using? Try with different period sizes, as
this affects things like latency very much. I use 256 frames usually.

@tresf
Member
tresf commented Jan 13, 2015

What period size were you using?

Default. (IIRC 256).

-Tres

@diizy
Contributor
diizy commented Jan 13, 2015

On 01/13/2015 10:53 PM, Tres Finocchiaro wrote:

What period size were you using?

Default. (IIRC 256).

Cool. I however get very noticeable latencies on the PA backend. Like
almost a second of latency, making any live playing practically
impossible... and from what I hear this is not an uncommon case.

@tresf
Member
tresf commented Jan 13, 2015

Cool. I however get very noticeable latencies on the PA backend. Like almost a second of latency, making any live playing practically impossible... and from what I hear this is not an uncommon case.

Yeah, I remember that symptom too, but it has been a while (sorry!). This is the first time I've tried it out on shitty hardware. Is SDL just as bad for you?

@tresf
Member
tresf commented Jan 13, 2015

Related to SDL performance... IIRC, Frets on Fire (similar to Guitar Hero) uses SDL, and I'd assume latency is quite important for the timing in that game.

@diizy
Contributor
diizy commented Jan 13, 2015

On 01/13/2015 11:03 PM, Tres Finocchiaro wrote:

Cool. I however get /very/ noticeable latencies on the PA backend.
Like almost a second of latency, making any live playing
practically impossible... and from what I hear this is not an
uncommon case.

Yeah, I remember that symptom too but it has been a while (sorry!).
This is the first I've tried it out on shitty hardware. Is SDL just as
bad for you?

Just tried it - and yes, it is. The same outrageous latency is
noticeable with both SDL and PA.

@crank123

So, overall we seem to get the same results when using either SDL or PA. They both increase CPU usage but not drastically. Some more tests should help. I'll do some on both my HP Stream and Raspberry Pi to get the best results.

@crank123 crank123 closed this Jan 13, 2015
@crank123 crank123 reopened this Jan 13, 2015
@crank123

That was a mistake...sorry

@crank123

HP Stream
RAM - 2 GB
OS - Ubuntu 14.10 64-bit
CPU - 2.16GHz
HD - 29.6 GB

CPU Usage
Idle (LMMS not running) - 20-25%
Idle (LMMS running) - 40-45%
unfa-Spoken.mmpz (playing-ALSA) - 100% and sound distorted
unfa-Spoken.mmpz (playing-PA) - 93-100% horrible latency
unfa-Spoken.mmpz (playing-SDL) - 93-100% no problems

@diizy
Contributor
diizy commented Jan 13, 2015

On 01/14/2015 01:21 AM, Gabe Bauer wrote:

HP Stream
RAM - 2 GB
OS - Ubuntu 14.10 64-bit
CPU - 2.16GHz
HD - 29.6 GB

CPU Usage
Idle (LMMS not running) - 20-25%
Idle (LMMS running) - 40-45%
unfa-Spoken.mmpz (playing-ALSA) - 100% and sound distorted
unfa-Spoken.mmpz (playing-PA) - 93-100% horrible latency
unfa-Spoken.mmpz (playing-SDL) - 93-100% no problems

Try with ALSA with the device setting configured properly.

@tresf
Member
tresf commented Jan 14, 2015

VirtualBox + 12.04 x64 seems to work well with the SDL option as well.

All sounds in general have about 60ms latency due to the virtualization layer.

ALSA seemed to work OK using pasuspender and hw:0,0, but I accidentally started LMMS up with SDL and pasuspender, and system sounds never recovered afterward, requiring a reboot.

Overall within VirtualBox, SDL seems to do a better job of halting and resuming audio when it does cut out, but in all scenarios -- just like the low powered Dell Inspiron tests -- playback really seems to take a back seat to just about any other CPU function.

I haven't done serious composition on Linux in ages... Do you guys have some other tricks with ALSA (or even SDL for that matter) for performance? I realize my resources are a bit underpowered here, so my test cases may not represent the masses.

@diizy
Contributor
diizy commented Jan 14, 2015

On 01/14/2015 04:13 AM, Tres Finocchiaro wrote:

I haven't done serious composition on Linux in ages... Do you guys
have some other tricks with ALSA (or even SDL for that matter) for
performance? I realize my resources are a bit underpowered here, so my
test cases may not represent the masses.

No, just set the device, and set period size. Larger period size ->
better performance (but worse latency + worse accuracy on
non-sample-exact controls).

@tresf
Member
tresf commented Jan 14, 2015

Hmm... Do we do any prioritization with QThreads now?
http://stackoverflow.com/questions/21918344/how-to-lower-qt-gui-thread-priority

The back ends seem to be invoked by QThreads already.

Win32 seems to be a horse of a different color though... some proprietary Windows API hooks for prioritization at the system level.

@diizy
Contributor
diizy commented Jan 14, 2015

On 01/14/2015 06:41 AM, Tres Finocchiaro wrote:

Hmm... Do we do any prioritization with QThreads now?

Not to my knowledge.

The back ends seem to be invoked by QThreads already.

The mixer thread handles the backend. What we need is to make the mixer
thread completely non-blocking.

@tresf
Member
tresf commented Jan 14, 2015

What we need is to make the mixer thread completely non-blocking.

Hmm.. I'm not sure we're on the same page here...

What you are describing... blocking... is a thread waiting on something we told it to wait on.

What I'm talking about is prioritization, which is the CPU deciding to let one process take priority over another.

Generally the GUI runs on a dedicated thread (which can block other threads if they share concurrent objects with locks). So... changing a volume knob from the GUI thread can block another thread until finished (just the nature of an atomic variable/state), but scrolling the Song Editor or minimizing and maximizing window content really should not be blocking anything, right? Unless there is something about the way Qt redraws stuff that is putting a mutex on something.

@tresf
Member
tresf commented Jan 14, 2015

Ok... Here's a place where we seem to change priority: Mixer.cpp#L141

I'm interested to know the effect of doing something similar on the GUI thread, but lower -- although I'm not sure where the GUI thread is initialized (inherited by the main window?). I'll look around.

for( int i = 0; i < m_numWorkers+1; ++i )
{
    MixerWorkerThread * wt = new MixerWorkerThread( this );
    if( i < m_numWorkers )
    {
        wt->start( QThread::TimeCriticalPriority );
    }
    m_workers.push_back( wt );
}
@tresf
Member
tresf commented Jan 14, 2015

So it looks like Qt objects are aware of their parent threads. This is very nice, as it alleviates us from trying to track the GUI thread throughout our code.

This means in any QObject code (such as in the constructor of MainWindow.cpp) we should be able to put:

QThread::currentThread()->setPriority(QThread::LowPriority); // or any of the values below

The possible values are:

QThread::IdlePriority         // 0  scheduled only when no other threads are running
QThread::LowestPriority       // 1  scheduled less often than LowPriority
QThread::LowPriority          // 2  scheduled less often than NormalPriority
QThread::NormalPriority      // 3  the default priority of the operating system
QThread::HighPriority        // 4  scheduled more often than NormalPriority
QThread::HighestPriority      // 5  scheduled more often than HighPriority
QThread::TimeCriticalPriority // 6  scheduled as often as possible
QThread::InheritPriority      // 7  use the same priority as the creating thread (the default)

By default the parent's priority is passed to child threads. Of course, as you can see in the snippet from the previous post, Mixer threads are set as TimeCriticalPriority, so perhaps we just need to bump MainWindow down a bit. MainWindow likely defaults to NormalPriority, which is the default for the OS.

Also as a reminder, I've read that Microsoft Windows needs some special massaging in this area, so what works for Linux might not necessarily work for the Redmond flavors. 👍

@tresf
Member
tresf commented Jan 14, 2015

Ok, I set an execution cap on the VM, and that setting in the MainWindow.cpp constructor really doesn't seem to help all that much by itself, although I invite others to try.

@diizy do you think that in some of our MixerWorkerThreads we are updating the GUI by accident (such as the FadeButton)?

If we're attaching GUI slots to worker threads, what guarantees that they are not being fired with the same priority as their parent, or worse, that the slot is not being fired in the worker queue (which may be the blocking you are referring to...)?

@tresf
Member
tresf commented Jan 14, 2015

Debugging QThread::currentThread()->priority() in FadeButton's paint event, I've confirmed that the FadeButton priority matches that of the MainWindow which is a good sign... still investigating... :)

@diizy
Contributor
diizy commented Jan 14, 2015

On 01/14/2015 03:52 PM, Tres Finocchiaro wrote:

What we need is to make the mixer thread completely non-blocking.

Hmm.. I'm not sure we're on the same page here...

What you are describing... blocking... is a thread waiting on
something we told it to.

What I'm talking about is prioritization, which is the CPU deciding to
let one process take priority over another.

They're related issues. Because blocking can lead to something called
priority inversion (lower priority thread blocks higher priority
thread). If the main mixer thread is completely non-blocking, then it
obviously can't be blocked by lower-priority threads.

Generally the GUI runs on a dedicated thread (which can block other
threads if they have concurrent objects with locks). So.... changing a
volume knob from the GUI thread can block another thread until finished

Nope, changing a volume knob is not a blocking action.

(just the nature of an atomic variable/state),

Atomics don't use locks, unless the platform doesn't support atomic
operations - in which case they may be emulated by using locks. The
whole point of atomic operations is that they enable threadsafety
without using locks.

but scrolling the Song Editor or minimizing and maximizing the windows
content really should not be blocking anything, right? Unless there is
something about the way Qt redraws stuff that is putting a mutex on
something.

GUI stuff shouldn't generally lock DSP threads at all. If they do, it's
usually a design flaw - and we do have plenty of those...

@diizy
Contributor
diizy commented Jan 14, 2015

On 01/14/2015 04:37 PM, Tres Finocchiaro wrote:

Ok, I set an execution cap on the VM and that setting in the
constructor |MainWindow.cpp| really doesn't seem to help all that much
by itself although I invite others to try.

@diizy https://github.com/diizy do you think that in some of our
MixerWorkerThreads we are updating the GUI on accident (such as the
|FadeButton|)?

No... if a function is called from a thread, it is executed in that same
thread. I.e. if we call a function that updates FadeButton from a DSP
thread, it gets executed in the DSP thread.

If we're attaching GUI slots to worker threads, what is promising that
they are not being fired with the same priority of their parent or
worse, that the slot is being fired in the worker queue (which may be
the |blocking| you are referring to... )

Slots/signals shouldn't be used in DSP at all. This is one reason we
need to rework the instrument API, so that we can get rid of
signals/slots in instruments.

@tresf
Member
tresf commented Jan 14, 2015

If the main mixer thread is completely non-blocking, then it obviously can't be blocked by lower-priority threads.

I still don't understand what you are saying... The idea of "blocking" is waiting for something on the thread. What does that have to do with unrelated simultaneous threads (unless of course they share blocking objects)?

Nope, changing a volume knob is not a blocking action.

You'd know better than me, but what I'm describing is changing the model, and what I'm not sure of is how a GUI thread changes a model that's attached to a separate thread. But that's not what I'm observing anyway... What I'm observing is that the GUI is destroying performance (which we are all well aware of by now), and when we start talking performance (ALSA vs. SDL), I have to step back and ask why it is so bad to begin with... :)

Slots/signals shouldn't be used in DSP at all. This is one reason we need to rework the instrument API, so that we can get rid of signals/slots in instruments.

Are you sure? As long as the slots are executed on the GUI thread, I don't see why this would matter. I've seen some wrap these in timer threads, although I think Qt provides something called Qt::QueuedConnection http://qt-project.org/doc/qt-4.8/qt.html#ConnectionType-enum which should help put slots on the right parent thread (right?). This stuff is new to me in the C++ world, but the concept is pretty consistent across interfaces.

... but perhaps I'm over-thinking this as AutomatableModelView.cpp#L170 seems to already make use of the queued connections.

-Tres
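For what it's worth, the mechanism behind a queued connection can be illustrated in plain C++: the emitting thread never runs the slot itself, it just pushes a callable onto a queue that the receiving thread drains in its own event loop. This is a simplified sketch of that idea only, not LMMS or Qt code:

```cpp
#include <functional>
#include <mutex>
#include <queue>
#include <utility>

// Minimal stand-in for what a queued connection does: the sender enqueues
// a callable; only the owning (e.g. GUI) thread ever executes it.
class EventQueue
{
public:
    // Called from any thread; returns immediately without running fn.
    void post( std::function<void()> fn )
    {
        std::lock_guard<std::mutex> lock( m_mutex );
        m_queue.push( std::move( fn ) );
    }

    // Called periodically from the receiving thread's event loop.
    void drain()
    {
        std::queue<std::function<void()>> pending;
        {
            std::lock_guard<std::mutex> lock( m_mutex );
            std::swap( pending, m_queue );
        }
        while( !pending.empty() )
        {
            pending.front()();
            pending.pop();
        }
    }

private:
    std::mutex m_mutex;
    std::queue<std::function<void()>> m_queue;
};
```

A DSP thread would call post() and return immediately; the GUI thread calls drain() on each iteration of its loop. Note that the lock and the allocation inside post() are exactly the kind of costs the RT-safety argument later in this thread objects to; Qt's queued connections hide similar machinery.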

@tresf
Member
tresf commented Jan 14, 2015

Ok, I can confirm that the fade button's activate() function is getting put on HighPriority on song playback, but not on instrument playback... On instrument playback it matches the priority of MainWindow... Now to find out where.... 😈

-Tres

@tresf
Member
tresf commented Jan 14, 2015

So the offending line was advice you gave Dave (@curlymorphic). We should change that back and also be more mindful of the performance impact of these decisions. Queuing up GUI changes is what we want, and if SIGNALS/SLOTS do that for us naturally, we should be using them until we roll our own equivalent.

@curlymorphic
Contributor

SIGNALS/SLOTS

These are the way Qt does block-free cross-thread communication.

I was going to try and give an explanation, but rather than giving misinformation I will link to the four pages in the Qt manual that explain it so much more clearly than I ever could.

http://qt-project.org/doc/qt-4.8/threads-qobject.html
http://qt-project.org/doc/qt-4.8/threads-reentrancy.html
http://qt-project.org/doc/qt-4.8/threads-synchronizing.html
http://qt-project.org/doc/qt-4.8/qthread.html

Every time I read them, I learn (or rather understand) something different.

Thread priority does seem to differ between operating systems. I wrote some code as an exercise in threads over Christmas, basically a simple synth app aimed at Android. My aim was to have a UI thread and an audio thread. On Android, using HighestPriority on the audio thread gave a large improvement, but on my desktop the difference between HighestPriority and Normal was minimal. I have a feeling that may be down to the difference in processing power between my phone and PC.

Queuing up GUI changes is what we want, and if SIGNALS/SLOTS do that for us naturally, we should be using them until we roll our own equivalent.

My opinion +1

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/14/2015 05:34 PM, Tres Finocchiaro wrote:

If the main mixer thread is completely non-blocking, then it
obviously can't be blocked by lower-priority threads.

I still don't understand what you are saying... The idea of "blocking"
is waiting for something on the thread. What does that have to do with
an unrelated simultaneous threads (unless of course they use mutual
blocking objects)?

You answered your own question...

Nope, changing a volume knob is not a blocking action.

You'd know better than me, but what I'm describing is changing the
model, and what I'm not sure of is how a GUI thread changes a model
that's attached to a separate thread, but that's not what I'm
observing anyway...

Changing the model is just a matter of calling a function in the model.
If a function is called in thread 1, and the model is owned by thread 2,
the function is still executed in thread 1, regardless of the ownership
of the model. Thread 3 may read the value of the model with another
function, and that function is then executed in thread 3.

Slots/signals shouldn't be used in DSP at all. This is one reason
we need to rework the instrument API, so that we can get rid of
signals/slots in instruments.

Are you sure? So as long as the slots are executed on the GUI thread,
I don't see why this would matter.

It matters because they bring pointless overhead. It may not be a big
deal if it's something you only have to call once, but considering we
have automations, a signal attached to a knob can easily get called
every period...

Calling a slot-function from a signal takes a LOT more CPU than simply
calling the same function directly. The overhead comes from the
metaobject-system and genericity of the slot/signal system.

Also they are obviously not RT-safe...

@tresf
Member
tresf commented Jan 15, 2015

Calling a slot-function from a signal takes a LOT more CPU than simply calling the same function directly. The overhead comes from the metaobject-system and genericity of the slot/signal system.

So as long as that added CPU cycle isn't blocking our playback, we accept it and move on. We can improve this later, but for now, recommending to fire a priority-4 GUI animation on our processing thread is bad and needs to be fixed.

@tresf
Member
tresf commented Jan 15, 2015

It matters because they bring pointless overhead.

When you spend time in another language trying to manage GUI changes from separate threads, you quickly realize this Qt signal/slot design is tremendously efficient. I'm not sure how many more of these mistakes have been made, but we need to eradicate them. Queuing GUI changes on the GUI thread is what we want and need. Unless you have a better proposal for this, we will be changing this back and looking for other similar mistakes in our code.

@tresf
Member
tresf commented Jan 15, 2015

Also they are obviously not RT-safe...

No, this is not obvious to me, but I'd rather talk about that in the futuremap, rather than here, since this is a stable-1.2 problem, and RT safety is a stable-2.0 and/or beyond problem.

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 04:33 PM, Tres Finocchiaro wrote:

Calling a slot-function from a signal takes a LOT more CPU than
simply calling the same function directly. The overhead comes from
the metaobject-system and genericity of the slot/signal system.

So as long as that added CPU cycle isn't blocking our playback, we
accept it and move on.

We don't need to accept it.

We can improve this later, but for now recommending to fire a priority
4 GUI animation on our processing thread is bad and needs to be fixed.

That's not what is recommended. The function called from the
instrumentTrack shouldn't do any animation, it should just set a flag on
the GUI element, letting it know it needs to fire. The GUI element
should still handle the animation in its own thread.
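The flag-based alternative being described could look something like the following in plain C++ (a sketch with hypothetical names, not actual LMMS code): the DSP side only flips a lock-free atomic flag, and the GUI element checks it on its own periodic update.

```cpp
#include <atomic>

// Hypothetical GUI element: the DSP side only sets a lock-free flag;
// all drawing happens later, on the GUI thread's own timer tick.
class FadeIndicator
{
public:
    // Safe to call from a DSP thread: a relaxed atomic store with
    // no locks, no allocations, and no metaobject machinery.
    void notePlayed()
    {
        m_needsFlash.store( true, std::memory_order_relaxed );
    }

    // Called from the GUI thread's periodic update (e.g. a repaint timer).
    // Returns true if an animation should be started this frame, and
    // clears the flag so the animation fires only once per notification.
    bool consumeFlash()
    {
        return m_needsFlash.exchange( false, std::memory_order_relaxed );
    }

private:
    std::atomic<bool> m_needsFlash{ false };
};
```

The trade-off versus a queued slot call is that nothing is queued at all: notifications coalesce into a single flag, which is fine for "something changed, repaint me" but loses per-event data.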

@tresf
Member
tresf commented Jan 15, 2015

it should just set a flag on the GUI element, letting it know it needs to fire

But this is what the slot does. The slot sets a flag to fire the function on a separate thread. What part of this do you not understand? Why hold 1000 bools when we can use the tried and true technologies we already have in place?

@tresf
Member
tresf commented Jan 15, 2015

We don't need to accept it.

Yes, we do. It is our code design, and eradicating it blindly will plague the codebase. We need to come to an agreement on this, and rewriting all of our signals and slots before 1.2 isn't a possibility.

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 04:38 PM, Tres Finocchiaro wrote:

It matters because they bring pointless overhead.

When you spend time in another language trying to manage GUI changes
from separate threads you quickly realize this Qt signal/slot design
is tremendously efficient.

No, no it really is not. Each signal/slot call causes extra overhead and
there's no guarantee of RT-safety. Convenience (for the programmer) is
not the same thing as efficiency.

I'm not sure how many more of these mistakes have been made, but we
need to eradicate them. Queuing GUI changes on the GUI thread is what
we want and need. Unless you have a better proposal for this, we will
be changing this back and looking for other similar mistakes in our code.

I have. Signals/slots should be used for internal GUI stuff only. If we
need something executed in another thread, then a message should be
passed to that thread to let it know to execute it. Signals/slots should
never be used in any DSP context.

If we want to someday get LMMS in a shape where it can be used with
Jack, these are the things we need to do. Qt is, as a whole, not very
suitable for RT-safe code - many Qt classes are non-RT-safe, meaning
they can't be used in DSP threads.

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 04:47 PM, Tres Finocchiaro wrote:

it should just set a flag on the GUI element, letting it know it
needs to fire

But this is what the slot does.

Inefficiently, with extra overhead, with no guarantee of RT-safety. What
part of this do you not understand?

The slot sets a flag to fire the function on a separate thread. What
part of this do you not understand? Why hold 1000 bool's when we can
use the tried and trued technologies we already have in place?

Because performance matters more than doing things the easy way.

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 04:39 PM, Tres Finocchiaro wrote:

Also they are obviously not RT-safe...

No, this is not obvious to me, but I'd rather talk about that in the
futuremap, rather than here, since this is a stable-1.2 problem, and
RT safety is a stable-2.0 and/or beyond problem.

Even if we can't make LMMS completely RT-safe right now, that doesn't
mean it's OK to just completely ignore it. We should be doing what we
can to improve it right now so that there's less work to do in the future.

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 04:49 PM, Tres Finocchiaro wrote:

We don't need to accept it.

Yes, we do. It is our code design and eradicating it blindly will
plague the code. We need to come to an agreement on this, and
rewriting all of our signals and slots before 1.2 isn't a possibility.

No one is talking about eradicating all signals and slots. They just
shouldn't be used in any DSP context. When they're confined to GUI only,
or transient one time events (eg. button clicks, moving a FX channel)
they're fine. They shouldn't be used in anything that is linked to DSP,
such as models.

Also, if a signal only ever connects to one slot, then it's pointless
overhead that can be better handled with a direct function call (or
message-passing).

@tresf
Member
tresf commented Jan 15, 2015

Also, if a signal only ever connects to one slot, then it's pointless overhead that can be better handled with a direct function call (or message-passing).

That is opinion, not fact. How slots are architected from an API perspective has merits, regardless of how many callers a slot has.

Inefficiently, with extra overhead, with no guarantee of RT-safety. What part of this do you not understand?

Your hypothetical bool has no guarantee of RT-safety either, so please don't use that as an argument. The inefficiencies you mention are very well accepted methods for notifying the GUI of changes. A good idea today is better than a great idea tomorrow, and firing this event directly is unarguably a terrible idea. What part of that do you not understand?

@tresf
Member
tresf commented Jan 15, 2015

Because performance matters more than doing things the easy way.

But performance is the exact issue here, is it not? In your experience, how common is it for us to fire slots from DSP code currently?

no guarantee of RT-safety

I'd really like to shelve this conversation for a rainy day, but it seems to be a focus of most of your arguments...

Internally, the underlying technology behind signals and slots uses atomic variables for thread safety. I don't know much about real-time safety, but I'm a bit confused as to how notifying the GUI that something has changed, so it can perform a GUI update or animation, breaks any atomic operations. I'm not saying it IS safe; I just fail to understand what is unsafe about this.

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 05:12 PM, Tres Finocchiaro wrote:

Also, if a signal only ever connects to one slot, then it's
pointless overhead that can be better handled with a direct
function call (or message-passing).

That is opinion, not fact.

Uh, no, it's a fact that signals/slots cause extra overhead. It's even
stated in the Qt documentation.

How slots are architected from an API perspective has merits,
regardless of how many callers it has.

No. The only real value of signals/slots is if you a) have a signal and
b) want to connect it to an arbitrary number of slots, which may change
dynamically. If you always know the exact slots you want to call, then
it's always cheaper to just call the slots directly, rather than invoke
the metaobject system and all the overhead that entails.

Inefficiently, with extra overhead, with no guarantee of
RT-safety. What part of this do you not understand?

Your hypothetical |bool| has no guarantee of RT-safety either, so
please don't use that as an argument.

First of all, you're the one who brought up "bools" here (I don't even
know what your "bool" is supposed to signify here). So don't go putting
words in my mouth.

Secondly, if I'm specifically talking about making something RT-safe,
then it's pretty much safe to assume that I'm talking about a mechanism
that is RT-safe.

The inefficiencies you mention are very well accepted methods for
notify the GUI of changes.

Weasel words. Accepted by whom?

A good idea today is better than a great idea tomorrow and firing this
event directly is unarguably a terrible idea. What part of that do you
not understand?

Using signals/slots here is not a good idea today or any day.

I'm not sure why you're getting so argumentative about this one issue
when you're not even maintaining this part of the codebase...

@tresf
Member
tresf commented Jan 15, 2015

No one is talking about eradicating all signals and slots. They just shouldn't be used in any DSP context.

Is the fear that the amount of time it takes Qt to queue this function call is slowing down your DSP code? The reason I ask is that we need to queue it either way, and writing our own sleep threads for this stuff is a nightmare when Qt already provides the queuing framework for it. I'm not trying to pretend that I know more about signals/slots, but I can say with all certainty that queuing the call on the receiver thread is exactly what we want to achieve in these cases. I cannot speak to the number of use cases we have for this, or how many DSP->slot calls we make today.

I'm not sure why you're getting so argumentative about this one issue when you're not even maintaining this part of the codebase...

The same reason you are arguing: we want the software to be better. P.S. We get nowhere when we start pointing fingers.

@tresf
Member
tresf commented Jan 15, 2015

So don't go putting words in my mouth.

You used the word "flag"; I assumed a bool, but there's no reason to argue semantics over a hypothetical.

The inefficiencies you mention are very well accepted methods for notifying the GUI of changes.

Weasel words. Accepted by whom?

This is one of the selling points of the technology to prevent us from rolling out our own thread queue management, but here's a decent article on it:

https://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 05:21 PM, Tres Finocchiaro wrote:

Because performance matters more than doing things the easy way.

But performance is the exact issue here, is it not? In your
experience, how common is it for us to fire slots from DSP code currently?

Look at anywhere where a model's dataChanged() signal is connected to a
slot in a DSP object. This is all too common in our effect & instrument
plugins. The result of this is that when you automate the knob that is
connected to that model, that signal gets fired every tick...

With instrument plugins, the way our instrument API currently works,
it's sometimes unavoidable to use slots, since the whole instrument
processing is separated to per-note threadjobs, and there is no single
place that gets called periodically in most of our instruments. That's
one reason the instrument API needs a redesign/rewrite.

With single-streamed instruments and effects, there's no reason to use
signals/slots, though.

no guarantee of RT-safety

I'd really like to shelve this conversation for a rainy day, but it
seems to be a focus of most of your arguments...

Internally, the underlying technology around Signals and Slots use
atomic variables for thread safety. I don't know much about real-time
safety, but I'm a bit confused as to how notifying the GUI that
something has changed and to perform a GUI update or animation breaks
any atomic operations. I'm not saying it IS safe, I just fail to
understand what is unsafe about this.

RT-safety is not just about atomic operations. There are many ways to
violate RT-safety. One is locking (which is what atomics are used to
avoid), another is dynamic allocations (causing context switches, which
causes latencies), etc...

The MemoryManager strives to solve the latter issue (get rid of
mallocs), but it can't be applied to Qt's internal classes (we can't
change Qt classes/functions to use the MemoryManager). Most Qt classes
are not RT-safe: their methods may invoke dynamic allocations. For
instance, if you add an item to a QVector that is not large enough to
contain it, then the QVector gets reallocated, using dynamic allocation
(malloc) which violates RT-safety.

So those Qt classes should not be used in DSP threads, unless we can
ensure that no unsafe operations are performed.

When it comes to signals & slots, I'm pretty confident that there's no
guarantee of RT-safety within them. A signal invokes the metaobject
system, and uses Qt's internals to figure out what functions it needs to
call or schedule. If simple things like QVectors or QStrings do not
operate RT-safely, then what about a complex mechanism like signals/slots...
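The reallocation hazard described above is easy to demonstrate with std::vector, which behaves like QVector in this respect: growing past capacity triggers a malloc, so RT code must reserve capacity up front. A minimal illustration (not LMMS code):

```cpp
#include <cstddef>
#include <vector>

// Returns true if appending `n` elements forced at least one reallocation.
// In an RT context, a reallocation means an unbounded-latency malloc on
// the audio thread.
bool appendingReallocates( std::vector<float>& buf, std::size_t n )
{
    const float* before = buf.data();
    for( std::size_t i = 0; i < n; ++i )
    {
        buf.push_back( 0.0f );
    }
    return buf.data() != before; // pointer moved => buffer was reallocated
}
```

Reserving the worst-case size during setup (buf.reserve(maxFrames)) keeps the audio-thread appends allocation-free. This is the discipline the MemoryManager enforces for LMMS's own allocations, but, as noted above, it cannot be imposed on Qt's internal containers.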

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 05:31 PM, Tres Finocchiaro wrote:

No one is talking about eradicating all signals and slots. They
just shouldn't be used in any DSP context.

Is the fear that the amount of time it takes QT to queue this function
call is slowing down your DSP code?

It's extra overhead that we don't need. The metaobject system gets
invoked with each signal: the fired signal looks up the connections from
the metaobject system, gets pointers to the slot functions, then either
calls those functions directly or informs (in some way) another thread
that it needs to call those functions.

When there's only one function, that we know at compile-time, that we
want to queue, then invoking the metaobject system is pointless, because
we already know the function we want to queue.

So we should just pass the message to the object, then inform the GUI
thread that the object needs to be updated.

Reason being is that we need to queue it either way and writing our
own sleep threads for this stuff is a nightmare when QT already
provides the queuing framework for it. I'm not trying to pretend that
I know more about signals/slots, but I can say with all certainty that
queuing the call on the receiver thread is exactly what we want to
achieve in these cases, but I cannot speak to the amount of use-cases
we have for this, how many DSP->slot calls we make today.

Yeah, there's lots of things that are easy with Qt, but when you want to
make things RT-safe, they're unusable. Sometimes we just have to go
through some extra effort in order to make things better.

I'm not sure why you're getting so argumentative about this one
issue when you're not even maintaining this part of the codebase...

The same reason you are arguing, we want the software better. P.S. We
get nowhere when we star pointing fingers.

Fair enough, it's just that you used to trust my judgement when I say
something isn't the best way to do things...
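An RT-safe version of the message passing being proposed here typically takes the form of a fixed-capacity single-producer/single-consumer ring buffer: the DSP thread pushes messages without locking or allocating, and the GUI thread polls. A bare-bones sketch, with a hypothetical message type and no claim to match any actual LMMS design:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Hypothetical payload: "model targetId changed to value".
struct UiMessage { int targetId; float value; };

// Fixed-capacity SPSC queue: push() is wait-free and allocation-free,
// so it is safe to call from an audio callback.
class MessageRing
{
public:
    bool push( const UiMessage& msg )
    {
        const std::size_t head = m_head.load( std::memory_order_relaxed );
        const std::size_t next = ( head + 1 ) % N;
        if( next == m_tail.load( std::memory_order_acquire ) )
        {
            return false; // full: drop rather than block the DSP thread
        }
        m_buf[head] = msg;
        m_head.store( next, std::memory_order_release );
        return true;
    }

    // Called from the GUI thread; returns false when the ring is empty.
    bool pop( UiMessage& out )
    {
        const std::size_t tail = m_tail.load( std::memory_order_relaxed );
        if( tail == m_head.load( std::memory_order_acquire ) )
        {
            return false; // empty
        }
        out = m_buf[tail];
        m_tail.store( ( tail + 1 ) % N, std::memory_order_release );
        return true;
    }

private:
    static constexpr std::size_t N = 256;
    std::array<UiMessage, N> m_buf;
    std::atomic<std::size_t> m_head{ 0 };
    std::atomic<std::size_t> m_tail{ 0 };
};
```

This is the shape of the "simple message-passing mechanism" under discussion: unlike a signal, nothing here can lock, allocate, or invoke metaobject lookups on the producer side.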

@tresf
Member
tresf commented Jan 15, 2015

you used to trust my judgement when I say something isn't the best way to do things...

Well I hope we can agree that 1. We're not all perfect and 2. We all make mistakes.

In the case of the offending code, a mistake was made and I'm trying to understand 1. Why. 2. Has this mistake been made before.

In this case, the GUI needs notification of a change. I'm still uncertain as to the problem, though. To your argument, the GUI is Qt and will always fall victim to RT-safety issues. Instead of arguing against signals/slots, shouldn't we be advocating for fewer signals/slots for the non-GUI actions?

This seems like a good use case for signals and slots from a thread queue perspective and I'd also like to review other places where we're updating the GUI on the wrong thread where we could use a delayed queue schedule to increase performance.

Right now, I can't use the Qt scroll bars without tremendous performance issues, and I feel that if we can isolate the places where the GUI is being updated from DSP code, we can really boost performance. As an interim step, I'd like to use the slots we have available today as a mechanism for queuing these events. Much like the memory manager, drastic changes take time and testing, so my instinct is to put this back the way it was in terms of emitting a slot call (we should explicitly queue it -- and other similar uses -- on the receiving thread, as slots allow).

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 06:01 PM, Tres Finocchiaro wrote:

you used to trust my judgement when I say something isn't the best
way to do things...

Well I hope we can agree that 1. We're not all perfect and 2. We all
make mistakes.

In the case of the offending code, a mistake was made and I'm trying
to understand 1. Why. 2. Has this mistake been made before.

In this case, the GUI needs notification of a change. I'm still
uncertain as to the problem though. To your argument, the GUI is Qt
and will always fall victim to RT safety issues. Instead of arguing
against the signals/slots, shouldn't we be advocating for less
signals/slots for the non-GUI actions?

This seems like a good use case for signals and slots from a thread
queue perspective and I'd also like to review other places where we're
updating the GUI on the wrong thread where we could use a delayed
queue schedule to increase performance.

Right now, I can't use the Qt scroll bars without tremendous
performance issues and I feel that if we can isolate places where the
GUI is being updated from DSP code, we can really boost performance.
As an interim step, I'd like to use the slots we have available today
as a mechanism for queuing these events. Much like the memory manager,
drastic changes take time and testing, so my instinct is to put this
back the way it was in terms of emitting a slot call (we should
explicitly queue it -- and other similar uses -- on the receiving
thread, as slots allow).

Implementing a simple message-passing mechanism shouldn't be a drastic
operation.

@tresf
Member
tresf commented Jan 15, 2015

Implementing a simple message-passing mechanism shouldn't be a drastic operation.

Well, can we agree that if this isn't implemented for 1.2, the slot call is going back in?

@diizy
Contributor
diizy commented Jan 15, 2015

On 01/15/2015 06:23 PM, Tres Finocchiaro wrote:

Implementing a simple message-passing mechanism shouldn't be a
drastic operation.

Well, can we agree that if this isn't implemented for 1.2, the slot
call is going back in?

sure

@tresf
Member
tresf commented Apr 6, 2015

So back on topic... if we switch the default to something like PulseAudio or SDL, I think we'll be mimicking what @tobydox did in 054abf7, where he moved SDL up in the Mixer.cpp code, which should be a relatively small change.

This is more of a group decision than it is a bug. We should decide what to do and either change this or leave it alone. There doesn't seem to be any platform-dependent logic in there, so whatever we decide may have a cascading effect on other platforms.

@crank123
crank123 commented Apr 7, 2015

Agreed

@eagles051387
Contributor

If toby brought SDL up in the mixer code, should we do that across the
board?

On Tue, Apr 7, 2015 at 2:49 AM, Gabe Bauer notifications@github.com wrote:

Agreed


Reply to this email directly or view it on GitHub
#1600 (comment).

Jonathan Aquilina

@tresf
Member
tresf commented Apr 7, 2015

If toby brought SDL up in the mixer code, should we do that across the board?

This was already identified and asked above.

@curlymorphic
Contributor

Unfortunately I can't help with the decision; I can't make up my mind which is the better approach. My inner geek is shouting to leave it as it is. The rest of me says SDL: you don't get a second chance to make a first impression.

Really sorry I can't be of more help :(

@tobydox
Member
tobydox commented Apr 9, 2015

Suggestion: runtime detection of whether PulseAudio (not to be confused with PortAudio!) is running. If not, always prefer ALSA, because all other backends like SDL introduce an additional layer, causing additional problems and/or latencies.

The AudioDevice class could have another virtual method which returns whether the backend-specific implementation is available/running on that platform. Before probing the AudioDevice classes in a certain order, the Mixer could call this new method for all backends (BTW: we should rename AudioDevice to AudioBackend), and if one returns true, it's chosen first.
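For illustration, the probing idea above could be sketched like this. Note that AudioBackend, isAvailable(), and chooseBackend() are hypothetical names for this sketch, not the actual LMMS API:

```cpp
#include <string>
#include <vector>

// Hypothetical base class per the suggestion: each backend reports
// whether it is actually usable on this system before the Mixer tries it.
class AudioBackend
{
public:
    virtual ~AudioBackend() {}
    virtual std::string name() const = 0;
    // e.g. a PulseAudio backend would check for a running PA daemon,
    // an ALSA backend for an openable PCM device, etc.
    virtual bool isAvailable() const = 0;
};

// Pick the first backend in the probe order that claims availability;
// fall back to the first entry (the compile-time default) if none does.
AudioBackend* chooseBackend( const std::vector<AudioBackend*>& probeOrder )
{
    for( AudioBackend* backend : probeOrder )
    {
        if( backend->isAvailable() )
        {
            return backend;
        }
    }
    return probeOrder.empty() ? nullptr : probeOrder.front();
}
```

With this shape, "prefer ALSA unless PulseAudio is running" falls out of the probe order plus each backend's own availability check, instead of being hard-coded.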

@tresf
Member
tresf commented Apr 9, 2015

Always prefer ALSA because all other backends like SDL introduce an additional layer causing additional problems and/or latencies.

Normally, I'd agree with this statement, but as can be observed in my testing, our ALSA instructions cause more work for the end user and, in the average-user use cases, produced equal or worse output (despite slightly better performance); see the MOMO64 TEST above.

PulseAudio seems to be the black sheep here because it has notoriously been awful, but that's not necessarily the case anymore: in testing it performed just as well as SDL on the modern *buntu flavors out of the box (which wasn't the case years ago -- thus we've removed the warning). SDL, on the other hand, seems to "just work" on all platforms.

I'm not trying to form a camp around a particular setting, I just have a hard time telling people ALSA should be default when our own wiki instructions don't work out of the box.

do not mix up with PortAudio

Speaking of PortAudio... If we could fix the issues with DirectSound on Windows, that would be a viable default back-end as well I feel... :)

@unfa
Contributor
unfa commented Apr 11, 2015

On 12 Jan 2015 15:23, "Vesa V" notifications@github.com wrote:

On 01/12/2015 03:33 PM, Tres Finocchiaro wrote:

So should PulseAudio be default then?

PA is the worst possible choice for audio work. Latencies are horrible,
etc. If SDL uses PA as a backend then SDL probably will have problems
with latencies as well.

I see absolutely no point in encouraging users to use inferior backends
just to make it "easier" for them - it'll only hurt them in the long
run, they'll have to learn how to setup their system anyway. I
personally have no interest whatsoever in creating a "toy DAW for
dummies". The whole mentality where everything has to "just work" - even
at the cost of functionality - is the problem with software these days,
everything getting dumbed down... we can expect more from our users.
I think that appealing to beginners is also a good thing, as they might
grow up and become a valuable part of our community. And if they install 5
DAWs to try out, and one has distorted sound from the beginning, our
future with that user might cease to exist.

On the other hand, Ardour, the most professional GPL DAW I know, offers no
choice here: you either use JACK, or no Ardour for you! However, that's highly
understandable when you look at what Ardour does, being able to capture 60
tracks of audio simultaneously and doing advanced routing of the signals,
both internally and externally (JACK makes these the same thing).

I have personally used KX Studio for years now, and it uses JACK by default. No
PulseAudio. However, I find myself needing the ALSA backend, as LMMS
tends to lose its connection with JACK every time I open the Zyn GUI. I have
to use an ALSA-JACK bridge with increased quality (the KX Studio defaults are
cheap on CPU but introduce quantization noise and aliasing), and that costs
me around 10% of CPU time and often introduces bad xruns.

I never figured out what the "device" box does in ALSA backend and what to
put there. A combo box would be great help indeed, but I'd prefer the JACK
backend to be reliable instead.

Also the ability to change audio backend without restarting LMMS would be
great.


Reply to this email directly or view it on GitHub.

@Spekular
Contributor
@Umcaruje
Member

I never figured out what the "device" box does in ALSA backend and what to
put there.

The device box is there so you can select the desired sound card to use with ALSA.

You can see a list of all your devices by running aplay -l in the terminal:

screenshot from 2015-04-11 15 13 21

Then you put the name of the desired sound card into LMMS in the hw:X,X format:
screenshot from 2015-04-11 15 20 15

Hope that helps.

@michaelgregorius
Contributor

I have implemented the selection of the ALSA device via a combo box. Please check pull request #2135 to see how it looks. Thanks!

@midi-pascal
Contributor

👍

@michaelgregorius michaelgregorius added a commit to michaelgregorius/lmms that referenced this issue Jun 27, 2015
@michaelgregorius michaelgregorius Fixes most of stuff found in Wallacoloo's code review for #1600
Removal of a superfluous include in AudioAlsaSetupWidget.cpp

Removal of the function "bool hasCapabilities(char *device_name)" which
was not used anyway. It implemented a test for ALSA device capabilities
needed by LMMS (SND_PCM_ACCESS_RW_INTERLEAVED, SND_PCM_FORMAT_S16_LE,
etc.).

Corrected header name in AudioAlsaSetupWidget.h.

Created an implementation file for AudioDeviceSetupWidget to make more
clear that it's part of the GUI.

Fix build for builds that use Port Audio. The setup widget of
AudioPortAudio.h still inherited from AudioDevice::setupWidget instead
of the new AudioDeviceSetupWidget.
5a8dce2
@Umcaruje Umcaruje added the ux label Jul 4, 2015
@sunnystormy

On my budget ASUS and Dell Inspiron, I've needed to switch to SDL in order to get sound to play properly. Prior to doing so, some of the complimentary "cool songs" wouldn't play properly (ALSA with distorted, buggy sound, sometimes inaudible). My ASUS has an Ivy Bridge Intel chip, and my Dell a Kaveri AMD. I'm also running Debian Jessie on both machines.

TLDR: +1 for SDL as default.

@tresf
Member
tresf commented Aug 27, 2015

Can I get a new poll of the SDL vs. ALSA vs. whatever?

Last we asked, most agreed, and a few strongly disagreed, that SDL should be the default on Linux.

I did some low-performance benchmarking on Ubuntu via #1600 (comment) and found SDL was our lowest common denominator for a new Linux workstation, with minimal setup fuss and decent performance in those tests (sorry, I didn't test other distros).

So I'd like to be able to either make this change or close this issue out entirely. Your vote counts (if you use Linux/Unix). :)

@Wallacoloo
Member

@tresf I apologize in advance: I am not going to read this entire thread so my statement may be redundant.

Quote: Diizy

Don't use PulseAudio.

My understanding from the ALSA wikipedia page is that ALSA is a kernel component. Therefore it seems a totally valid assumption that it should work on any desktop Linux system. So I agree with @diizy's first comment. It seems like SDL working on a system with broken ALSA support only serves to mask the underlying issues that the user is really having, and doing that is likely to only cause more pain in the future.

OTOH, and speaking as a developer of this generally frightening codebase, it's not unlikely that the bugs lie in our ALSA implementation. Statistically speaking, if SDL seems to be more reliable than ALSA, then sure, my vote is: default to SDL. It's a trivial thing to change down the road if ALSA support improves, or if we can dynamically detect when a backend is fundamentally broken & fallback to SDL (what is the source of the distortion, anyway? Buffer underruns?)

@michaelgregorius
Contributor

OTOH, and speaking as a developer of this generally frightening codebase, it's not unlikely that the bugs lie in our ALSA implementation.

@Wallacoloo I have done some performance analysis with Valgrind; you can find the results in #2295. A lot of inefficient code unrelated to ALSA is also being called, but even after neutralizing those calls the CPU load remains high. Could the busy waiting in AudioAlsa::run be the cause? If so, is it possible to use ALSA with callbacks, similar to the JACK API?

It also seems that we feed ALSA signed 16-bit integers instead of the float values directly. The floats are converted in AudioDevice::convertToS16, which is quite inefficient, so one thing to consider is feeding ALSA the float data directly instead of doing the conversion.

@ThomasJClark ThomasJClark added a commit to ThomasJClark/lmms that referenced this issue Sep 12, 2015
@michaelgregorius @ThomasJClark michaelgregorius + ThomasJClark Fixes most of stuff found in Wallacoloo's code review for #1600
Removal of a superfluous include in AudioAlsaSetupWidget.cpp

Removal of the function "bool hasCapabilities(char *device_name)" which
was not used anyway. It implemented a test for ALSA device capabilities
needed by LMMS (SND_PCM_ACCESS_RW_INTERLEAVED, SND_PCM_FORMAT_S16_LE,
etc.).

Corrected header name in AudioAlsaSetupWidget.h.

Created an implementation file for AudioDeviceSetupWidget to make more
clear that it's part of the GUI.

Fix build for builds that use Port Audio. The setup widget of
AudioPortAudio.h still inherited from AudioDevice::setupWidget instead
of the new AudioDeviceSetupWidget.
4a2536a
@tresf tresf added a commit that closed this issue Sep 18, 2015
@tresf tresf Make SDL default for all platforms
Closes #1600
1bb276b
@tresf tresf closed this in 1bb276b Sep 18, 2015
@tresf
Member
tresf commented Sep 18, 2015

I've made SDL the default for all platforms on a fresh install. Since this is a controversial topic, I'd like to end this thread with this quote from @Wallacoloo:

It's a trivial thing to change down the road if ALSA support improves, or if we can dynamically detect when a backend is fundamentally broken and fall back to SDL

@zonkmachine
Member

👍
