LiquidSoap Segfaulting #876
@multi023 Seems like something is flooding your logs. Try running …
@SlvrEagle23 We have looked, and there are the .coreXXXX files. This happens with each radio station. Apparently it happens with the latest version of the Docker install, on Ubuntu 18.x. With 16.x it did not happen, nor did it happen with the previous Docker version before updating. Thank you. Greetings,
Hmm, those look like core dump files; my first guess is that something is up with our fork of Icecast, but:
It only seems to be happening a couple of times a day though... Can you check the output of the command @SlvrEagle23 gave above? (…)
We only use ShoutCast. I'm going to post screenshots with the times.
Hmm, yeah, I should have noticed from those log files. Well, that rules out our fork of Icecast as the problem. We'll need the output of …
Hi, do you have an email address where I can send you all of this?
OK, it appears that Liquidsoap is segfaulting; we've found some recent issues detailing segfaults: savonet/liquidsoap#640. So here's what you can do for now:
My best guess is to look at your memory usage over time; I know some configurations will segfault instead of receiving a SIGKILL, and I'm not sure how Docker behaves in OOM situations. I noticed that things basically run fine, then ~4 stations segfault within 30 minutes, then everything is fine for another ~4 hours. Kind of odd.
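On the OOM theory above, a minimal sketch for correlating the crash times with memory usage. The container name, the log path, and the one-hour window are all assumptions; adjust them to your install:

```shell
#!/bin/sh
# Sketch: log the stations container's memory usage once a minute for
# an hour, so crash times can be lined up against memory growth.
# "azuracast_stations" is the default container name in a typical
# AzuraCast Docker install (an assumption here).
i=0
while [ "$i" -lt 60 ]; do
    printf '%s %s\n' "$(date -u +%FT%TZ)" \
        "$(docker stats --no-stream --format '{{.MemUsage}}' azuracast_stations)" \
        >> /tmp/stations-mem.log
    sleep 60
    i=$((i + 1))
done

# Also worth checking whether the kernel OOM killer fired on the host:
# dmesg -T | grep -i 'out of memory'
```

If the segfaults line up with a memory ceiling, that would point at an OOM-style failure rather than a bug in Liquidsoap itself.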
#884 also has to do with LiquidSoap segfaulting. Possibly due to very large playlists? I'll see if I can't build a very large playlist (1800+ songs). Edit: my entire music library is only 1534 songs... it'll have to do.
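For reproducing the large-playlist theory without a large music library, a synthetic playlist can be generated. This is just a sketch: the output filename, the 1800-track count, and the media paths are all made up for illustration:

```shell
#!/bin/sh
# Sketch: generate an oversized M3U playlist to stress-test playlist
# parsing. The track paths are dummy entries and need not exist on
# disk for a parsing test.
OUT=big_playlist.m3u
COUNT=1800

{
    echo '#EXTM3U'
    i=1
    while [ "$i" -le "$COUNT" ]; do
        # One EXTINF metadata line plus one file path per track.
        printf '#EXTINF:180,Test Track %d\n' "$i"
        printf '/var/azuracast/stations/test/track_%04d.mp3\n' "$i"
        i=$((i + 1))
    done
} > "$OUT"

echo "Wrote $(wc -l < "$OUT") lines to $OUT"
```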
@CodeSteele If you need a large library, I can give you one. But I'm not sure it was Liquidsoap. Have you seen this? #884 (comment)
@MyTheValentinus hmm, 14 hours in, no crash/disconnect for me. LiquidSoap seems to be hovering around 90 MB of memory usage for me. I did have disconnects when connecting over ngrok, but no crashes (so either ngrok isn't kind to long-held connections or we're sensitive to internet quality issues). How many stations are you running? Do they all have 1700+ songs?
Hello, |
@MyTheValentinus no luck replicating the segfaults... I've been running an entire week with some 1324 tracks with no problems.
#946 reported this:
Interruption of the stream caused LiquidSoap to segfault? That's an interesting one...
@CodeSteele Update was on 6-11-2018 |
We're waiting for ocaml-duppy 0.7.4 to be published; once that happens, we'll push a patch that updates to the latest duppy, which should hopefully resolve this issue.
Hello, |
@MyTheValentinus I'm hoping that the segfault being reported is causing both problems (entirely possible), and we're not chasing multiple segfaults. |
Okay, I'm waiting for the update ;) Can you send me a message on Telegram when you are ready to test? I can test on a production server.
I've got two PRs ready for when 0.7.4 becomes available here: https://opam.ocaml.org/packages/duppy/
0.8.0 is available x) I don't understand the duppy project's architecture.
Yep just saw that pop... hmm OK, going to update to that then. |
I have run the docker.sh update. Is that it? Is just that enough? Where is the duppy library downloaded from? Via composer?
The …
If the update takes too long to come in, it should be possible to provide your own package definition, …
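One way to avoid waiting on the repository would be pinning duppy directly to its source. This is a hedged sketch for the OPAM 1.2 series mentioned later in this thread; the git URL and tag are illustrative, so check the savonet/ocaml-duppy repository for the real tag names:

```shell
# Pin duppy to a specific upstream tag instead of waiting for the
# opam repository to catch up (URL and tag are assumptions):
opam pin add duppy https://github.com/savonet/ocaml-duppy.git#0.8.0

# Rebuild liquidsoap against the pinned library:
opam reinstall liquidsoap

# Confirm which duppy version is actually installed:
opam list duppy
```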
The PR for …
Yeah, it's in now :-) |
Oh, doh, that makes sense; we're on OPAM 1.2 (sorry, this is all new to me). Considering we'll have to look at OPAM 2.0 in the future, I'll see what we have to do to upgrade, both for our Docker installs (shouldn't be a big deal) and our traditional installs (we only really support Ubuntu 16.04 and 18.04 there, so it shouldn't be too bad). May do that on another pass, though. Thanks for getting that backported to OPAM 1.2. :D
@multi023 and others who may have been experiencing this: I've tested the changes made upstream to the …

Note: Make sure that your …

Thanks to @CodeSteele for helping investigate this elusive issue, and a big thanks to @toots for working with us and OPAM to make sure the fixes made it out (and in general for Liquidsoap <3). We'll be updating to OPAM 2.0 shortly, which should make future fixes of this nature faster.
Hello, Thanks for the work |
Hi, Updated and testing. Thank you very much for the work :) |
Hello, Crash on all stations |
I also have two crashes on all the radios. Now they occur more or less at six-hour intervals.
Sorry to hear. If y'all have logs I'm available to look at them.
Hello @toots |
Hey @MyTheValentinus! Liquidsoap logs, yeah, for sure, although it looks like your logs don't have much, apparently. The usual info would be the liquidsoap version and, here, making sure that you're using …
That should give you the stack trace for all threads at crash time.
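For the .coreXXXX files mentioned earlier in the thread, an all-threads backtrace can also be pulled out with gdb. This is a sketch only: the core filename is a placeholder, and the binary path and the availability of gdb inside the stations container are assumptions:

```shell
#!/bin/sh
# Make sure core dumps are enabled for future crashes:
ulimit -c unlimited

# Identify which binary produced an existing dump
# (core.1234 is a hypothetical filename):
file core.1234

# Dump a backtrace for every thread at crash time; the liquidsoap
# binary path is a guess and may differ in your image:
gdb -batch \
    -ex 'thread apply all bt' \
    /usr/local/bin/liquidsoap core.1234
```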
When exporting liquidsoap.log: …
More AzuraCast-oriented, I think: …
Ok so the liquidsoap logs indicate that it's still using …
Oh yeah, I didn't see that. Sorry for the false report.
OK, this time it was the right version:
Test in progress...
@multi023 can you confirm that you're on 0.8.0 too? Curious if both of you ended up on duppy 0.7.3 somehow. |
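To answer that kind of version question without guessing, the running container can be queried directly. A hedged sketch, assuming the default container name and that `liquidsoap` and `opam` are on the image's PATH:

```shell
# Both the container name and the presence of opam in the image are
# assumptions; adjust to your setup.
docker exec azuracast_stations liquidsoap --version
docker exec azuracast_stations opam list duppy
```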
@multi023 @MyTheValentinus Just as a reminder, you should check your … Your compose file SHOULD say this:

```yaml
services:
  web:
    # ...many lines...
  stations:
    container_name: azuracast_stations
    image: azuracast/azuracast_radio:latest
```

And not this:

```yaml
services:
  web:
    # ...many lines...
  stations:
    container_name: azuracast_stations
    image: azuracast/azuracast_stations:latest
```
Yeah, sure @SlvrEagle23.
60 hours after the last update... no crash! Amazing! I think it is fixed for real! Thanks @CodeSteele @SlvrEagle23 @toots
Excellent! Closing this issue as resolved. Thanks to all involved for the excellent collaborative effort. |
Now all is perfect! Thank youuu @SlvrEagle23 @CodeSteele @toots
This issue has not been updated in over a year, so it is being closed for further discussion. If you are experiencing a similar issue, please create a new issue. Thank you! |
Installation method
I am using "Docker"
Every day the disk space decreases by approximately 2 GB, without uploading new files.
I have attached a capture from one day to the next.
Why does this happen? How can I solve it?
Thank you. Greetings,
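One way to narrow down where roughly 2 GB per day is going would be to snapshot directory sizes daily and diff the snapshots. A sketch under stated assumptions: the directory list is a guess at a typical AzuraCast Docker host (station logs and Docker volumes are the usual suspects), so adjust it to your setup:

```shell
#!/bin/sh
# Sketch: snapshot the sizes of likely culprits once a day (e.g. from
# cron), then diff consecutive snapshots to see which tree is growing.
SNAP="disk_snapshot_$(date +%Y%m%d).txt"
: > "$SNAP"

# Directory list is an assumption for a typical AzuraCast Docker host.
for dir in /var/azuracast /var/lib/docker "$HOME"; do
    [ -d "$dir" ] && du -s "$dir" 2>/dev/null >> "$SNAP"
done
echo "Wrote snapshot to $SNAP"

# A Docker-level view (images, containers, volumes, build cache)
# is also worth checking:
# docker system df
```

Comparing two snapshots with `diff` then points at the tree responsible for the daily growth.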