Describe the bug
An external source (from Stereotools) connects to /live1 as OGG at 320 kbps.
CPU load increases steadily over about an hour (4-core Azure B4ms instance) and the stream then starts failing over, as shown in the logs below. (Ignore the fade in the logs; I removed it from the example script below as it's not relevant.)
2021/09/04 10:15:07 [switch_9:3] Switch to fade.final.
2021/09/04 10:15:57 [switch_1:3] Switch to mksafe with transition.
2021/09/04 10:15:59 [switch_1:3] Switch to input.harbor_0 with transition.
2021/09/04 10:15:59 [lang.deprecated:2] WARNING: "fade.initial" is deprecated and will be removed in future version. Please use "fade.in" instead.
2021/09/04 10:15:59 [lang.deprecated:2] WARNING: "fade.final" is deprecated and will be removed in future version. Please use "fade.out" instead.
2021/09/04 10:15:59 [switch_10:3] Switch to fade.final.
2021/09/04 10:16:52 [switch_1:3] Switch to mksafe with transition.
2021/09/04 10:16:54 [switch_1:3] Switch to input.harbor_0 with transition.
2021/09/04 10:16:54 [lang.deprecated:2] WARNING: "fade.initial" is deprecated and will be removed in future version. Please use "fade.in" instead.
2021/09/04 10:16:54 [lang.deprecated:2] WARNING: "fade.final" is deprecated and will be removed in future version. Please use "fade.out" instead.
2021/09/04 10:16:54 [switch_11:3] Switch to fade.final.
2021/09/04 10:17:21 [switch_1:3] Switch to mksafe with transition.
2021/09/04 10:17:23 [switch_1:3] Switch to input.harbor_0 with transition.
2021/09/04 10:17:23 [lang.deprecated:2] WARNING: "fade.initial" is deprecated and will be removed in future version. Please use "fade.in" instead.
2021/09/04 10:17:23 [lang.deprecated:2] WARNING: "fade.final" is deprecated and will be removed in future version. Please use "fade.out" instead.
2021/09/04 10:17:23 [switch_12:3] Switch to fade.final.
2021/09/04 10:17:55 [switch_1:3] Switch to mksafe with transition.
2021/09/04 10:17:57 [switch_1:3] Switch to input.harbor_0 with transition.
2021/09/04 10:17:57 [lang.deprecated:2] WARNING: "fade.initial" is deprecated and will be removed in future version. Please use "fade.in" instead.
2021/09/04 10:17:57 [lang.deprecated:2] WARNING: "fade.final" is deprecated and will be removed in future version. Please use "fade.out" instead.
2021/09/04 10:17:57 [switch_13:3] Switch to fade.final.
To Reproduce
#!/usr/bin/liquidsoap
# General settings
log.level.set(3)
log.stdout.set(true)
# Harbor HTTP server settings
set("harbor.bind_addrs",["0.0.0.0"])
set("harbor.max_connections",10)
set("harbor.timeout",10.)
set("harbor.verbose",false)
# Audio settings
set("frame.audio.samplerate",44100)
set("frame.audio.channels",2)
set("audio.converter.samplerate.libsamplerate.quality","fast")
# Clocks settings
set("root.max_latency",5.)
set("clock.allow_streaming_errors",false)
#####################
# START OF PROCESSING
#####################
# Incoming icecast/shoutcast stream on /live1
live1 = input.harbor("live1",port=8050,password="xxx", max=20.)
# Incoming icecast/shoutcast stream on /live2
live2 = input.harbor("live2",port=8050,password="xxx", max=20.)
# Incoming icecast/shoutcast stream on /automation
automation = input.harbor("automation",port=8050,password="xxx", max=20.)
automation = mksafe(automation)
# Define the radio stream
radio = fallback(track_sensitive=false, [live1, live2, automation])
# Stream it out
output.icecast(
  %mp3(bitrate=128),
  host="icecast", port=8001,
  password="xxx", mount="xyz-mp3",
  icy_metadata="true", public=false,
  radio)
output.icecast(
  %ffmpeg(format="adts",
    %audio(
      channels=2,
      samplerate=44100,
      codec="aac",
      b="196k",
      profile="aac_low")),
  host="icecast", port=8001,
  password="xxx", mount="xyz-aac",
  icy_metadata="true", public=false,
  radio)
Hi @scottgrobinson. I'm looking at this right now. I don't see a CPU increase locally. Something interesting in your logs is the regular disconnections. Are they caused by a networking issue, or is liquidsoap falling behind real time, something you would see as a catchup log message?
Could you send some logs with the log level set to 4? If liquidsoap is catching up with real time, it will end up consuming more CPU. Also, if liquidsoap falls too far behind, it eventually gives up and resets all sources, which could explain the drop in CPU usage as well.
Lastly, if you are using docker/kubernetes, you might want to check the container's allocated CPU resources. Even if the machine has a powerful CPU, if the container is allocated too little of it, liquidsoap may end up not processing data fast enough.
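For reference, raising the verbosity is a one-line change near the top of the reproduction script, using the same `log.level.set` style the script already uses (a sketch, not the full script):

```
# Bump verbosity from 3 to 4 to capture more detail,
# which should help show whether liquidsoap is falling behind real time
log.level.set(4)
log.stdout.set(true)
```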
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Graph of increase @ https://ibb.co/d5nWZz0
Expected behavior
CPU load stays stable
Version details
Version 2.0.0-beta3
Install method
Docker savonet/liquidsoap:v2.0.0-beta3