FFmpeg is a very hungry application: left unchecked, it occupies every available thread and all processing cycles.
The resulting context switching greatly increases execution latency for other processes, such as the Fireshare web servers serving the UI and videos.
This means that while transcodes are active, the web interface can become quite slow, and videos can fail to be served.
To Reproduce
Run a transcode, then use the web UI. The problem is more severe the less powerful the host system is.
Expected behavior
On my system I usually separate UI/service/realtime processes and worker threads onto different cores. This is not possible when the jobs I'd like to separate all run inside a single Docker container.
I don't really care HOW this is solved; a simple option would be a setting that limits how many threads transcodes can use.
Even better, allow the user to pin transcodes to certain cores, and use taskset when starting transcode processes to enforce this, so the remaining cores always stay available for low-latency tasks.
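As a rough sketch of what this could look like (the `threads` and `cpuset` settings are hypothetical; Fireshare does not expose them today), the transcode command could be assembled with an optional `taskset -c` prefix and an `-threads` cap for ffmpeg:

```python
import shlex

def build_transcode_cmd(input_path, output_path, threads=None, cpuset=None):
    """Build an ffmpeg transcode command line.

    threads: optional int, caps ffmpeg's worker threads via -threads.
    cpuset:  optional str like "2,3", pins the process via taskset -c.
    Both are hypothetical settings, not existing Fireshare options.
    """
    cmd = []
    if cpuset:
        # taskset restricts the process (and its children) to these cores,
        # leaving the remaining cores free for latency-sensitive work.
        cmd += ["taskset", "-c", cpuset]
    cmd += ["ffmpeg", "-i", input_path]
    if threads is not None:
        # -threads limits how many encoding threads ffmpeg spawns.
        cmd += ["-threads", str(threads)]
    cmd += ["-c:v", "libx264", output_path]
    return cmd

if __name__ == "__main__":
    print(shlex.join(build_transcode_cmd("in.mkv", "out.mp4",
                                         threads=2, cpuset="2,3")))
```

With both options unset the command degrades to a plain `ffmpeg` invocation, so the default behavior would be unchanged.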