RAM usage depends largely on model size, since each model must be loaded into RAM (multiplied by the number of runners, because models are not shared in memory).
For example, with whisper-ctranslate2 (which uses faster-whisper, a CPU-friendly backend), the models tend to be larger than those provided for openai-whisper:
- large-v3: ~3 GB
- medium: ~1.5 GB
- tiny: ~75 MB
Of course, transcription quality degrades as the model size decreases.
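The scaling described above can be sketched as a back-of-the-envelope estimate (the helper and the size table are illustrative, using the approximate sizes quoted in this thread; actual usage adds runtime overhead):

```python
# Rough lower-bound RAM estimate: each runner loads its own copy of the
# model, so peak usage scales with model size times the number of runners.
# Sizes are the approximate faster-whisper model sizes mentioned above.
MODEL_SIZES_MB = {"large-v3": 3000, "medium": 1500, "tiny": 75}

def estimated_ram_mb(model: str, runners: int) -> int:
    """Estimate total model RAM in MB for `runners` concurrent runners."""
    return MODEL_SIZES_MB[model] * runners

print(estimated_ram_mb("large-v3", 2))  # two runners, each holding ~3 GB
```

So two concurrent runners using large-v3 would need on the order of 6 GB for the models alone, before any per-job working memory.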
Describe the problem to be solved
v6.2.0-RC1 comes with an incredible feature: automatic subtitles generation.
But the models it uses can consume a lot of CPU and RAM, and I did not see any option to limit their resource usage (as we can with video transcoding).
Describe the solution you would like
Would it be possible to add options to limit the CPU and RAM used by subtitle generation?
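As a purely illustrative sketch of what such options might look like, mirroring the existing transcoding limits (these keys are hypothetical and do not exist in PeerTube's configuration):

```yaml
# Hypothetical configuration sketch -- illustration only, not real keys.
transcription:
  enabled: true
  model: medium        # trade some quality for lower RAM (see sizes above)
  max_runners: 1       # cap concurrent runners, since each loads its own model
  threads: 2           # cap CPU threads per transcription job
```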