How can I use the large-v2 model with fp16 and a batch size of 8 in Docker Compose? I don't see any documentation about flags for the batch size; maybe it isn't possible to set it?

Is it possible to set and use VAD models?
Reason for change
Possibly better performance, according to the original faster-whisper documentation.
Proposed code change
No response
@targor Hi, if you want to play with this parameter you have to use the faster-whisper main project directly. This project only uses wyoming-faster-whisper to build its image, and the ENV variables only replace these values:
```python
parser.add_argument("--model", required=True, help="Name of faster-whisper model to use (or auto)")
parser.add_argument("--beam-size", type=int, default=5, help="Size of beam during decoding (0 for auto)")
parser.add_argument("--language", help="Default language to set for transcription")
```
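For reference, a minimal compose sketch showing how those three arguments map onto environment variables. The variable names (`WHISPER_MODEL`, `WHISPER_BEAM`, `WHISPER_LANG`) are the ones the linuxserver.io faster-whisper image documents; check the image README for your version, since only these are wired through and there is no variable for batch size or compute type:

```yaml
services:
  faster-whisper:
    image: lscr.io/linuxserver/faster-whisper:latest
    environment:
      - WHISPER_MODEL=large-v2   # passed through as --model
      - WHISPER_BEAM=5           # passed through as --beam-size
      - WHISPER_LANG=en          # passed through as --language
    ports:
      - "10300:10300"
```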
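If you do go to the faster-whisper project directly, a rough sketch of what was asked for (large-v2, fp16, batch size 8, VAD) could look like the following. This assumes a recent faster-whisper release that ships `BatchedInferencePipeline`, a CUDA device, and a local `audio.mp3`; it is an illustration, not something this image exposes:

```python
from faster_whisper import WhisperModel, BatchedInferencePipeline

# fp16 inference on GPU via compute_type="float16"
model = WhisperModel("large-v2", device="cuda", compute_type="float16")

# batched pipeline lets you pass a batch size per transcription call
batched_model = BatchedInferencePipeline(model=model)
segments, info = batched_model.transcribe(
    "audio.mp3",      # hypothetical input file
    batch_size=8,
    vad_filter=True,  # enable the built-in Silero VAD
)

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

None of these parameters are reachable through the wyoming-faster-whisper wrapper this image builds on, which is why no compose flag exists for them.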