[QUESTION] how to use batch size? #36

Open · 1 task done
targor opened this issue Feb 17, 2025 · 2 comments
Labels: enhancement (New feature or request)

Comments


targor commented Feb 17, 2025

Is this a new feature request?

  • I have searched the existing issues

Wanted change

  • How can I use the large-v2 model with fp16 and a batch size of 8 in Docker Compose? I do not see any documentation about a batch-size flag; maybe it is not possible to set it?

  • Is it possible to set and use VAD models?

Reason for change

Possibly better performance, according to the original faster-whisper documentation.

Proposed code change

No response

targor added the enhancement (New feature or request) label on Feb 17, 2025

Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.

@mg-dev25

@targor Hi, if you want to play with this parameter you have to use the faster-whisper main project directly. This project only uses wyoming-faster-whisper to build its image, and the ENV variables only replace these values:

```python
parser.add_argument("--model", required=True, help="Name of faster-whisper model to use (or auto)")
parser.add_argument("--beam-size", type=int, default=5, help="Size of beam during decoding (0 for auto)")
parser.add_argument("--language", help="Default language to set for transcription")
```
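For reference, a minimal sketch of what calling faster-whisper directly could look like for this use case (assumed here: faster-whisper >= 1.1.0, which ships BatchedInferencePipeline, a CUDA GPU for fp16, and a placeholder audio path; batch_size=8 and the VAD flag are just example values, not options exposed by this image):

```python
# Minimal sketch, assuming faster-whisper >= 1.1.0 and a CUDA GPU.
# "audio.wav" is a placeholder input file.
from faster_whisper import WhisperModel, BatchedInferencePipeline

# Load large-v2 with fp16 weights (compute_type="float16").
model = WhisperModel("large-v2", device="cuda", compute_type="float16")

# Wrap the model for batched inference; batch_size is passed per transcribe() call.
batched_model = BatchedInferencePipeline(model=model)
segments, info = batched_model.transcribe(
    "audio.wav",
    batch_size=8,
    vad_filter=True,  # uses the bundled Silero VAD; arbitrary external VAD models are not pluggable here
)

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Regarding the second question: as far as I know, faster-whisper only bundles Silero VAD behind the vad_filter option, so there is no setting for swapping in other VAD models.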
