What should be the AWS Machine configuration for Whisper Large model deployment #50

Open
monk1337 opened this issue Apr 26, 2023 · 0 comments

Comments


monk1337 commented Apr 26, 2023

Thank you, @sanchit-gandhi, for your fantastic work. I would appreciate your opinion on configuring an AWS machine for deploying Hugging Face's Whisper large model (JAX version), and on storing both the audio and the streamed text output.

My end goal is to deploy the model with streaming output, but for now I am setting up the current model without streaming functionality. What would be the optimal AWS configuration, taking into account the future scope of the project?

  1. If I decide to use your Whisper JAX version, what would be the best configuration for the large model, taking into account the future streaming component?
  2. If I choose another implementation with streaming support, what would be the optimal configuration for the large model?
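
For reference, this is roughly how I plan to load and run the model, following the whisper-jax README (the `FlaxWhisperPipline` class, the `openai/whisper-large-v2` checkpoint, and the half-precision/batch-size options are taken from that README; treat this as a sketch rather than my final deployment code):

```python
import jax.numpy as jnp
from whisper_jax import FlaxWhisperPipline

# Load the large checkpoint in half precision to reduce memory,
# with batching enabled for long-form audio (per the whisper-jax README).
pipeline = FlaxWhisperPipline(
    "openai/whisper-large-v2",
    dtype=jnp.bfloat16,
    batch_size=16,
)

# The first call triggers JIT compilation (slow, done once);
# subsequent calls reuse the compiled function and are fast.
text = pipeline("audio.mp3")
```

So the machine would need enough accelerator memory to hold the large checkpoint in bfloat16 plus the batched activations, which is the part I am unsure how to size on AWS.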