
Adds support for FasterWhisper #81

Merged · 3 commits · Apr 16, 2023
Conversation

@alienware (Contributor) commented Mar 5, 2023

WHAT

  • Installs the faster_whisper package side by side with whisper
  • Adds a faster boolean parameter to the transcribe and detect-language APIs to switch between models at request time
  • Adds utils for the faster_whisper module to support whisper-style helpers
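The request-time switch described above can be sketched roughly like this (a minimal illustration of the idea, not the PR's actual code; function names and return values are hypothetical stand-ins for the two inference backends):

```python
# Hypothetical sketch: a `faster` boolean selects the backend per request,
# so whisper and faster_whisper can be installed side by side.

def transcribe_with_whisper(audio_path):
    # Stand-in for the original whisper inference path.
    return f"whisper:{audio_path}"

def transcribe_with_faster_whisper(audio_path):
    # Stand-in for the faster_whisper inference path.
    return f"faster_whisper:{audio_path}"

def transcribe(audio_path, faster=False):
    # Dispatch on the request-time flag rather than a server-wide setting.
    if faster:
        return transcribe_with_faster_whisper(audio_path)
    return transcribe_with_whisper(audio_path)
```

Keeping the flag per request means a single running service can serve both backends without a restart.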

WHY

CHANGELOG

  • Adds Docker Compose support
  • Better Poetry cache handling in the Dockerfile
  • Whisper model download and conversion for faster_whisper is done at the Docker image build stage
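The build-stage conversion mentioned above could look roughly like the following Dockerfile fragment (a sketch assembled from details in this thread; the ARG name and exact invocation are assumptions, not the PR's actual Dockerfile):

```Dockerfile
# Download and convert the Whisper model during the image build,
# so containers start without a lengthy one-time conversion step.
ARG ASR_MODEL=base
COPY faster_whisper_model_conversion.sh .
RUN chmod +x faster_whisper_model_conversion.sh \
    && ./faster_whisper_model_conversion.sh ${ASR_MODEL}
```

Baking the converted model into the image trades a larger image for faster, more predictable container startup.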

* Adds faster param to support faster_whisper instead of whisper
* Adds utils for faster_whisper to support whisper style helpers
@ayancey (Sponsor, Collaborator) commented Mar 6, 2023

Really looking forward to this!

@ahmetoner (Owner)

Thanks for your contribution, I will merge it soon.

@alienware alienware marked this pull request as ready for review March 11, 2023 19:16
@tammo0 commented Mar 22, 2023

Hi! Thanks for your work!

I built a Docker image from alienware's repo on Windows 11 and ran into some trouble.
It finally worked once I followed the information in #83, changed line 21 of Dockerfile.gpu to && $POETRY_VENV/bin/pip install "poetry==1.4.0", and kept an eye on the line endings of faster_whisper_model_conversion.sh, because it doesn't get executed on Windows right now. Adding this at line 40 did the trick for me:

RUN sed -i 's/\r$//' faster_whisper_model_conversion.sh && \
chmod +x faster_whisper_model_conversion.sh

@tarjei commented Mar 23, 2023

Hi, I tried out this PR but got the error:

RuntimeError: Unable to open file 'model.bin' in model '/root/.cache/faster_whisper'

after running podman build -t fast-whisper . && podman run -d -p 9000:9000 -e ASR_MODEL=large-v2 localhost/fast-whisper

@tarjei commented Mar 23, 2023

Update. I got around this issue by adding this to the Dockerfile:

+ENV ASR_MODEL="large-v2"
+
+# TODO: Skip based on ENV variable
+RUN ./faster_whisper_model_conversion.sh ${ASR_MODEL}

But after starting the container, there is no response on port 9000 nor any errors in the logs. Any tips on where to debug this?

@alienware (Contributor, Author)

@tarjei My guess would be to use the podman logs command to check the container logs.

@ahmetoner ahmetoner self-requested a review March 25, 2023 21:05
@jordimas

You may also consider using https://pypi.org/project/whisper-ctranslate2/

@blundercode

Would love support for faster whisper!

@ahmetoner ahmetoner changed the base branch from main to faster-whisper April 16, 2023 14:13
@ahmetoner ahmetoner merged commit 3f89ed6 into ahmetoner:faster-whisper Apr 16, 2023
@ahmetoner (Owner)

I will make some changes on ahmetoner:faster-whisper. After finishing, I will merge it into the main branch.

7 participants