This repository provides a pre-built docker image for OpenAI's Whisper model.
To use the model, you can pull the pre-built Docker image:

```sh
docker pull ghcr.io/wgbh-mla/whisper-bot:latest
```

or build the image yourself:

```sh
docker build -t whisper-bot .
```
Then run Whisper on a file, mounting the current directory and the Whisper model cache:

```sh
docker run --rm -itv $(pwd):/root -v $HOME/.cache/whisper/:/root/.cache/whisper/ ghcr.io/wgbh-mla/whisper-bot:latest whisper [WHISPER_ARGS] FILENAME
```
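For example, to transcribe a single file from the current directory with the default settings (`audio.mp3` is a placeholder filename):

```sh
# Transcribe audio.mp3 (placeholder filename) using the default settings
docker run --rm -itv $(pwd):/root -v $HOME/.cache/whisper/:/root/.cache/whisper/ \
  ghcr.io/wgbh-mla/whisper-bot:latest whisper audio.mp3
```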
- `--model`: The model to use. Defaults to `base`.
  - Options: `tiny`, `base`, `small`, `medium`, `large`
- `--language`: The language to use. Defaults to `en`.
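For example, to choose a larger model and set the language explicitly (again with a placeholder filename):

```sh
# Use the medium model and English output (audio.mp3 is a placeholder)
docker run --rm -itv $(pwd):/root -v $HOME/.cache/whisper/:/root/.cache/whisper/ \
  ghcr.io/wgbh-mla/whisper-bot:latest whisper --model medium --language en audio.mp3
```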
The full list of arguments can be found by running `whisper --help`.
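Assuming the same invocation pattern as above, the help text can be printed from the container:

```sh
# Show all Whisper CLI options
docker run --rm ghcr.io/wgbh-mla/whisper-bot:latest whisper --help
```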
If you are running on an ARM machine (including the Mac M-series processors), use the `arm64-main` tag to get an ARM-optimized image.
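For example:

```sh
# Pull the ARM-optimized image
docker pull ghcr.io/wgbh-mla/whisper-bot:arm64-main
```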
See the whisper-bot directory for an example of running whisper-bot in a distributed system.
Note: This script is specific to the GBH MLA environment and should only be used as an example.
Created by GBH Media Library & Archives