FastMCP server for audio transcription using Whisper.
Install via Smithery:

```bash
npx -y @smithery/cli install @tapiocapioca/whisper-mcp-server --client claude
```

Or install manually with pip:

```bash
pip install whisper-mcp-server
```

Then configure MCP:
```json
{
  "mcpServers": {
    "whisper": {
      "command": "python",
      "args": ["-m", "whisper_mcp"]
    }
  }
}
```

Requirements:

- Python 3.10+
- Whisper server running on localhost:9102
Start the container:

```bash
cd brainery-containers
docker-compose up -d whisper-server
```
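Once the container is up, you can confirm that something is listening on port 9102 before wiring up the MCP server. A minimal sketch using only the Python standard library (host and port come from the requirement above; adjust if your setup differs):

```python
import socket

def whisper_server_reachable(host: str = "localhost", port: int = 9102, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the Whisper server port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("Whisper server reachable:", whisper_server_reachable())
```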
The `transcribe_audio` tool transcribes an audio file using Whisper.

Parameters:
- `file_path` (required): Absolute path to the audio file
- `language` (optional): Language code, or 'auto' for detection (default: auto)
- `model` (optional): Whisper model size, one of tiny/base/small/medium/large (default: base)
Supported formats: MP3, M4A, WAV, OGG, FLAC, WEBM
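A client can screen files by extension against this list before calling the tool; a small illustrative sketch (the helper below is not part of this server, just an example guard):

```python
from pathlib import Path

# Extensions matching the supported formats listed above.
SUPPORTED_EXTENSIONS = {".mp3", ".m4a", ".wav", ".ogg", ".flac", ".webm"}

def is_supported_audio(file_path: str) -> bool:
    """Return True if the file extension is one of the supported audio formats."""
    return Path(file_path).suffix.lower() in SUPPORTED_EXTENSIONS

print(is_supported_audio("/path/to/audio.mp3"))   # True
print(is_supported_audio("/path/to/notes.txt"))   # False
```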
Returns:
```json
{
  "text": "Transcribed text content...",
  "language": "en",
  "duration": 120.5,
  "segments": 15,
  "processing_time_ms": 3500
}
```
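A client consuming this payload might turn it into a short human-readable summary. A self-contained sketch, assuming the fields shown above (the sample values simply mirror the example return):

```python
# Sample payload mirroring the return schema above (values are illustrative).
result = {
    "text": "Transcribed text content...",
    "language": "en",
    "duration": 120.5,
    "segments": 15,
    "processing_time_ms": 3500,
}

minutes, seconds = divmod(result["duration"], 60)
print(f"Detected language: {result['language']}")
print(f"Audio length: {int(minutes)}m {seconds:.1f}s in {result['segments']} segments")
print(f"Processed in {result['processing_time_ms'] / 1000:.1f}s")
print(result["text"])
```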
Examples:

```python
# Transcribe with auto language detection
result = transcribe_audio(
    file_path="/path/to/audio.mp3",
    language="auto",
    model="base"
)

# Transcribe Italian audio with a larger model
result = transcribe_audio(
    file_path="/path/to/podcast.m4a",
    language="it",
    model="medium"
)
```

License: MIT