feat: add POST /v1/audio/transcriptions and /v1/audio/speech endpoints #44
Add OpenAI-compatible audio transcription and speech endpoints to the gateway, proxying requests through any_llm's `atranscription()` and `aspeech()` functions.

Transcription endpoint (`POST /v1/audio/transcriptions`):

- Accepts `multipart/form-data` with an audio file upload
- Supports optional `language`, `prompt`, `response_format`, and `temperature` fields
- Returns a JSON transcription response

Speech endpoint (`POST /v1/audio/speech`):

- Accepts a JSON body with `model`, `input` text, and `voice`
- Returns raw binary audio with the correct Content-Type per format
- Supports optional `instructions`, `response_format`, and `speed` fields

Both endpoints include full auth, rate limiting, budget validation, and usage logging. Added the `python-multipart` dependency for file uploads.

- 22 integration tests covering auth, usage logging, error handling, optional fields, and content type mapping
- OpenAPI spec regenerated

Depends on: mozilla-ai/any-llm#1036
Summary

- Adds OpenAI-compatible audio transcription (`POST /v1/audio/transcriptions`) and speech/TTS (`POST /v1/audio/speech`) endpoints
- Proxies requests through any_llm's `atranscription()` and `aspeech()` functions
- Adds the `python-multipart` dependency for file upload support

Details

Transcription endpoint (`POST /v1/audio/transcriptions`):

- Accepts `multipart/form-data` with audio file upload via FastAPI `UploadFile`
- Fields: `model` (required), `file` (required), `language`, `prompt`, `response_format`, `temperature`, `user`
- Proxies to `atranscription()` via any_llm

Speech endpoint (`POST /v1/audio/speech`):

- JSON body fields: `model`, `input`, `voice` (required); `instructions`, `response_format`, `speed`, `user` (optional)
- Returns a `StreamingResponse` with raw binary audio
- Maps the requested format to a Content-Type: `audio/mpeg` (mp3), `audio/opus`, `audio/aac`, `audio/flac`, `audio/wav`, `audio/L16` (pcm)

Both endpoints follow the standard gateway flow: auth, rate limiting, budget validation, usage logging, and error handling.
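The format-to-Content-Type mapping listed above could be sketched as a simple lookup table. The mapping values come from this PR's description; the dictionary name and the fallback for an unrecognized format are assumptions:

```python
# Sketch of the speech endpoint's response_format -> Content-Type mapping.
# Pairs are taken from the PR description; the octet-stream fallback is
# an assumption, not confirmed behavior of the gateway.
AUDIO_CONTENT_TYPES = {
    "mp3": "audio/mpeg",
    "opus": "audio/opus",
    "aac": "audio/aac",
    "flac": "audio/flac",
    "wav": "audio/wav",
    "pcm": "audio/L16",
}

def content_type_for(response_format: str) -> str:
    """Return the Content-Type header value for a requested audio format."""
    return AUDIO_CONTENT_TYPES.get(response_format, "application/octet-stream")
```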
New dependency:

- `python-multipart>=0.0.18` (required by FastAPI for `UploadFile`/`File`/`Form`)

Tests:

- `tests/integration/test_audio_endpoint.py` — 22 tests

Dependencies: Requires mozilla-ai/any-llm#1036 for `atranscription()`/`aspeech()` support in the SDK.