Insights: BerriAI/litellm
Overview
4 Releases published by 1 person
-
v1.74.9-stable.patch.1
published
Aug 3, 2025 -
v1.74.15.rc.2
published
Aug 5, 2025 -
v1.75.0-nightly
published
Aug 5, 2025 -
v1.75.0.dev2
published
Aug 5, 2025
39 Pull requests merged by 9 people
-
feat(JinaAI): support multimodal embedding models
#13181 merged
Aug 6, 2025 -
fix(streaming_handler.py): include cost in streaming usage object
#13319 merged
Aug 6, 2025 -
[LLM Translation] Fix model group on clientside auth with API calls
#13314 merged
Aug 6, 2025 -
[Feat] - When using custom tags on prometheus allow using wildcard patterns
#13316 merged
Aug 6, 2025 -
[Bug]: Fix Mimetype Resolution Error in Bedrock Document Understanding
#13309 merged
Aug 6, 2025 -
Litellm fix OpenAI spec tools
#13315 merged
Aug 6, 2025 -
[MCP Gateway] refactor mcp guardrails
#13238 merged
Aug 5, 2025 -
Fix double slash issue in SSO login URL construction
#13289 merged
Aug 5, 2025 -
[Proxy server] Add apscheduler log suppress
#13299 merged
Aug 5, 2025 -
[Redis IAM] Change documentation
#13306 merged
Aug 5, 2025 -
[LLM Translation] Fix model group on clientside auth with API calls
#13293 merged
Aug 5, 2025 -
Create New Key - Make Team Field Required for Service Account
#13302 merged
Aug 5, 2025 -
[Feat] Add fireworks gpt-oss models
#13303 merged
Aug 5, 2025 -
New models - add fireworks_ai/glm-4p5 model family
#13297 merged
Aug 5, 2025 -
[New model] add bedrock/us.anthropic.claude-opus-4-1-20250805-v1:0
#13295 merged
Aug 5, 2025 -
[LLM Translation] claude opus 4.1 support for anthropic provider
#13296 merged
Aug 5, 2025 -
[LLM Translation + Coding tools] Added litellm claude code count tokens support
#13261 merged
Aug 5, 2025 -
[Redis] - Add ability to add client through GCP IAM Auth
#13275 merged
Aug 5, 2025 -
Revert "Fix: Langfuse reporting "client closed" error due to httpx client TTL"
#13291 merged
Aug 5, 2025 -
fix(main.py): handle tool being a pydantic object + Fix unpack defs deepcopy issue for bedrock
#13274 merged
Aug 5, 2025 -
Ensure disable_llm_api_endpoints works + Add wildcard model support for 'team-byok' model
#13278 merged
Aug 5, 2025 -
fix OCI linting errors
#13279 merged
Aug 5, 2025 -
[UI] - Add ability to set model alias per key/team
#13276 merged
Aug 5, 2025 -
[LLM Translation] Support /v1/models/{model_id} retrieval
#13268 merged
Aug 5, 2025 -
[LLM Translation] input cost per token higher than $1 test
#13270 merged
Aug 5, 2025 -
[LLM Translation] Correct pricing for web search on 4o-mini
#13269 merged
Aug 5, 2025 -
Fix: Langfuse reporting "client closed" error due to httpx client TTL
#13045 merged
Aug 4, 2025 -
Add GCS bucket caching support
#13122 merged
Aug 4, 2025 -
Support OCI provider
#13206 merged
Aug 4, 2025 -
[Bug Fix] Fix Server root path regression on UI when using "Login"
#13267 merged
Aug 4, 2025 -
Minor formatting changes to token-cost.json
#13244 merged
Aug 4, 2025 -
Bug Fix - Responses API raises error with Gemini Tool Calls in input
#13260 merged
Aug 4, 2025 -
[Bug Fix] OpenAI / Azure Responses API - Add `service_tier`, `safety_identifier` supported params
#13258 merged
Aug 4, 2025 -
[UI] Add team deletion check for teams with keys
#12953 merged
Aug 4, 2025 -
[LLM Translation] Fix Model Usage not having text tokens
#13234 merged
Aug 4, 2025 -
[Proxy] Add OpenShift Support to non root docker image
#13239 merged
Aug 3, 2025 -
Prompt Management - add prompts on UI
#13240 merged
Aug 3, 2025
10 Pull requests opened by 9 people
-
Ensure that `function_call_prompt` extends system messages following its current schema
#13243 opened
Aug 3, 2025 -
fix: improve Gemini API key masking in debug logs
#13272 opened
Aug 4, 2025 -
Fix/create vector store
#13285 opened
Aug 5, 2025 -
Bypass end-user budget check when listing models
#13287 opened
Aug 5, 2025 -
🆕 [FEATURE]: Add WandB by Coreweave Inference Endpoints as a hub.
#13290 opened
Aug 5, 2025 -
Add Custom Tooltips to Model Mapping Table
#13294 opened
Aug 5, 2025 -
Feat/sambanova embeddings
#13308 opened
Aug 5, 2025 -
[MCP Gateway] fix auth on ui for bearer servers
#13312 opened
Aug 5, 2025 -
Litellm dev 08 05 2025 p1
#13317 opened
Aug 6, 2025 -
Litellm dev 08 05 2025 p1 v2
#13320 opened
Aug 6, 2025
59 Issues closed by 7 people
-
[Feature]: Return x-litellm-response-cost header when streaming with include_usage: true
#12689 closed
Aug 6, 2025 -
[Bug]: Mimetype Resolution Error in Bedrock Document Understanding
#12260 closed
Aug 6, 2025 -
[Feature]: `function_to_dict` supporting type unions
#4249 closed
Aug 6, 2025 -
[Feature]: `function_to_dict` supporting defaulted arguments
#4250 closed
Aug 6, 2025 -
[Bug]: inability to use `DeploymentTypedDict` in Pydantic `TypeAdapter` with Python<3.12
#5664 closed
Aug 6, 2025 -
[Bug]: Router not respecting TPM limits in concurrent async calls
#5783 closed
Aug 6, 2025 -
[Feature]: convenience `Enum` for `tool_choice`
#6091 closed
Aug 6, 2025 -
[Bug]: `litellm.text_completion` not respecting `model_list`
#6157 closed
Aug 6, 2025 -
[Feature]: automated nesting when using litellm sdk within langfuse observe() decorated function
#8423 closed
Aug 6, 2025 -
Google Authentication Issue
#8424 closed
Aug 6, 2025 -
API Keys not displaying after creation
#8446 closed
Aug 6, 2025 -
[Feature]: Add Qwen2.5-3B-Instruct
#10437 closed
Aug 6, 2025 -
[Bug]: Azure Audio Transcription & DALL-E (Access denied)
#10440 closed
Aug 6, 2025 -
[Bug]: Random problems with docker images using OpenSSL 3.5
#10444 closed
Aug 6, 2025 -
[Bug]: gemini request body is not logged
#10449 closed
Aug 6, 2025 -
[Feature]: Add "claude-opus-4-1-20250805" in "model_prices_and_context_window.json"
#13305 closed
Aug 5, 2025 -
[Bug]: Claude Code count_tokens API is not implemented in the LiteLLM proxy.
#13252 closed
Aug 5, 2025 -
[Bug]: Tool calling broken from expecting "keys" on OpenAITool (v1.74.9)
#13064 closed
Aug 5, 2025 -
[Bug]: api request works even if DISABLE_LLM_API_ENDPOINTS=true
#13095 closed
Aug 5, 2025 -
[Feature]: (litellm proxy) support /v1/models/{model_id} retrieval
#13128 closed
Aug 5, 2025 -
[Bug]: Enforcement of Admin-Only Route Not Working
#13127 closed
Aug 5, 2025 -
[Feature]: `litellm --version` not requiring `proxy` extra
#7975 closed
Aug 5, 2025 -
[Feature]: Improve Lago integration to better support token based usage billing
#8243 closed
Aug 5, 2025 -
[Feature]: adding `ollama/llama3.2` to cost tracking
#9644 closed
Aug 5, 2025 -
[Bug]: when a new user logs in to the console, there are no models to select
#9678 closed
Aug 5, 2025 -
litellm.BadRequestError: OpenAIException
#10083 closed
Aug 5, 2025 -
[Feature]: Assign Users team role at account creation
#10109 closed
Aug 5, 2025 -
[Bug]: LiteLLM proxy returns empty completion content but refuses to fail
#10144 closed
Aug 5, 2025 -
[Bug]: Cost stays zero in the UI and nothing in response headers for ollama custom models
#10155 closed
Aug 5, 2025 -
[Bug]: Future attached to a different loop in DualCache.async_batch_get_cache()
#10376 closed
Aug 5, 2025 -
[Bug]: dashboard can't add vllm provider model
#10400 closed
Aug 5, 2025 -
Deepseek connection error -- from wren
#10403 closed
Aug 5, 2025 -
[Bug]: Gemini response_format list of unions fails with LiteLLM
#10405 closed
Aug 5, 2025 -
[Bug]: scheduler keeps creating duplicated pass through route
#10408 closed
Aug 5, 2025 -
[Bug]: litellm --health outputs auth_error even when the LITELLM_MASTER_KEY envvar is set
#10412 closed
Aug 5, 2025 -
Fix "gpt-4o-mini-2024-07-18" entry in "model_prices_and_context_window.json"
#12913 closed
Aug 5, 2025 -
[Bug]: Proxy server responses API throws "Invalid content type: <class 'NoneType'>"
#12670 closed
Aug 4, 2025 -
[Bug]: Gemini CLI JSON parse error unexpected character "`"
#12496 closed
Aug 4, 2025 -
[Bug]: <meta> tag in CLAUDE.md causes 403 when using LiteLLM.
#12421 closed
Aug 4, 2025 -
[Bug]: Add service_tier support for /responses API
#13257 closed
Aug 4, 2025 -
[Bug]: Versions after 1.73.2 do not automatically route vendors
#12531 closed
Aug 4, 2025 -
[Bug]: Missing 'voyage' and 'jina' providers on UI
#12574 closed
Aug 4, 2025 -
Organization: is it an enterprise feature?
#12727 closed
Aug 4, 2025 -
[Bug]: Deleting a team via UI does not warn or block when related keys exist
#12947 closed
Aug 4, 2025 -
[Bug]: Usage.completion_tokens_details.text_tokens isn't serialized
#13223 closed
Aug 4, 2025 -
Add "bge-reranker-v2-m3" in "model_prices_and_context_window.json"
#13121 closed
Aug 4, 2025 -
[Bug]: MCP server on local docker network is not allowed
#13132 closed
Aug 4, 2025 -
[Feature]: Set fallback model for database model in UI
#13253 closed
Aug 4, 2025 -
[Bug]: Unable to Remove Datadog Active Logging Callback Configuration
#10375 closed
Aug 4, 2025 -
Issue with LiteLLM
#10377 closed
Aug 4, 2025 -
[Bug]: How to get Gemini 2.5 cache_read and reasoning_token count while using acompletion with Stream ?
#10387 closed
Aug 4, 2025 -
[Bug]: Can't seem to get any Gemini example to work?
#13241 closed
Aug 3, 2025
31 Issues opened by 31 people
-
[Feature]: Support for routing requests to the Cohere `v2/chat` api when submitted through OpenAI SDK
#13311 opened
Aug 5, 2025 -
[Bug]: Model Alias Resolution Causes Permission Check Failure
#13310 opened
Aug 5, 2025 -
[Feature]: SambaNova embeddings
#13307 opened
Aug 5, 2025 -
[Bug]: DBRX - Signature Error with OSS GPT models
#13304 opened
Aug 5, 2025 -
[Feature]: Support reasoning in harmony response format for gpt-oss models
#13300 opened
Aug 5, 2025 -
[Bug]: Empty list of MCP tools returned on streamable connection
#13298 opened
Aug 5, 2025 -
[Bug]: LiteLLM_MCPServerTable.server_name does not exist (missing migration?)
#13288 opened
Aug 5, 2025 -
[Bug]: Listing models should work even when end-user budget is exceeded
#13286 opened
Aug 5, 2025 -
[Bug]: Create Vector Store fail
#13284 opened
Aug 5, 2025 -
[Bug]: APIConnectionError - negative file descriptor with Claude models
#13283 opened
Aug 5, 2025 -
[Bug]: Migration using non-root image failed
#13282 opened
Aug 5, 2025 -
Changes to the Bitnami Chart and Image Catalog
#13281 opened
Aug 5, 2025 -
[Bug]: Missing spend logs
#13280 opened
Aug 5, 2025 -
[Feature]: Load balancing support for multiple credentials in passthrough endpoint
#13277 opened
Aug 5, 2025 -
[Feature]: Add WandB Inference Endpoint
#13273 opened
Aug 4, 2025 -
[Bug]: Unable to use DATABASE_* environment variables for configuring DB
#13266 opened
Aug 4, 2025 -
[Bug]: No cost on client side for streaming
#13264 opened
Aug 4, 2025 -
[Bug]: Librechat outputs gibberish with LiteLLM
#13263 opened
Aug 4, 2025 -
[Feature]: integrate with LiveKit for voice agent
#13262 opened
Aug 4, 2025 -
[Bug]: Missing Editor-Version Header for IDE Auth in GitHub Copilot provider in Proxy Mode
#13256 opened
Aug 4, 2025 -
[Bug]: Http 500 error when trying to call dall-e-3
#13254 opened
Aug 4, 2025 -
[Bug]: Unclosed aiohttp client session when using acompletion with concurrent requests
#13251 opened
Aug 4, 2025 -
[Bug]: Embedding query returns null values on Redis cache retrieval
#13250 opened
Aug 4, 2025 -
[Bug]: litellm.APIConnectionError: OllamaException - Server disconnected without sending a response.
#13249 opened
Aug 4, 2025 -
[Bug]: OpenAI models health check fails
#13248 opened
Aug 4, 2025 -
[Bug]: LiteLLM not supporting the gpt-35-turbo-instruct text model
#13247 opened
Aug 4, 2025 -
[Bug]: Migration failed
#13246 opened
Aug 4, 2025 -
[Bug]: Incomplete spend tracking for non-streaming Bedrock calls when client disconnects
#13245 opened
Aug 4, 2025
106 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
-
Add Vercel AI Gateway provider
#13144 commented on
Aug 5, 2025 • 2 new comments -
feat: Add logo customization for LiteLLM admin UI
#12958 commented on
Aug 4, 2025 • 2 new comments -
Fix: Handle missing 'choices' field in Azure OpenAI responses
#13201 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Container fails when redis semantic cache is configured
#10352 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: DynamoDB integration error on embedding model
#10426 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: Run openHands with bedrock
#10609 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: AttributeError: 'function' object has no attribute 'create' [client]
#10610 commented on
Aug 6, 2025 • 0 new comments -
How to configure a 3rd-party API provider to use a tokenizer based on the model name?
#10614 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: HF inference provider Cohere `huggingface/cohere/CohereLabs/aya-expanse-32b` calls incorrect route
#10619 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Add Cost Tracking for Mistral OCR Models
#10620 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: OTEL authentication header for `arize_phoenix` is missing
#10621 commented on
Aug 6, 2025 • 0 new comments -
Metadata dict unsupported in `token_counter` since v1.68.0
#10623 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: Tool-Calling of Anthropic via Bedrock not working: Expected 'id' to be a string
#13124 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Support MCP Server instructions in the gateway
#13119 commented on
Aug 5, 2025 • 0 new comments -
[Feature]: Add support for custom OAuth2-based LLM provider
#12367 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Error when using bind_tools() with a fine-tuned gemini-2.0-flash-001 model hosted on GCP
#12001 commented on
Aug 5, 2025 • 0 new comments -
[Feature]: Batch Inference support for AWS Bedrock
#12681 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Heavy RAM Usage over time
#12685 commented on
Aug 5, 2025 • 0 new comments -
[BUG]: Bedrock streaming responses buffer in 1024-byte chunks causing choppy output
#11747 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Copy to Clipboard does not work
#12474 commented on
Aug 5, 2025 • 0 new comments -
Intermittent API Connection Errors while sending the async requests to OpenAI (More with the reasoning models)
#12807 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: o1-pro and Cohere models returns "finish_reason": "stop" when using streaming
#12862 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Edited `Model Info` fields via UI are not persisted
#13082 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Total tokens do not count cache create input tokens (LiteLLM Proxy)
#13207 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Job "update_spend" raised an exception
#13204 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: OSError: [Errno 24] Too many open files
#13220 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Running the embedding function of ollama multiple times or in parallel raises Exception.
#9487 commented on
Aug 5, 2025 • 0 new comments -
[Feature]: Support dual stack (IPv4/IPv6)
#9563 commented on
Aug 5, 2025 • 0 new comments -
Update Pangea Guardrail to support new AIDR endpoint
#13160 commented on
Aug 6, 2025 • 0 new comments -
Team Member Permissions Page - Access Column Changes
#13145 commented on
Aug 4, 2025 • 0 new comments -
(Not fully tested, LLM-generated code) fix issue where vertex ai fails to use new credentials after token expiration plus gcloud auth login --update-adc
#13092 commented on
Aug 4, 2025 • 0 new comments -
fix: enable metadata parameter for OpenAI API calls
#12999 commented on
Aug 5, 2025 • 0 new comments -
fix: streaming finish_reason returns tool_calls for o1-pro and Cohere models
#12866 commented on
Aug 4, 2025 • 0 new comments -
[Feature: Added the support to add api_url in custom_guardrail]
#12512 commented on
Aug 5, 2025 • 0 new comments -
feat: add Cloudflare Llama-4 and 3.3 multimodel with advanced features + BGE embeddings
#11037 commented on
Aug 3, 2025 • 0 new comments -
fix(litellm/caching/caching_handler.py): fix kwargs[litellm_params][p…
#10612 commented on
Aug 5, 2025 • 0 new comments -
Fix AzureChatCompletion adding stream_options when stream is False
#10594 commented on
Aug 6, 2025 • 0 new comments -
Fix config.yaml environment_variables never got loaded to os.env
#10547 commented on
Aug 4, 2025 • 0 new comments -
Update Model Pricing Information for Groq Llama 4 models
#10273 commented on
Aug 6, 2025 • 0 new comments -
Remove files generated by FE builds
#10067 commented on
Aug 6, 2025 • 0 new comments -
add tinybird integration
#9911 commented on
Aug 4, 2025 • 0 new comments -
fix bedrock embedding invocations with app inference profiles
#9902 commented on
Aug 5, 2025 • 0 new comments -
Support vllm quantization
#7297 commented on
Aug 4, 2025 • 0 new comments -
1215 sync
#7243 commented on
Aug 5, 2025 • 0 new comments -
add custom health probes in helm chart
#6851 commented on
Aug 5, 2025 • 0 new comments -
(feat) Docker.non_root improvements for handing `nobody` user
#6656 commented on
Aug 4, 2025 • 0 new comments -
Integrating Not Diamond with LiteLLM
#4971 commented on
Aug 4, 2025 • 0 new comments -
add Lite llm docker proxy (Gemini ver)
#2574 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: tool_calls coming back as null while they should be Array
#13055 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Add support for Llamafile provider
#3225 commented on
Aug 6, 2025 • 0 new comments -
New Models/Endpoints/Providers
#4922 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: error while accessing usage section in UI
#6348 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Support for Vertex AI with global region
#9234 commented on
Aug 6, 2025 • 0 new comments -
IndexError: list index out of range
#9682 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Cohere provider appears to ignore system prompt when user prompt is also present.
#13235 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: (PROXY) aws_secret_manager_v2 will fail at startup trying to read DATABASE_URL
#13076 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: recursion issue with langfuse
#12710 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Gemini CLI (v0.1.9) Fails with model_group_alias Setup in LiteLLM v1.73.6.rc.1
#12275 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: hosted_vllm: merge_reasoning_content_in_choices: unsupported operand type(s) for +=: 'NoneType' and 'str'
#9578 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Gpt-images and dall-e-3 problem with cost
#13209 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: missing tzdata in docker images
#13197 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: sync with redis for budget / spend happens too infrequently
#6288 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: MonthlyGlobalSpend doesn't exist, Last30dKeysBySpend relation doesn't exist
#6419 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Anthropic usage prompt cache details missing from logging callbacks when streaming
#7790 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: NotFoundError (404) with Custom API Base for Gemini
#8772 commented on
Aug 4, 2025 • 0 new comments -
Self-Hosted DeepSeek Model used in OpenAI-Codex - Endpoint Issue
#10309 commented on
Aug 4, 2025 • 0 new comments -
Critical vulnerabilities identified by PrismaCloud
#10513 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Somehow optional column is not working in function calling for o4-mini
#10552 commented on
Aug 4, 2025 • 0 new comments -
Getting 'Exception' object has no attribute 'request' in every step of my agent
#10560 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: /file Unable to use files_settings with Azure config unless env vars are explicitly set
#10561 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Small floats in scientific notation get interpreted as strings
#10564 commented on
Aug 4, 2025 • 0 new comments -
[Question] Guardrails not working as expected
#10565 commented on
Aug 4, 2025 • 0 new comments -
[Feature]: `babbage:2023-07-21-v2` in cost tracking
#10571 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: `litellm>=1.65.5` not propagating `model` during streaming
#10572 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: r1 metadata incorrectly marks "supports_function_calling":true,"supports_tool_choice":true
#10574 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Non-root image error
#13090 commented on
Aug 3, 2025 • 0 new comments -
[Feature]: Semantic MCP tool auto-filtering
#12079 commented on
Aug 3, 2025 • 0 new comments -
[Bug]: APIConnectionError parsing Tool call response from Ollama
#11267 commented on
Aug 3, 2025 • 0 new comments -
[Feature]: DeepInfra new reranking Endpoint
#13097 commented on
Aug 3, 2025 • 0 new comments -
Error when using the deepseek-chat model, seemingly a 401 permission issue
#9514 commented on
Aug 3, 2025 • 0 new comments -
UnicodeDecodeError when importing LiteLlm
#10340 commented on
Aug 5, 2025 • 0 new comments -
[Feature]: The Model Hub should show the TPM/RPM quota, and also models health info
#10461 commented on
Aug 5, 2025 • 0 new comments -
TPM Limit Not Enforced as Expected with LiteLLM API
#10555 commented on
Aug 5, 2025 • 0 new comments -
Fix "claude-3-7-sonnet-latest" entry in "model_prices_and_context_window.json"
#10586 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Credentials not saved correctly ?
#10587 commented on
Aug 5, 2025 • 0 new comments -
Unable to resolve
#10592 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: LiteLLM always returns a 200 status code!
#10593 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Google AI generateContent endpoints require a different request body shape than the native API
#12671 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: groq/whisper-large-v3 returns 400 BadRequestError with OPENAI_TRANSCRIPTION_PARAMS
#11402 commented on
Aug 4, 2025 • 0 new comments -
Anthropic claude returning no content
#9412 commented on
Aug 4, 2025 • 0 new comments -
[Bug][Minor]: Misleading Error Message Where Content-Type not configured on /chat/completions
#13040 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: See multiple pillars for the same day in usage tab daily spend chart
#12735 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Model health is not showing properly in the dashboard
#12993 commented on
Aug 4, 2025 • 0 new comments -
Audio transcription request routed to /chat/completions instead of /audio/transcriptions (Azure Whisper)
#12897 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Bedrock Guardrail modified outputs for blocked responses are sent to the model with exceptions disabled
#13126 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Invalid/empty API response in fake stream using claude code
#12854 commented on
Aug 4, 2025 • 0 new comments -
Docker Database connection Issue
#7450 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: The "Responses API" does not maintain conversation context for 10 seconds
#12364 commented on
Aug 4, 2025 • 0 new comments -
[Feature]: support enabling "/responses to /chat/completions Bridge" on openai (llama.cpp) models
#13130 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Failed to get available MCP tools - MCP Auth
#13199 commented on
Aug 4, 2025 • 0 new comments -
[Feature]: Track LiteLLM Sessions/Threads/Conversations/Chats also into Opik via the integration
#13179 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Internal Server Error with GET request for /routes litellm 1.74.12 and several other releases
#13184 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Bedrock PII Masking issue
#13180 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Model Hub - Invalid models appear when using custom prefixes for wildcard models.
#13190 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Multi-instance rate limiting not working even in version 1.74.9
#13202 commented on
Aug 4, 2025 • 0 new comments -
[Feature]: Add fal.ai as an AI provider (support for text, image, and video generation)
#13229 commented on
Aug 4, 2025 • 0 new comments