Insights: BerriAI/litellm
Overview
41 Releases published by 2 people

- v1.74.2-nightly (published Jul 11, 2025)
- v1.74.3-nightly (published Jul 12, 2025)
- v1.74.3-stable-draft (published Jul 12, 2025)
- v1.74.3.rc.1 (published Jul 13, 2025)
- v1.74.3.rc.2 (published Jul 13, 2025)
- v1.74.3.rc.3 (published Jul 16, 2025)
- v1.74.3.dev2 (published Jul 16, 2025)
- v1.74.4-nightly (published Jul 17, 2025)
- v1.74.3-stable (published Jul 18, 2025)
- v1.74.5.dev1 (published Jul 18, 2025)
- v1.74.6-nightly (published Jul 19, 2025)
- v1.74.7-nightly (published Jul 20, 2025)
- v1.74.3-stable.patch.1 (published Jul 21, 2025)
- v1.74.3-stable.patch.2 (published Jul 21, 2025)
- v1.74.3-stable.patch.3 (published Jul 21, 2025)
- v1.74.7.rc.1 (published Jul 21, 2025)
- v1.72.2-stable.debug (published Jul 22, 2025)
- v1.74.3-stable.patch.4 (published Jul 23, 2025)
- v1.74.8-nightly (published Jul 24, 2025)
- v1.74.7-stable (published Jul 25, 2025)
- v1.74.7-stable.patch.1 (published Jul 25, 2025)
- v1.74.7-stable.patch.2 (published Jul 25, 2025)
- v1.74.9.rc-draft (published Jul 26, 2025)
- litellm_v1.65.4-dev_fix (published Jul 28, 2025)
- v1.74.9.rc.1 (published Jul 29, 2025)
- v1.74.12-nightly (published Jul 31, 2025)
- v1.74.9-stable (published Aug 1, 2025)
- v1.74.14-nightly (published Aug 2, 2025)
- v1.74.14.dev1 (published Aug 2, 2025)
- v1.74.15-nightly (published Aug 2, 2025)
- 1.74.15.rc.1 (published Aug 2, 2025)
- v1.74.9-stable.patch.1 (published Aug 3, 2025)
- v1.74.15.rc.2 (published Aug 5, 2025)
- v1.75.0-nightly (published Aug 5, 2025)
- v1.75.0.dev2 (published Aug 5, 2025)
- v1.75.2-nightly (published Aug 8, 2025)
- v1.75.3-nightly (published Aug 8, 2025)
- v1.74.15-stable (published Aug 8, 2025)
- v1.75.4-nightly (published Aug 9, 2025)
- v1.75.5-stable.rc-draft (published Aug 9, 2025)
- v1.75.5.rc.1 (published Aug 10, 2025)
435 Pull requests merged by 86 people

- [Bug Fix] - Allow using reasoning_effort for gpt-5 model family and reasoning for Responses API (#13475, merged Aug 10, 2025)
- ui - build updated ui + increase max_tokens in health_check = 10 (gpt-5-nano throws error for max_token=1) (#13482, merged Aug 10, 2025)
- Litellm model cost map fixes (#13480, merged Aug 10, 2025)
- Litellm release notes 08 10 2025 (#13479, merged Aug 10, 2025)
- Model Hub - show price for azure models + Responses API - support custom tool (#13418, merged Aug 9, 2025)
- Add digitalocean provider (#12169, merged Aug 9, 2025)
- [Bug]: Fix JWTs access not working with model groups (#13474, merged Aug 9, 2025)
- [MCP Gateway] Add local storage auth for tokens (#13473, merged Aug 9, 2025)
- [Proxy changes] Litellm add model price reload schedule for multi-pod (#13470, merged Aug 9, 2025)
- Router - reduce p99 latency w/ redis enabled by 50% + OTEL - track pre_call hook latency (#13362, merged Aug 9, 2025)
- [Bug Fix] Allow using Swagger for /chat/completions (#13469, merged Aug 9, 2025)
- [Proxy + UI] Litellm add reload model api and button (#13464, merged Aug 9, 2025)
- [LLM Translation] Litellm azure o series drop params (#13353, merged Aug 9, 2025)
- [Bug Fix] /health checks should not use hardcoded trace_id (#13468, merged Aug 9, 2025)
- [Bug Fix] - Get Routes (#13466, merged Aug 9, 2025)
- [Bug Fix] Responses API - Responses API failed if input contains ResponseReasoningItem (#13465, merged Aug 9, 2025)
- docs - native LiteLLM prompt mgmt (#13463, merged Aug 9, 2025)
- feat(models): add OpenRouter and Cerebras GPT-OSS (20b, 120b) to cost map (#13442, merged Aug 9, 2025)
- [Documentation] added mcp guardrails doc in mcp.md (#13452, merged Aug 9, 2025)
- [Feat] Working e2e flow for Responses API session management with media (#13456, merged Aug 9, 2025)
- feat(reasoning): support 'minimal' effort type for OpenAI (#13447, merged Aug 9, 2025)
- Disable logging settings for non-enterprise users - Create Key (#13431, merged Aug 9, 2025)
- LiteLLM UI - Test Key Page - allow uploading images for /chat/completions and /responses (#13445, merged Aug 8, 2025)
- [Docs] Add docs on how router / cooldowns work (#13444, merged Aug 8, 2025)
- [Bug Fix] Improve key creation permission error message (#13443, merged Aug 8, 2025)
- Add support for reasoning_effort minimal (#13401, merged Aug 8, 2025)
- LLM Translation - fix prices for oai gpt 5 (#13441, merged Aug 8, 2025)
- Display Error from Backend on the UI - Notification (#13427, merged Aug 8, 2025)
- [User Delete from team] fix user membership issue (#13433, merged Aug 8, 2025)
- [Feat] Add reasoning_effort to OpenAIGPT5Config (#13434, merged Aug 8, 2025)
- fix(proxy): add missing braintrust api base to env vars (#13412, merged Aug 8, 2025)
- fix(access group): allow access group on mcp tool retrieval (#13425, merged Aug 8, 2025)
- Correct GPT-5 token limits and price (#13423, merged Aug 8, 2025)
- feat(usage): aggregated user daily activity endpoint and UI integration (#13395, merged Aug 8, 2025)
- Add Custom Tooltips to Model Mapping Table (#13294, merged Aug 8, 2025)
- [Bug Fix] Responses api session management for streaming responses (#13396, merged Aug 8, 2025)
- fix(responses api): fix streaming ID consistency and tool format handling (#12640, merged Aug 8, 2025)
- Add presidio MCP pre call docs (#13392, merged Aug 7, 2025)
- [Bug Fix] Mistral Tool Calling - Grammar error: at 3(11): failed to compile JSON schema (#13389, merged Aug 7, 2025)
- Fix non-root docker image for migration (#13379, merged Aug 7, 2025)
- Revert "Fix SSO Logout | Create Unified Login Page with SSO and Usern… (#13387, merged Aug 7, 2025)
- [Feat] add azure/gpt-5 model family (#13385, merged Aug 7, 2025)
- feat: Add GPT-5 model family with official OpenAI specifications (#13… (#13386, merged Aug 7, 2025)
- feat: Add GPT-5 model family with official OpenAI specifications (#13378, merged Aug 7, 2025)
- feat - add claude-opus-4-1 on cost map (#13384, merged Aug 7, 2025)
- Add GPT 5 models (#13377, merged Aug 7, 2025)
- [Feat] Responses API Session Handling - Multi media support (#13347, merged Aug 7, 2025)
- Update OCI docs (#13336, merged Aug 7, 2025)
- Add labels to migrations job template (#13343, merged Aug 7, 2025)
- fix: 12152 - Redacted sensitive information logged in guardrails (#13356, merged Aug 7, 2025)
- feat(integrations): allow setting of braintrust callback base url (#13368, merged Aug 7, 2025)
- Provider logos on usage page (#13372, merged Aug 7, 2025)
- Feat - New models add groq/openai/gpt-oss (#13363, merged Aug 7, 2025)
- [UI] added token breakdown in ui (#13357, merged Aug 7, 2025)
- [MCP Gateway] Added route check for internal users (#13350, merged Aug 6, 2025)
- [Fix migration for MCP server name and alias] added new migration files (#13345, merged Aug 6, 2025)
- Fix create, search vector store error (#13285, merged Aug 6, 2025)
- [MCP Gateway] fix auth on ui for bearer servers (#13312, merged Aug 6, 2025)
- [Feat] - New model - Add Bedrock gpt oss models - "openai.gpt-oss-20b-1:0", "openai.gpt-oss-120b-1:0" (#13342, merged Aug 6, 2025)
- feat: Add logo customization for LiteLLM admin UI (#12958, merged Aug 6, 2025)
- feat(JinaAI): support multimodal embedding models (#13181, merged Aug 6, 2025)
- fix(streaming_handler.py): include cost in streaming usage object (#13319, merged Aug 6, 2025)
- [LLM Translation] Fix model group on clientside auth with API calls (#13314, merged Aug 6, 2025)
- [Feat] - When using custom tags on prometheus allow using wildcard patterns (#13316, merged Aug 6, 2025)
- [Bug]: Fix Mimetype Resolution Error in Bedrock Document Understanding (#13309, merged Aug 6, 2025)
- Litellm fix OpenAI spec tools (#13315, merged Aug 6, 2025)
- [MCP Gateway] refactor mcp guardrails (#13238, merged Aug 5, 2025)
- Fix double slash issue in SSO login URL construction (#13289, merged Aug 5, 2025)
- [Proxy server] Add apscheduler log suppress (#13299, merged Aug 5, 2025)
- [Redis IAM] Change documentation (#13306, merged Aug 5, 2025)
- [LLM Translation] Fix model group on clientside auth with API calls (#13293, merged Aug 5, 2025)
- Create New Key - Make Team Field Required for Service Account (#13302, merged Aug 5, 2025)
- [Feat] Add fireworks gpt-oss models (#13303, merged Aug 5, 2025)
- New models - add fireworks_ai/glm-4p5 model family (#13297, merged Aug 5, 2025)
- [New model] add bedrock/us.anthropic.claude-opus-4-1-20250805-v1:0 (#13295, merged Aug 5, 2025)
- [LLM Translation] claude opus 4.1 support for anthropic provider (#13296, merged Aug 5, 2025)
- [LLM Translation + Coding tools] Added litellm claude code count tokens support (#13261, merged Aug 5, 2025)
- [Redis] - Add ability to add client through GCP IAM Auth (#13275, merged Aug 5, 2025)
- Revert "Fix: Langfuse reporting "client closed" error due to httpx client TTL" (#13291, merged Aug 5, 2025)
- fix(main.py): handle tool being a pydantic object + Fix unpack defs deepcopy issue for bedrock (#13274, merged Aug 5, 2025)
- Ensure disable_llm_api_endpoints works + Add wildcard model support for 'team-byok' model (#13278, merged Aug 5, 2025)
- fix OCI linting errors (#13279, merged Aug 5, 2025)
- [UI] - Add ability to set model alias per key/team (#13276, merged Aug 5, 2025)
- [LLM Translation] Support /v1/models/{model_id} retrieval (#13268, merged Aug 5, 2025)
- [LLM Translation] input cost per token higher than $1 test (#13270, merged Aug 5, 2025)
- [LLM Translation] Correct pricing for web search on 4o-mini (#13269, merged Aug 5, 2025)
- Fix: Langfuse reporting "client closed" error due to httpx client TTL (#13045, merged Aug 4, 2025)
- Add GCS bucket caching support (#13122, merged Aug 4, 2025)
- Support OCI provider (#13206, merged Aug 4, 2025)
- [Bug Fix] Fix Server root path regression on UI when using "Login" (#13267, merged Aug 4, 2025)
- Minor formatting changes to token-cost.json (#13244, merged Aug 4, 2025)
- Bug Fix - Responses API raises error with Gemini Tool Calls in input (#13260, merged Aug 4, 2025)
- [Bug Fix] OpenAI / Azure Responses API - Add service_tier, safety_identifier supported params (#13258, merged Aug 4, 2025)
- [UI] Add team deletion check for teams with keys (#12953, merged Aug 4, 2025)
- [LLM Translation] Fix Model Usage not having text tokens (#13234, merged Aug 4, 2025)
- [Proxy] Add OpenShift Support to non root docker image (#13239, merged Aug 3, 2025)
- Prompt Management - add prompts on UI (#13240, merged Aug 3, 2025)
- Prompt Management - Add table + prompt info page to UI (#13232, merged Aug 3, 2025)
- UI - Add giving keys prompt access (#13233, merged Aug 3, 2025)
- [docs release notes] (#13237, merged Aug 2, 2025)
- [UI QA Fixes] Stable release (#13231, merged Aug 2, 2025)
- Prompt Management (2/2) - New /prompt/list endpoint + key-based access to prompt templates (#13218, merged Aug 2, 2025)
- Revert "fix: role chaining and session name with webauthentication for aws bedrock" (#13230, merged Aug 2, 2025)
- [QA Fixes for MCP] - Ensure MCPs load + don't run a health check every time we load MCPs on UI (#13228, merged Aug 2, 2025)
- litellm/proxy: preserve model order of /v1/models and /model_group/info (#13178, merged Aug 2, 2025)
- Fix missing extra_headers support for vLLM/openai_like embeddings (#13198, merged Aug 2, 2025)
- fix: role chaining and session name with webauthentication for aws bedrock (#13205, merged Aug 2, 2025)
- Add Perplexity citation annotations support (#13225, merged Aug 2, 2025)
- Add advanced date picker to all the tabs on the usage page (#13221, merged Aug 2, 2025)
- [MCP Gateway] Litellm mcp pre and during guardrails (#13188, merged Aug 2, 2025)
- [LLM] - suppress httpx logging (#13217, merged Aug 2, 2025)
- [LLM] fix model reload on model update (#13216, merged Aug 2, 2025)
- [Proxy] fix key mgmt (#13148, merged Aug 2, 2025)
- [Separate Health App] Update Helm Deployment.yaml (#13162, merged Aug 1, 2025)
- [QA] Viewing Agent Activity Headers on UI Usage Page (#13212, merged Aug 1, 2025)
- [LLM translation] Fix bedrock computer use #13143 (#13150, merged Aug 1, 2025)
- Index.md - cleanup docs (#13215, merged Aug 1, 2025)
- Fix API Key Being Logged (#12978, merged Aug 1, 2025)
- Allow to redefine LLM base api URL in the passthrough endpoints (#13134, merged Aug 1, 2025)
- add openssl in apk install in runtime stage in dockerfile.non_root (#13168, merged Aug 1, 2025)
- feat(helm): allow helm hooks for migrations job (#13174, merged Aug 1, 2025)
- Fix/panw prisma airs post call hook (#13185, merged Aug 1, 2025)
- [UI QA] QA - Agent Activity Tab (#13203, merged Aug 1, 2025)
- Anthropic - mid stream fallbacks p2 (add token usage across both calls) (#13170, merged Aug 1, 2025)
- Anthropic - working mid-stream fallbacks (#13149, merged Aug 1, 2025)
- Fix - using managed files w/ OTEL + UI - add model group alias on UI (#13171, merged Aug 1, 2025)
- [Docs] Add details on when to use specific health endpoints (#13193, merged Aug 1, 2025)
- Fix langfuse test patch path causing CI failures (#13192, merged Aug 1, 2025)
- Litellm fix fallbacks UI (#13191, merged Aug 1, 2025)
- [Feat] Allow redacting message / response content for specific logging integrations - DD LLM Observability (#13158, merged Jul 31, 2025)
- [Bug Fix] Infra - ensure that stale Prisma clients disconnect DB connection (#13140, merged Jul 31, 2025)
- [Bug Fix] Gemini-CLI Integration - ensure tool calling works as expected on generateContent (#13189, merged Jul 31, 2025)
- fix: support negative indexes in cache_control_injection_points for Anthropic Claude (#10226) (#13187, merged Jul 31, 2025)
- [Proxy Startup] fix db config through envs (#13111, merged Jul 31, 2025)
- [Feat] Background Health Checks - Allow disabling background health checks for a specific (#13186, merged Jul 31, 2025)
- fix: remove obsolete attribute version in docker compose (#13172, merged Jul 31, 2025)
- add framework name to UserAgent header in AWS Bedrock API call (#13159, merged Jul 31, 2025)
- build(config.yml): migrate build_and_test to ci/cd pg db (#13166, merged Jul 31, 2025)
- [MCP Gateway] fix migrations (#13157, merged Jul 30, 2025)
- [MCP Gateway] Litellm mcp client list fail (#13114, merged Jul 30, 2025)
- Litellm explore postgres db ci cd (#13156, merged Jul 30, 2025)
- [Feat] v2 updates - tracking DAU, WAU, MAU for coding tool usage + show Daily Usage per User (#13147, merged Jul 30, 2025)
- [MCP Guardrails] move pre and during hooks to ProxyLoggin (#13109, merged Jul 30, 2025)
- [LLM translation] Fix bedrock computer use (#13143, merged Jul 30, 2025)
- [Feat] UI + Backend add a tab for use agent activity (#13146, merged Jul 30, 2025)
- New Advanced Date Range Picker Component (#13141, merged Jul 30, 2025)
- [Proxy UI] fix object permission for orgs (#13142, merged Jul 30, 2025)
- Resolve Anthropic Overloaded error during stream (#9809, merged Jul 30, 2025)
- Added Voyage, Jinai, Deepinfra and VolcEngine providers on the UI (#13131, merged Jul 30, 2025)
- [MCP Protocol header] fix issue with clients protocol header (#13112, merged Jul 30, 2025)
- [MCP Gateway] add health check endpoints for MCP (#13106, merged Jul 30, 2025)
- fix tool aws bedrock call index when the function only has optional args (#13115, merged Jul 30, 2025)
- move to use_prisma_migrate by default + resolve team-only models on auth checks + UI - add sagemaker on UI (#13117, merged Jul 30, 2025)
- fix(model_checks.py): handle custom values in wildcard model name (e.g. genai/test/*) (#13116, merged Jul 30, 2025)
- Revert "[LLM translation] Add support for bedrock computer use" (#13118, merged Jul 30, 2025)
- After selecting date range show loader on usage cost charts (#13113, merged Jul 30, 2025)
- [LLM translation] Add support for bedrock computer use (#12948, merged Jul 29, 2025)
- [Feat] MLFlow Logging - Allow adding tags for ML Flow logging requests (#13108, merged Jul 29, 2025)
- feat: Add dot notation support for all JWT fields (#13013, merged Jul 29, 2025)
- Custom Auth - bubble up custom exceptions (#13093, merged Jul 29, 2025)
- Fix token counter to ignore unsupported keys like prefix (#11791) (#11954, merged Jul 29, 2025)
- [MCP Gateway] Add protocol headers (#13062, merged Jul 29, 2025)
- Fix/gemini api key environment variable support (#12507, merged Jul 29, 2025)
- Fix fallback delete (#12606, merged Jul 29, 2025)
- Revert "[Bug]: Set user from token user_id for OpenMeter integration" (#13107, merged Jul 29, 2025)
- [Bug]: Set user from token user_id for OpenMeter integration (#13029, merged Jul 29, 2025)
- fix: helm migration job not running schema update (#12809, merged Jul 29, 2025)
- BUGFIX: Jitter should be added not multiplied (#12877) (#12901, merged Jul 29, 2025)
- Optionally enable GenAI telemetry following semantic conventions (#12626, merged Jul 29, 2025)
- fix: always use choice index=0 for Anthropic streaming responses (#12666, merged Jul 29, 2025)
- [Infra] Loosen MCP python version restrictions (#13102, merged Jul 29, 2025)
- [LLM translation] add openrouter grok4 (#13018, merged Jul 29, 2025)
- [Feat] Allow using query_params for setting API Key for generateContent routes (#13100, merged Jul 29, 2025)
- set default value for mcp namespace tool name in spend table to prevent duplicate entry in table (#12894, merged Jul 29, 2025)
- [Bug Fix] Gemini-CLI - The Gemini Custom API request has an incorrect authorization format (#13098, merged Jul 29, 2025)
- fix: improve MCP server URL validation to support internal/Kubernetes URLs (#13099, merged Jul 29, 2025)
- [MCP gateway] add pre and during call hooks init (#13067, merged Jul 29, 2025)
- Fix list team v2 security check (#13094, merged Jul 29, 2025)
- Remove extraneous s in docs (#13079, merged Jul 29, 2025)
- Azure api_version="preview" support + Bedrock cost tracking via Anthropic/v1/messages (#13072, merged Jul 29, 2025)
- Fix anthropic passthrough logging handler model fallback for streaming requests (#13022, merged Jul 29, 2025)
- docs: add Qwen Code CLI tutorial (#12915, merged Jul 29, 2025)
- Added handling for pwd protected cert files in AOAI CertificateCreden… (#12995, merged Jul 29, 2025)
- Default Usage Chart Date Range: Last 7 Days (#12917, merged Jul 29, 2025)
- [Feat] Add Google AI Studio Imagen4 model family (#13065, merged Jul 29, 2025)
- [Bug Fix] The model gemini-2.5-flash with the merge_reasoning_content_in_choices parameter does not work (#13066, merged Jul 29, 2025)
- [MCP gateway] add url namespacing docs (#13063, merged Jul 29, 2025)
- [MCP Gateway] MCP tools fix scrolling issue (#13015, merged Jul 29, 2025)
- feat(langfuse-otel): Add comprehensive metadata support to Langfuse OpenTelemetry integration (#12956, merged Jul 28, 2025)
- chore: Improve docs for cost tracking (#12976, merged Jul 28, 2025)
- fix: correct CompletionRequest messages type to match OpenAI API spec (#12980, merged Jul 28, 2025)
- Properly parse json options for key generation in the UI (#12989, merged Jul 28, 2025)
- Remove duplicate test case verifying field filtering logic (#13023, merged Jul 28, 2025)
- [Bug Fix] Pass through logging handler VertexAI - ensure multimodal embedding responses are logged (#13050, merged Jul 28, 2025)
- docs - openwebui show how to include reasoning content for gemini models (#13060, merged Jul 28, 2025)
- add X-Initiator header for GitHub Copilot to reduce premium requests (#13016, merged Jul 28, 2025)
- Bulk User Edit - additional improvements - edit all users + set 'no-default-models' on all users (#12925, merged Jul 27, 2025)
- Litellm release notes 07 27 2025 p1 (#13027, merged Jul 27, 2025)
- [MCP Gateway] Litellm mcp multi header propagation (#13003, merged Jul 26, 2025)
- UI SSO - fix reset env var when ui_access_mode is updated (#13011, merged Jul 26, 2025)
- [FEAT] Model-Guardrails: Add on UI (#13006, merged Jul 26, 2025)
- [Vector Store] make vector store permission management OSS (#12990, merged Jul 26, 2025)
- Fixup ollama model listing (again) (#13008, merged Jul 26, 2025)
- [MCP Gateway] add Litellm mcp alias for prefixing (#12994, merged Jul 26, 2025)
- [BUG Fix] Cannot pickle coroutine object (#13005, merged Jul 26, 2025)
- Fix issue writing db (#13001, merged Jul 26, 2025)
- build: update pip package (#12998, merged Jul 25, 2025)
- [MCP Gateway] Move cost tracking and permission management to OSS (#12988, merged Jul 25, 2025)
- [LLM Translation] fix query params for realtime api intent (#12838, merged Jul 25, 2025)
- clean and verify key before inserting (#12840, merged Jul 25, 2025)
- [LLM Translation] Add bytedance/ui-tars-1.5-7b on openrouter (#12882, merged Jul 25, 2025)
- Guardrails - support model-level guardrails (#12968, merged Jul 25, 2025)
- Show global retry policy on UI (#12969, merged Jul 25, 2025)
- GuardrailsAI: use validatedOutput to allow usage of "fix" guards (#12891, merged Jul 25, 2025)
- fix(auth_utils): make header comparison case-insensitive (#12950, merged Jul 25, 2025)
- feat: add openrouter/qwen/qwen3-coder model configuration (#12910, merged Jul 25, 2025)
- Fix: Shorten Gemini tool_call_id for Open AI compatibility (#12941, merged Jul 25, 2025)
- docs: added documentation about metadata exposed over the /v1/models endpoint (#12942, merged Jul 25, 2025)
- [Feat] Add inpainting support and corresponding tests for Amazon Nova… (#12949, merged Jul 25, 2025)
- [Feat] Edit Auto Router Settings on UI (#12966, merged Jul 25, 2025)
- [Feat] UI - Allow Adding LiteLLM Auto Router on UI (#12960, merged Jul 25, 2025)
- [LLM Translation] added new realtime model for openai (#12946, merged Jul 25, 2025)
- [LLM Translation] - Bug fix Anthropic Tool calling (#12959, merged Jul 25, 2025)
- [Feat] Backend Router - Add Auto-Router powered by semantic-router (#12955, merged Jul 25, 2025)
- fix(internal_user_endpoints.py): delete member from team table on /user/delete (#12926, merged Jul 24, 2025)
- Proxy - specify key_type - allows specifying if key can call LLM API routes vs. Management routes only (#12909, merged Jul 24, 2025)
- Prometheus - tags, fix '[tag]="false"' when tag is set (#12916, merged Jul 24, 2025)
- Update control_plane_and_data_plane.md (#12939, merged Jul 24, 2025)
- [UI] Allow setting up CloudZero Usage through LiteLLM UI (#12923, merged Jul 24, 2025)
- Add GA version of gemini 2.5 flash lite for both vertex and gemini (#12920, merged Jul 24, 2025)
- [Feat] LiteLLM CloudZero Integration updates - using LiteLLM_SpendLogs Table (#12922, merged Jul 24, 2025)
- [Feat] LiteLLM x Cloudzero integration - Allow exporting spend to cloudzero (#12908, merged Jul 23, 2025)
- feat: Add Pillar Security guardrail integration (#12791, merged Jul 23, 2025)
- feat: extended /v1/models endpoint, now it returns with fallbacks on demand (#12811, merged Jul 23, 2025)
- rm retired anthropic models from model_prices_and_context_window.json (#12864, merged Jul 23, 2025)
- [Add health check] add architecture diagram (#12879, merged Jul 23, 2025)
- [Docs] Litellm mcp access group doc (#12883, merged Jul 23, 2025)
- Request Headers - support x-litellm-num-retries + Usage - support usage by model group (#12890, merged Jul 23, 2025)
- [Feat] - Track cost + add tags for health checks done by LiteLLM Proxy (#12880, merged Jul 23, 2025)
- [Feat] Add cost tracking for new vertex_ai/llama-3 API models (#12878, merged Jul 22, 2025)
- [LLM Translation] Litellm gemini 2.0 live support (#12839, merged Jul 22, 2025)
- [Feat] Add Recraft API - Image Edits Support (#12874, merged Jul 22, 2025)
- [Separate Health App] Pass through cmd args via supervisord (#12871, merged Jul 22, 2025)
- Bug fix - Azure KeyVault not in image, add azure-keyvault==4.2.0 to Docker img (#12873, merged Jul 22, 2025)
- build(deps): bump form-data from 4.0.3 to 4.0.4 in /docs/my-website (#12867, merged Jul 22, 2025)
- Replace non-root Dockerfile base with Alpine multi-stage build; (#12707, merged Jul 22, 2025)
- Improvements on the Regenerate Key Flow (#12788, merged Jul 22, 2025)
- Openrouter - filter out cache_control flag for non-anthropic models (allows usage with claude code) (#12850, merged Jul 22, 2025)
- Fix team_member_budget update logic (#12843, merged Jul 22, 2025)
- build(deps): bump form-data from 4.0.0 to 4.0.4 in /ui/litellm-dashboard (#12851, merged Jul 22, 2025)
- Passthrough Auth - make Auth checks OSS + Anthropic - only show 'reasoning_effort' for supported models (#12847, merged Jul 22, 2025)
- Litellm batch cost tracking debug (#12782, merged Jul 22, 2025)
- feat: add Hyperbolic provider support (#12826, merged Jul 22, 2025)
- fix(watsonx): IBM Watsonx - use correct parameter name for tool choice (#9980, merged Jul 22, 2025)
- Docs - litellm benchmarks (#12842, merged Jul 22, 2025)
- [Azure OpenAI Feature] - Support DefaultAzureCredential without hard-coded environment variables (#12841, merged Jul 22, 2025)
- [LLM Translation] add qwen-vl-plus (#12829, merged Jul 21, 2025)
- [Feat] Add fireworks - fireworks/models/kimi-k2-instruct (#12837, merged Jul 21, 2025)
- [Bug Fix] - gemini leaking FD for sync calls with litellm.completion (#12824, merged Jul 21, 2025)
- [Feat] Add Recraft Image Generation API Support - New LLM Provider (#12832, merged Jul 21, 2025)
- Add Google Cloud Model Armor guardrail documentation (#12814, merged Jul 21, 2025)
- fix: remove deprecated groq/qwen-qwq-32b and add qwen/qwen3-32b (#12831, merged Jul 21, 2025)
- feat: add Morph provider support (#12821, merged Jul 21, 2025)
- [LLM Translation - GH Copilot] added dynamic endpoint support (#12827, merged Jul 21, 2025)
- [Docs] Show correct list of vertex ai mistral models (#12828, merged Jul 21, 2025)
- [UI Bug Fix] Show correct guardrails when editing a team (#12823, merged Jul 21, 2025)
- feat: Add Lambda AI provider support (#12817, merged Jul 21, 2025)
- Adding HolmesGPT to projects using LiteLLM (#12798, merged Jul 21, 2025)
- docs(moonshot): correct base url and document CN-specific endpoint (#12804, merged Jul 21, 2025)
- Fix SSO Logout | Create Unified Login Page with SSO and Username/Password Options (#12703, merged Jul 21, 2025)
- docs - vector stores (#12781, merged Jul 20, 2025)
- Litellm fix proxy unit testing p2 (#12779, merged Jul 19, 2025)
- UI - support adding links to model hub (#12776, merged Jul 19, 2025)
- Litellm fix proxy unit testing (#12778, merged Jul 19, 2025)
- [LLM Translation] Add Gov Cloud bedrock model pricing and context windows (#12773, merged Jul 19, 2025)
- [LLM Translation] added switchpoint router (#12777, merged Jul 19, 2025)
- fix(proxy): Fix Model Armor project_id initialization order (#12766, merged Jul 19, 2025)
- Allow forwarding clientside headers by model group (#12753, merged Jul 19, 2025)
- [QA] Disable Logging settings for Keys (#12774, merged Jul 19, 2025)
- Fix moonshot/kimi-thinking-preview tool choice support (#12772, merged Jul 19, 2025)
- Feature/track bedrock gov cloud models (#12771, merged Jul 19, 2025)
- [Key Access] Litellm disabled callbacks for UI (#12769, merged Jul 19, 2025)
- UI - Support 'batch' model health checks + make 'team-only' model concept clearer (#12770, merged Jul 19, 2025)
- [JSON Logs] fix circular ref error by adding safe dumps (#12764, merged Jul 19, 2025)
- fix: correct Groq model naming convention for moonshotai/kimi-k2-instruct (#12768, merged Jul 19, 2025)
- feat(proxy_server.py): add model hub to the swagger (#12767, merged Jul 19, 2025)
- [Docs] 1.74.6.rc note (#12765, merged Jul 19, 2025)
- Litellm gemini grounding metadata stream (#12673, merged Jul 19, 2025)
- Bulk Edit Users on UI (#12763, merged Jul 19, 2025)
- [Feat] Backend - Add support for disabling callbacks in request body (#12762, merged Jul 19, 2025)
- Fix Y-axis labels overlap on Spend per Tag (#12754, merged Jul 19, 2025)
- Copy MCP Server name (#12760, merged Jul 19, 2025)
- feat: add v0 provider support (#12751, merged Jul 19, 2025)
- [Feat] UI Vector Stores - Allow adding Vertex RAG Engine, OpenAI, Azure (#12752, merged Jul 19, 2025)
- [Feat] LLM API Endpoint - Expose OpenAI Compatible /vector_stores/{vector_store_id}/search endpoint (#12749, merged Jul 19, 2025)
- [LLM Translation] Added model name formats (#12745, merged Jul 19, 2025)
- [LLM Translation - Redis] fix: redis caching for embedding response models (#12750, merged Jul 18, 2025)
- [LLM Translation] Change System prompts to assistant prompts as a workaround for GH Copilot (#12742, merged Jul 18, 2025)
- build(deps): bump on-headers and compression in /docs/my-website (#12721, merged Jul 18, 2025)
- fix(lowest_latency.py): Handle ZeroDivisionError with zero completion tokens (#12734, merged Jul 18, 2025)
- [Feat] UI - Allow clicking into Vector Stores (#12741, merged Jul 18, 2025)
- Add project_id to cached credentials for VertexAI (#12661, merged Jul 18, 2025)
- feat: integrate Google Cloud Model Armor guardrails (#12492, merged Jul 18, 2025)
- [jais-30b-chat] added model to prices and context window (#12739, merged Jul 18, 2025)
- [Prometheus] Move Prometheus to enterprise folder (#12659, merged Jul 18, 2025)
- Guardrails AI - support llmOutput based guardrails as pre-call hooks (#12674, merged Jul 18, 2025)
- Health check app on separate port (#12718, merged Jul 18, 2025)
- Anthropic - add tool cache control support (#12668, merged Jul 18, 2025)
- /streamGenerateContent - non-gemini model support (#12647, merged Jul 18, 2025)
- Add Hosted VLLM rerank provider integration (#12738, merged Jul 18, 2025)
- Vllm rerank (#12737, merged Jul 18, 2025)
- chore(proxy): loosen rich version from ==13.7.1 to >=13.7.1 (#12704, merged Jul 18, 2025)
- [Bug fix] s3 v2 log uploader crashes when using with guardrails (#12733, merged Jul 18, 2025)
- [Bug Fix] QA - Use PG Vector Vector Store with LiteLLM (#12716, merged Jul 18, 2025)
- fixed comment in docs for anthropic provider (#12725, merged Jul 18, 2025)
- [Feat] Add azure_ai/grok-3 model family + Cost tracking (#12732, merged Jul 18, 2025)
- Fix AsyncMock error in team endpoints test (#12730, merged Jul 18, 2025)
- Regenerate Key State Management and Authentication Issues (#12729, merged Jul 18, 2025)
- Teams - allow setting custom key duration + show how many user + service account keys have been created (#12722, merged Jul 18, 2025)
- fix(team_endpoints.py): ensure user id correctly added when new team … (#12719, merged Jul 18, 2025)
- feat(internal_user_endpoints.py): new /user/bulk_update endpoint (#12720, merged Jul 18, 2025)
- Litellm encrypt admin UI values - ensures SSO keys work on UI (#12675, merged Jul 18, 2025)
- MCP Gateway Change name for all servers to all available servers for Internal Users (#12702, merged Jul 18, 2025)
- [MCP Gateway] add fix to update object permission on update/delete key/team (#12701, merged Jul 18, 2025)
- [MCP Gateway] added docs for mcp namespacing by URL (#12700, merged Jul 18, 2025)
- [Refactor] Vector Stores - Use class VectorStorePreCallHook for all Vector Store Integrations (#12715, merged Jul 17, 2025)
- [Bug Fix] Always include tool calls in output of trim_messages (#11517, merged Jul 17, 2025)
- [Refactor] Use Existing config structure for bedrock vector stores (#12672, merged Jul 17, 2025)
- [Feat] Proxy - New LLM API Routes /v1/vector_stores and /v1/vector_stores/vs_abc123/search (#12699, merged Jul 17, 2025)
- [Feat] Bedrock Guardrails - Allow disabling exception on 'BLOCKED' action (#12693, merged Jul 17, 2025)
- Fix incorrect environment variable names in Claude code docs (#12686, merged Jul 17, 2025)
- Add Claude Code LiteLLM tutorial (#12650, merged Jul 17, 2025)
- [Liveness/Liveliness probe] add separate health app for liveness probes in files (#12669, merged Jul 17, 2025)
- [Feat] New Vector Store - PG Vector (#12667, merged Jul 17, 2025)
- feat: add input_fidelity parameter for OpenAI image generation (#12662, merged Jul 16, 2025)
- [Bug Fix] SCIM - add GET /ServiceProviderConfig (#12664, merged Jul 16, 2025)
- [Bug Fix] StandardLoggingPayload on cache_hits should track custom llm provider + DD LLM Obs span type (#12652, merged Jul 16, 2025)
- [Feat] UI - Add end_user filter on UI (#12663, merged Jul 16, 2025)
- [Feat] Allow reading custom logger python scripts from s3 (#12623, merged Jul 16, 2025)
- [MCP Gateway] Allow MCP sse and http to have namespaced url for better segregation LIT-304 (#12658, merged Jul 16, 2025)
- [MCP Gateway] List tools from access list for keys (#12657, merged Jul 16, 2025)
- [MCP Gateway] Allow MCP access groups to be added via the config LIT-312 (#12654, merged Jul 16, 2025)
- Fix unused imports in completion_extras transformation (#12655, merged Jul 16, 2025)
- Add GitHub Copilot LiteLLM tutorial (#12649, merged Jul 16, 2025)
- [Bug Fix] grok-4 does not support the stop param (#12646, merged Jul 16, 2025)
- Add groq/moonshotai-kimi-k2-instruct model configuration (#12648, merged Jul 16, 2025)
- [New Model] add together_ai/moonshotai/Kimi-K2-Instruct (#12645, merged Jul 16, 2025)
- Fix bedrock nova micro and lite info (#12619, merged Jul 16, 2025)
- fix: Handle circular references in spend tracking metadata JSON serialization (#12643, merged Jul 16, 2025)
- OpenAI deepresearch models via .completion support (#12627, merged Jul 16, 2025)
- fix(proxy_server.py): fixes for handling team only models on UI (#12632, merged Jul 16, 2025)
- fix(router.py): use more descriptive error message + UI - enable team admins to update member role (#12629, merged Jul 16, 2025)
- [Bug Fix] [Bug]: Knowledge Base Call returning error (#12628, merged Jul 16, 2025)
- (#11794) use upsert for managed object table rather than create to avoid UniqueViolationError (#11795, merged Jul 16, 2025)
- fix: role chaining with webauthentication for aws bedrock (#12607, merged Jul 16, 2025)
- Add input_cost_per_pixel to values in ModelGroupInfo model (#12604, merged Jul 16, 2025)
- Add token pricing for Together.ai Llama-4 and DeepSeek models (#12622, merged Jul 16, 2025)
- Add "keys import" command to CLI (#12620, merged Jul 16, 2025)
- rm claude instant 1 and 1.2 from model_prices_and_context_window.json (#12631, merged Jul 16, 2025)
- [Feat] MCP Gateway - allow using MCPs with all LLM APIs when using /responses with LiteLLM (#12546, merged Jul 15, 2025)
- [Docs] troubleshooting SSO configs (#12621, merged Jul 15, 2025)
- [Bug Fix] Add swagger docs for LiteLLM /chat/completions, /embeddings, /responses (#12618, merged Jul 15, 2025)
- refactor(mcp): Make MCP_TOOL_PREFIX_SEPARATOR configurable from env (#12603, merged Jul 15, 2025)
- add azure blob cache support (#12587, merged Jul 15, 2025)
- Add Copy-on-Click for IDs (#12615, merged Jul 15, 2025)
- [Bug Fix] Include /mcp in list of available routes on proxy (#12612, merged Jul 15, 2025)
- fix(anthropic): fix streaming + response_format + tools bug (#12463, merged Jul 15, 2025)
- feat(gemini): Add custom TTL support for context caching (#9810) (#12541, merged Jul 15, 2025)
- feat: Add envVars and extraEnvVars support to Helm migrations job (#12591, merged Jul 15, 2025)
- refactor(prisma_migration.py): refactor to support use_prisma_migrate - for helm hook (#12600, merged Jul 15, 2025)
- Claude 4 Bedrock /invoke route support + Bedrock application inference profile tool choice support (#12599, merged Jul 15, 2025)
- Control Plane + Data Plane support (#12601, merged Jul 15, 2025)
- [Bug fix] [Bug]: Verbose log is enabled by default
#12596 merged
Jul 15, 2025 -
Wildcard model filter
#12597 merged
Jul 15, 2025 -
[Feat] Vector Stores - Add Vertex RAG Engine API as a provider
#12595 merged
Jul 15, 2025 -
Updated release notes
#12594 merged
Jul 15, 2025 -
fix: add implicit caching cost calculation for Gemini 2.x models
#12585 merged
Jul 14, 2025 -
[Feat] Add ai21/jamba-1.7 model family pricing
#12593 merged
Jul 14, 2025 -
[Feat] New LLM API Integration - Add Moonshot API (Kimi) (#12551)
#12592 merged
Jul 14, 2025 -
[Feat] New LLM API Integration - Add Moonshot API (Kimi)
#12551 merged
Jul 14, 2025 -
[Feat] Add Moonshot AI / Kimi
#12589 merged
Jul 14, 2025 -
Add Bytez to the list of providers in the docs
#12588 merged
Jul 14, 2025 -
Add pricing information for Moonshot AI's kimi-k2 model
#12566 merged
Jul 14, 2025 -
Litellm release notes 07 12 2025
#12563 merged
Jul 13, 2025 -
[MCP Gateway] Ensure we use the same param for specifying groups
#12561 merged
Jul 12, 2025 -
UI - v1.74.3-stable QA fixes
#12559 merged
Jul 12, 2025 -
fix: handle missing 'env' attribute in MCP server Prisma model
#12560 merged
Jul 12, 2025 -
[MCP Gateway] add access group documentation
#12557 merged
Jul 12, 2025 -
Fix: Output GitHub Copilot verification URI immediately when running in Docker.
#12558 merged
Jul 12, 2025 -
UI - Model Hub - refactor 'Make Public' to have a select + confirm form
#12555 merged
Jul 12, 2025 -
UI - Model Hub Page - minor fixes + improvements (+ Make Model Hub OSS 🚀)
#12553 merged
Jul 12, 2025 -
[MCP Gateway] access group fixes on UI for keys and teams
#12556 merged
Jul 12, 2025 -
[CI/CD fix] test_redis_caching_multiple_namespaces
#12552 merged
Jul 12, 2025 -
Integration: Bytez as a model provider
#12121 merged
Jul 12, 2025 -
Fix e2e test
#12549 merged
Jul 12, 2025 -
[MCP Gateway] UI headers groups example on connect tab
#12550 merged
Jul 12, 2025 -
[MCP Gateway] Allow mcp access groups on test key and tool calls
#12529 merged
Jul 12, 2025 -
[Bug Fix] xai/ translation fix - ensure finish_reason includes tool calls when xai responses with tool calls
#12545 merged
Jul 12, 2025 -
chore: Update Vertex AI Model Garden LiteLLM integration tutorial
#12428 merged
Jul 12, 2025 -
Align Show Password with Checkbox
#12538 merged
Jul 12, 2025 -
Fix e2e test
#12544 merged
Jul 12, 2025 -
Consistent layout for Create and Back buttons on all the pages
#12542 merged
Jul 12, 2025 -
Team Members - reset budget, if duration set + Prometheus - support tag based metrics
#12534 merged
Jul 12, 2025 -
🐛 Remove deprecated pydantic class Config
#12528 merged
Jul 12, 2025 -
docs: Update github.md
#12509 merged
Jul 12, 2025 -
[MCP Gateway] Allow using stdio MCPs with LiteLLM
#12530 merged
Jul 12, 2025 -
[MCP Gateway] UI Quality check fixes
#12521 merged
Jul 12, 2025 -
[MCP Gateway] access group UI object permission fix
#12523 merged
Jul 12, 2025 -
[Bug Fix] - QA for MCP Gateway - show the cost config on the root of MCP Settings
#12526 merged
Jul 12, 2025 -
[Security Fix] - Dont show pure JWT in "Logs" page on UI
#12524 merged
Jul 11, 2025 -
[MCP Gateway] QA - MCP Tool Testing Playground
#12520 merged
Jul 11, 2025 -
Litellm mcp groups UI
#12522 merged
Jul 11, 2025 -
[Feat] - New guardrail - OpenAI Moderations API
#12519 merged
Jul 11, 2025 -
Add validation for MCP server name
#12515 merged
Jul 11, 2025 -
Add `Build and push litellm-non_root` to `docker-hub-deploy` CICD workflow
#12413 merged
Jul 11, 2025 -
Litellm mcp access group
#12514 merged
Jul 11, 2025 -
[Enterprise] Support tag based mode for guardrails
#12508 merged
Jul 11, 2025 -
Litellm mcp access group on UI
#12470 merged
Jul 11, 2025 -
Guardrails AI - pre-call + logging only guardrail (pii detection/competitor names) support
#12506 merged
Jul 11, 2025 -
Fix tool call handling in Anthropic pass-through adapter
#12473 merged
Jul 11, 2025 -
fix bedrock cost calculation for cached tokens
#12488 merged
Jul 11, 2025
79 Pull requests opened by 51 people
-
[Feature: Added the support to add api_url in custom_guardrail]
#12512 opened
Jul 11, 2025 -
Fix: Handle missing test runner script in LLM translation workflow
#12516 opened
Jul 11, 2025 -
[Draft PR] Allow setting disabled callbacks on UI
#12527 opened
Jul 11, 2025 -
Improved Provider and Model Name Handling
#12536 opened
Jul 12, 2025 -
Added json_schema to optional parameters to include a json schema for structured responses
#12572 opened
Jul 14, 2025 -
Show failed token on key deletion
#12577 opened
Jul 14, 2025 -
[Bug]: LiteLLM logs certain error messages to stdout instead of stderr
#12580 opened
Jul 14, 2025 -
[Docs] Update Hugging Face documentation
#12582 opened
Jul 14, 2025 -
Restrict Team settings on Add Model page
#12617 opened
Jul 15, 2025 -
[WIP] chore: lazy load the model costs
#12633 opened
Jul 16, 2025 -
fix(embeddings): Avoid check for openai if the api base is not the official url
#12637 opened
Jul 16, 2025 -
fix: ensure Gemini calls include a user message (fixes #9733)
#12651 opened
Jul 16, 2025 -
Make a prompt_id optional for custom_prompt:get_chat_completion_prompt
#12656 opened
Jul 16, 2025 -
Convert <think> to reasoning_content
#12692 opened
Jul 17, 2025 -
Fix Cohere embedding error when base64 encoding_format is requested
#12696 opened
Jul 17, 2025 -
fix(bedrock-converse): cache control not applied to messages with assistant or tool role
#12697 opened
Jul 17, 2025 -
ci(lint): black not running with check
#12698 opened
Jul 17, 2025 -
fix(model_management_endpoints.clear_cache): only clear database models from cache on model update
#12709 opened
Jul 17, 2025 -
Fix internal users table overflow
#12736 opened
Jul 18, 2025 -
Add Vertex AI supervised fine-tuning and online prediction integration
#12758 opened
Jul 19, 2025 -
Fix RuntimeWarning: coroutine was never awaited in async_client_cleanup
#12759 opened
Jul 19, 2025 -
feat: Built-in MCP Server Integration - Server-to-Server MCP with Client Abstraction
#12790 opened
Jul 20, 2025 -
#12794: Fix the team budget max limitation bug
#12796 opened
Jul 21, 2025 -
Fix additional anyOf corner cases for Vertex AI Gemini tool calls - issue #11164
#12797 opened
Jul 21, 2025 -
#12800: Fix the missing model_group information in …
#12801 opened
Jul 21, 2025 -
Feature/vertex ai fine tuning
#12835 opened
Jul 21, 2025 -
[QA] Allow viewing redacted standard callback dynamic params
#12853 opened
Jul 22, 2025 -
fix: make gemini and openai responses return reasoning by default
#12865 opened
Jul 22, 2025 -
fix: streaming finish_reason returns tool_calls for o1-pro and Cohere models
#12866 opened
Jul 22, 2025 -
Remove vector store methods from global scope
#12885 opened
Jul 23, 2025 -
feat: Add 'All Org Models' option for API key model access
#12907 opened
Jul 23, 2025 -
build(deps-dev): bump form-data from 4.0.1 to 4.0.4 in /tests/proxy_admin_ui_tests/ui_unit_tests
#12914 opened
Jul 23, 2025 -
Move Team Selection for a Model to Advanced Settings
#12927 opened
Jul 24, 2025 -
[QA] Fixes for using Auto Router + UI fixes
#12983 opened
Jul 25, 2025 -
honor OLLAMA_API_KEY for ollama_chat
#12984 opened
Jul 25, 2025 -
feat: Add Tracing Support For Anthropic/Claude Code With Arize/Phoenix Integration
#12987 opened
Jul 25, 2025 -
fix(proxy): fix GCP Model Armor guardrail detection and circular reference issue
#12991 opened
Jul 25, 2025 -
Heroku llms
#12992 opened
Jul 25, 2025 -
fix: enable metadata parameter for OpenAI API calls
#12999 opened
Jul 25, 2025 -
Fix Add Model
#13010 opened
Jul 26, 2025 -
Add OpenRouter Qwen models
#13019 opened
Jul 26, 2025 -
feat: add streaming tool call filtering
#13021 opened
Jul 27, 2025 -
Fix div by zero
#13025 opened
Jul 27, 2025 -
fix: Add custom httpx.Client support
#13054 opened
Jul 28, 2025 -
Update Railway deploy links and new guide
#13091 opened
Jul 29, 2025 -
Add Inference.net provider: docs update, provider config, and test suite
#13104 opened
Jul 29, 2025 -
Optionally enable GenAI telemetry following semantic conventions (#1…
#13105 opened
Jul 29, 2025 -
change openai api us endpoint
#13123 opened
Jul 30, 2025 -
feat(helm): Enhance Helm chart for ArgoCD integration and external resources
#13125 opened
Jul 30, 2025 -
Add Vercel AI Gateway provider
#13144 opened
Jul 30, 2025 -
Team Member Permissions Page - Access Column Changes
#13145 opened
Jul 30, 2025 -
[Bug]: Set user from token user_id for OpenMeter integration
#13152 opened
Jul 30, 2025 -
Fix incorrect message index handling on assistant message merging.
#13153 opened
Jul 30, 2025 -
Update Pangea Guardrail to support new AIDR endpoint
#13160 opened
Jul 31, 2025 -
feat(bedrock): Add S3 file upload and batch processing integration to llm_http_handler
#13167 opened
Jul 31, 2025 -
update: remove +1 from comp in rate limiter
#13176 opened
Jul 31, 2025 -
Fix: Handle missing 'choices' field in Azure OpenAI responses
#13201 opened
Aug 1, 2025 -
Add OAuth2 SSO endpoint for external application integration
#13227 opened
Aug 2, 2025 -
Ensure that `function_call_prompt` extends system messages following its current schema
#13243 opened
Aug 3, 2025 -
fix: improve Gemini API key masking in debug logs
#13272 opened
Aug 4, 2025 -
🆕 [FEATURE]: Add WandB by Coreweave Inference Endpoints as a hub.
#13290 opened
Aug 5, 2025 -
Feat/sambanova embeddings
#13308 opened
Aug 5, 2025 -
Fix/gcs cache docs missing for proxy mode
#13328 opened
Aug 6, 2025 -
Support Gemini CLI as provider of LiteLLM
#13331 opened
Aug 6, 2025 -
[Feat] Add Streaming support for bedrock gpt-oss model family
#13346 opened
Aug 6, 2025 -
feat: add parasail provider
#13349 opened
Aug 6, 2025 -
Fix unclosed aiohttp client session warnings during concurrent requests
#13371 opened
Aug 7, 2025 -
Fix token_counter with special token input
#13374 opened
Aug 7, 2025 -
Fix Ollama GPT-OSS streaming with 'thinking' field
#13375 opened
Aug 7, 2025 -
Enhance container logging to write log files in both the usual format and JSON format
#13394 opened
Aug 7, 2025 -
Logging Callback - Langfuse Host param added
#13409 opened
Aug 8, 2025 -
Make sure that no debug information is leaked by router through errors in Cooldown list
#13413 opened
Aug 8, 2025 -
Allow unsetting TPM and RPM - Teams Settings
#13430 opened
Aug 8, 2025 -
Display Error from Backend on the UI - Keys Page
#13435 opened
Aug 8, 2025 -
Fix OCI streaming
#13437 opened
Aug 8, 2025 -
feat(usage): aggregated tag daily activity endpoint + UI fallback; MCP optional; tests included
#13446 opened
Aug 8, 2025 -
feat: add CometAPI provider support with chat completions and streaming
#13458 opened
Aug 9, 2025 -
[LLM Translation] Fix Realtime API endpoint for no intent
#13476 opened
Aug 9, 2025
435 Issues closed by 39 people
-
[Bug]: sync with redis for budget / spend happens too infrequently
#6288 closed
Aug 11, 2025 -
[Bug]: MonthlyGlobalSpend doesn't exist, Last30dKeysBySpend relation doesn't exist
#6419 closed
Aug 11, 2025 -
[Bug]: Does x-ai not support the response_format parameter?
#6610 closed
Aug 11, 2025 -
[Bug]: Azure ad token provider does not work
#6790 closed
Aug 11, 2025 -
[Bug]: Anthropic usage prompt cache details missing from logging callbacks when streaming
#7790 closed
Aug 11, 2025 -
[Bug]: NotFoundError (404) with Custom API Base for Gemini
#8772 closed
Aug 11, 2025 -
Self-Hosted DeepSeek Model used in OpenAI-Codex - Endpoint Issue
#10309 closed
Aug 11, 2025 -
Critical vulnerabilities identified by PrismaCloud
#10513 closed
Aug 11, 2025 -
[Bug]: Somehow optional column is not working in function calling for o4-mini
#10552 closed
Aug 11, 2025 -
Getting 'Exception' object has no attribute 'request' in every step of my agent
#10560 closed
Aug 11, 2025 -
[Bug]: Small floats in scientific notation get interpreted as strings
#10564 closed
Aug 11, 2025 -
[Question] Guardrails not working as expected
#10565 closed
Aug 11, 2025 -
[Feature]: `babbage:2023-07-21-v2` in cost tracking
#10571 closed
Aug 11, 2025 -
[Bug]: `litellm>=1.65.5` not propagating `model` during streaming
#10572 closed
Aug 11, 2025 -
[Bug]: r1 metadata incorrectly marks "supports_function_calling":true,"supports_tool_choice":true
#10574 closed
Aug 11, 2025 -
[Bug]: openai does not support parameters: ['reasoning_effort'], for model=gpt-5
#13402 closed
Aug 10, 2025 -
[Bug]: Open WebUI auto generated chat titles prevented by Emoji in model name
#13481 closed
Aug 10, 2025 -
[Bug]: Bedrock PII Masking issue
#13180 closed
Aug 10, 2025 -
[Bug]: Failure in fallback to gpt-4o for gemini requests with image url
#9816 closed
Aug 10, 2025 -
[Bug]: gpt-5 - add support for new OpenAI params
#13391 closed
Aug 9, 2025 -
[Bug]: JWTs access not working with model groups
#13453 closed
Aug 9, 2025 -
[Bug]: MCP Tools are not persisted
#13461 closed
Aug 9, 2025 -
[Feature]: Proxy - Allow a button / periodic trigger for reloading pricing data
#13383 closed
Aug 9, 2025 -
[Feature]: Support for Chat Completions and Dynamic Guardrails in Swagger
#13457 closed
Aug 9, 2025 -
Comet: LiteLLM.Info
#13467 closed
Aug 9, 2025 -
[Bug]: `temperature` not dropped in proxy
#13327 closed
Aug 9, 2025 -
[Bug]: DBRX - Signature Error with OSS GPT models
#13304 closed
Aug 9, 2025 -
[Bug]: Internal Server Error with GET request for /routes litellm 1.74.12 and several other releases
#13184 closed
Aug 9, 2025 -
[Bug]: Responses API failed if input containing ResponseReasoningItem
#13420 closed
Aug 9, 2025 -
[Bug]: Generic SSO send user to /sso/key/generate
#13335 closed
Aug 9, 2025 -
[Bug]: Impossible to choose models in Team
#13414 closed
Aug 9, 2025 -
[Feature]: Support `cerebras/gpt-oss` and `openrouter/gpt-oss`
#13428 closed
Aug 9, 2025 -
[Feature]: Support for IAM role credentials for scaling Bedrock usage to multiple accounts
#13450 closed
Aug 9, 2025 -
[Feature]: Support for GPT-OSS models from litellm for faster inference with Cerebras stack
#13451 closed
Aug 9, 2025 -
[Feature]: Support for IAM role credentials for scaling Bedrock usage to multiple accounts
#13449 closed
Aug 9, 2025 -
[Feature]: Support for IAM role credentials for scaling Bedrock usage to multiple accounts
#13448 closed
Aug 9, 2025 -
[Feature]: Integrating user information generated by LiteLLM with Langfuse
#7238 closed
Aug 9, 2025 -
[Bug]: drop_params not working for (image only?) vertex_ai generation
#9936 closed
Aug 9, 2025 -
[Feature]: Slack message redaction
#10233 closed
Aug 9, 2025 -
[Bug]: async more than 2000 not work
#10526 closed
Aug 9, 2025 -
[Bug]: Incorrect GPT-5 Input Token Limit
#13439 closed
Aug 8, 2025 -
[Bug]: [since v1.75.2] Error creating standard logging object - can't register atexit after shutdown
#13424 closed
Aug 8, 2025 -
[Bug]: After removing a user from a team via the API not able to add them back to that team
#13422 closed
Aug 8, 2025 -
[Bug]: Empty list of MCP tools returned on streamable connection
#13298 closed
Aug 8, 2025 -
[Bug]: OpenAI gpt-5 responses API caps streaming responses
#13393 closed
Aug 8, 2025 -
[Bug]: OpenAI models health check fails
#13248 closed
Aug 8, 2025 -
[Bug]: files created without target_model_names are not tracked by batch cost scheduler
#13364 closed
Aug 8, 2025 -
[Bug]: 'Prisma' object has no attribute 'litellm_prompttable'
#13404 closed
Aug 8, 2025 -
[Bug]: Job "update_spend" raised an exception
#13204 closed
Aug 8, 2025 -
[Bug]: Azure gpt-5 response error
#13398 closed
Aug 8, 2025 -
[Bug]: Azure OpenAI gpt-5 Temperature Value
#13397 closed
Aug 8, 2025 -
[Feature]: Poll ollama for new endpoints
#979 closed
Aug 8, 2025 -
[Bug]: Installing litellm[proxy] failed uvloop does not support Windows
#7731 closed
Aug 8, 2025 -
[Bug]: Improperly handled exception (misassumed exception has request body)
#10475 closed
Aug 8, 2025 -
[Feature]: putting status code in `litellm.APIError.__str__`
#10511 closed
Aug 8, 2025 -
[Bug]: OpenAI GPT-5 series does not support "max_tokens" parameter
#13381 closed
Aug 7, 2025 -
[Bug]: Migration using non-root image failed
#13282 closed
Aug 7, 2025 -
[Bug]: Bedrock Guardrails - Sensitive Data Leaked to Logs When Using SensitiveInformationPolicyConfig
#12152 closed
Aug 7, 2025 -
[Bug]: Multi-instance rate limiting not working even in version 1.74.9
#13202 closed
Aug 7, 2025 -
[Bug]: openai OSS through Groq is not mapped in litellm
#13351 closed
Aug 7, 2025 -
[Bug]: Create, Search Vector Store fail
#13284 closed
Aug 7, 2025 -
[Bug]: Total tokens do not count cache create input tokens (LiteLLM Proxy)
#13207 closed
Aug 7, 2025 -
cleanup - migrate ollama calls to `ollama_chat`
#5048 closed
Aug 7, 2025 -
[Bug]: x-litellm-cache-key header not being returned on cache hit
#8570 closed
Aug 7, 2025 -
[Bug]: Error when testing the connection
#10429 closed
Aug 7, 2025 -
[Bug]: stream handler completion throwing exception with watsonx
#10457 closed
Aug 7, 2025 -
[Feature]: Allow specifying deployment id on passthrough
#10466 closed
Aug 7, 2025 -
Submitting a file is not working due to
#10472 closed
Aug 7, 2025 -
[Bug]: LiteLLM raises 500 when create/update a team member budget
#10477 closed
Aug 7, 2025 -
[Feature]: Improved Handling of Langfuse Trace Metadata from Request Headers
#10480 closed
Aug 7, 2025 -
[Feature]: Cache tokens for vertex-ai/gemini models!
#10481 closed
Aug 7, 2025 -
[Bug]: Editable install fails with pip<21.3
#10486 closed
Aug 7, 2025 -
[Bug]: Failed listing MCP tools of a server as a non-admin user
#13341 closed
Aug 6, 2025 -
[Bug]: LiteLLM_MCPServerTable.server_name does not exist (missing migration?)
#13288 closed
Aug 6, 2025 -
[Bug]: Failed to get available MCP tools - MCP Auth
#13199 closed
Aug 6, 2025 -
[Bug]: Tool-Calling of Anthropic via Bedrock not working: Expected 'id' to be a string
#13124 closed
Aug 6, 2025 -
[Bug]: tool_calls coming back as null while they should be Array
#13055 closed
Aug 6, 2025 -
[Bug]: Listing models should work even when end-user budget is exceeded
#13286 closed
Aug 6, 2025 -
[Feature]: Return x-litellm-response-cost header when streaming with include_usage: true
#12689 closed
Aug 6, 2025 -
[Bug]: Mimetype Resolution Error in Bedrock Document Understanding
#12260 closed
Aug 6, 2025 -
[Feature]: `function_to_dict` supporting type unions
#4249 closed
Aug 6, 2025 -
[Feature]: `function_to_dict` supporting defaulted arguments
#4250 closed
Aug 6, 2025 -
[Bug]: inability to use `DeploymentTypedDict` in Pydantic `TypeAdapter` with Python<3.12
#5664 closed
Aug 6, 2025 -
[Bug]: Router not respecting TPM limits in concurrent async calls
#5783 closed
Aug 6, 2025 -
[Feature]: convenience `Enum` for `tool_choice`
#6091 closed
Aug 6, 2025 -
[Bug]: `litellm.text_completion` not respecting `model_list`
#6157 closed
Aug 6, 2025 -
[Feature]: automated nesting when using litellm sdk within langfuse observe() decorated function
#8423 closed
Aug 6, 2025 -
Google Authentication Issue
#8424 closed
Aug 6, 2025 -
API Keys not displaying after creation
#8446 closed
Aug 6, 2025 -
[Feature]: Add Qwen2.5-3B-Instrcut
#10437 closed
Aug 6, 2025 -
[Bug]: Azure Audio Transcription & DALL-E (Access denied)
#10440 closed
Aug 6, 2025 -
[Bug]: Random problems with docker images using OpenSSL 3.5
#10444 closed
Aug 6, 2025 -
[Bug]: gemini request body is not logged
#10449 closed
Aug 6, 2025 -
[Feature]: Add "claude-opus-4-1-20250805" in "model_prices_and_context_window.json"
#13305 closed
Aug 5, 2025 -
[Bug]: Claude Code count_tokens API is not implemented in the LiteLLM proxy.
#13252 closed
Aug 5, 2025 -
[Bug]: Tool calling broken from expecting "keys" on OpenAITool (v1.74.9)
#13064 closed
Aug 5, 2025 -
[Bug]: api request works even if DISABLE_LLM_API_ENDPOINTS=true
#13095 closed
Aug 5, 2025 -
[Feature]: (litellm proxy) support /v1/models/{model_id} retrieval
#13128 closed
Aug 5, 2025 -
[Bug]: Enforcement of Admin-Only Route Not Working
#13127 closed
Aug 5, 2025 -
[Feature]: `litellm --version` not requiring `proxy` extra
#7975 closed
Aug 5, 2025 -
[Feature]: Improve Lago integration to better support token based usage billing
#8243 closed
Aug 5, 2025 -
[Feature]: adding `ollama/llama3.2` to cost tracking
#9644 closed
Aug 5, 2025 -
[Bug]: when a new user logs in to the console, there are no models to select
#9678 closed
Aug 5, 2025 -
litellm.BadRequestError: OpenAIException
#10083 closed
Aug 5, 2025 -
[Feature]: Assign Users team role at account creation
#10109 closed
Aug 5, 2025 -
[Bug]: Litellm proxy, return empty completion content but rejects to fail
#10144 closed
Aug 5, 2025 -
[Bug]: Cost stays zero in the UI and nothing in response headers for ollama custom models
#10155 closed
Aug 5, 2025 -
[Bug]: Future attached to a different loop in DualCache.async_batch_get_cache()
#10376 closed
Aug 5, 2025 -
[Bug]: dashboard can't add vllm provider model
#10400 closed
Aug 5, 2025 -
Deepseek connection error -- from wren
#10403 closed
Aug 5, 2025 -
[Bug]: Gemini response_format list of unions fails with LiteLLM
#10405 closed
Aug 5, 2025 -
[Bug]: scheduler keeps creating duplicated pass through route
#10408 closed
Aug 5, 2025 -
[Bug]: litellm --health outputs auth_error even the LITELLM_MASTER_KEY envvar is set
#10412 closed
Aug 5, 2025 -
Fix "gpt-4o-mini-2024-07-18" entry in "model_prices_and_context_window.json"
#12913 closed
Aug 5, 2025 -
[Bug]: Proxy server responses API throws "Invalid content type: <class 'NoneType'>"
#12670 closed
Aug 4, 2025 -
[Bug]: Gemini CLI JSON parse error unexpected character "`"
#12496 closed
Aug 4, 2025 -
[Bug]: <meta> tag in CLAUDE.md causes 403 when using LiteLLM.
#12421 closed
Aug 4, 2025 -
[Bug]: Add service_tier support for /responses API
#13257 closed
Aug 4, 2025 -
[Bug]: Versions after 1.73.2 do not automatically route vendors
#12531 closed
Aug 4, 2025 -
[Bug]: Missing 'voyage' and 'jina' providers on UI
#12574 closed
Aug 4, 2025 -
Organization: is it an enterprise feature?
#12727 closed
Aug 4, 2025 -
[Bug]: Deleting a team via UI does not warn or block when related keys exist
#12947 closed
Aug 4, 2025 -
[Bug]: Usage.completion_tokens_details.text_tokens isn't serialized
#13223 closed
Aug 4, 2025 -
Add "bge-reranker-v2-m3" in "model_prices_and_context_window.json"
#13121 closed
Aug 4, 2025 -
[Bug]: MCP server on local docker network is not allowed
#13132 closed
Aug 4, 2025 -
[Feature]: Set fallback model for database model in UI
#13253 closed
Aug 4, 2025 -
[Bug]: Unable to Remove Datadog Active Logging Callback Configuration
#10375 closed
Aug 4, 2025 -
Issue with LiteLLM
#10377 closed
Aug 4, 2025 -
[Bug]: How to get Gemini 2.5 cache_read and reasoning_token count while using acompletion with Stream?
#10387 closed
Aug 4, 2025 -
[Bug]: Can't seem to get any Gemini example to work?
#13241 closed
Aug 3, 2025 -
[Bug]: ModuleNotFoundError: No module named 'enterprise' When Using litellm==1.48.1 in Google Colab
#10354 closed
Aug 3, 2025 -
[Feature]: Support multiple headers inside litellm_key_header_name
#10355 closed
Aug 3, 2025 -
[Feature]: Support X-Forwarded-Authorization as well as Authorization
#10356 closed
Aug 3, 2025 -
[Bug]: missing enterprise module
#10361 closed
Aug 3, 2025 -
[Bug]: ollama_chat not working as expected
#10362 closed
Aug 3, 2025 -
OpenAI Rate Limit error on my very first call
#10365 closed
Aug 3, 2025 -
[Bug]: Inconsistent behavior with "from google.adk.models.lite_llm import LiteLlm"
#10369 closed
Aug 3, 2025 -
[Bug]: Unable to delete Active Logging Callbacks (success_callback issue)
#10371 closed
Aug 3, 2025 -
[Bug]: 1.174.12-nightly Auth issue with HTTPS, 305 Errors
#13211 closed
Aug 2, 2025 -
[Feature]: Add GitHub Copilot as model provider
#6564 closed
Aug 2, 2025 -
ratelimit
#13214 closed
Aug 2, 2025 -
It seems that this PR changes the behavior of `/key/(un)?block`
#13136 closed
Aug 2, 2025 -
[Bug]: Tools streaming broken with ollama_chat
#6135 closed
Aug 2, 2025 -
[Feature]: Support Edition methods for Image Generation Models
#6772 closed
Aug 2, 2025 -
[Bug]: Gemini tool call result issues
#13169 closed
Aug 1, 2025 -
[Bug]: UI is broken starting from v1.74.9.rc.1
#13200 closed
Aug 1, 2025 -
[Bug]: Redis Cluster mode is not support and other issues
#8159 closed
Aug 1, 2025 -
[Bug]: litellm.APIError: Error building chunks for logging/streaming usage calculation
#9983 closed
Aug 1, 2025 -
[Bug]: OpenRouter Grok Model Exception In Proxy Method
#10303 closed
Aug 1, 2025 -
[Feature]: support Streaming STT from fireworks
#10304 closed
Aug 1, 2025 -
[Bug]: incorrect prompt_tokens
#10311 closed
Aug 1, 2025 -
[Feature]: support service_tier for o3 and o4-mini
#10307 closed
Aug 1, 2025 -
Fix "together-ai-21.1b-41b" entry in "model_prices_and_context_window.json"
#10315 closed
Aug 1, 2025 -
[Bug]: Image Model custom cost computation failing
#10316 closed
Aug 1, 2025 -
[Bug]: Incorrect cleanup of DB connections when using AWS IAM authentication
#13120 closed
Jul 31, 2025 -
[Bug]: Gemini cli not able to access file system to write or create files with LiteLLM
#12609 closed
Jul 31, 2025 -
[Bug]: Cache control injection points for Anthropic/Bedrock
#10226 closed
Jul 31, 2025 -
[Feature]: Supporting 'thinking'/'reasoning content' output for models from OpenAI
#13183 closed
Jul 31, 2025 -
[Bug]: Usage in UI not working for large data
#13182 closed
Jul 31, 2025 -
[Bug]: HuggingfaceException - {"error":"Template error: template not found"}
#12967 closed
Jul 31, 2025 -
[Bug]: Integration with Langfuse >= 3.0.0 is broken
#13137 closed
Jul 31, 2025 -
Grok AI (xAI) support
#3233 closed
Jul 31, 2025 -
[Bug]: cannot import name 'get_valid_models' from 'litellm'
#9755 closed
Jul 31, 2025 -
[Feature]: Ability to override huggingface endpoint
#10230 closed
Jul 31, 2025 -
[Bug]: gemini-2.5-pro-preview-03-25 is incorrectly tagged with supports_reasoning: false
#10254 closed
Jul 31, 2025 -
[Bug]: fix caching soft_budget
#10265 closed
Jul 31, 2025 -
[Feature]: Model creation should support oauth2 support to be enterprise friendly.
#10268 closed
Jul 31, 2025 -
[Feature]: DB Index Selection for Redis Sentinel
#10276 closed
Jul 31, 2025 -
[Feature]: Support Passing Lakera Project ID
#10277 closed
Jul 31, 2025 -
[Feature]: Apply Guardrails to specific models
#10279 closed
Jul 31, 2025 -
[Feature]: Support for gpt-4o-mini-tts parameters and cost tracking in Azure
#10285 closed
Jul 31, 2025 -
"litellm.APIConnectionError: 'name'" & "litellm.APIConnectionError: OllamaException"
#10291 closed
Jul 31, 2025 -
[Bug]: Error: Too many pages of data (>10). Please select a smaller date range.
#13083 closed
Jul 30, 2025 -
[Feature]: Add mcp server health check
#13068 closed
Jul 30, 2025 -
[Bug]: API model not available error while generating data
#13041 closed
Jul 30, 2025 -
[Bug]: Entire database wipe when deploying proxy
#13046 closed
Jul 30, 2025 -
[Bug]: /v1 not working for model discovery
#13078 closed
Jul 30, 2025 -
[Bug]: Bedrock Still generate tool call even when no tools are provided
#13053 closed
Jul 30, 2025 -
[Bug]: prisma_client is none
#10032 closed
Jul 30, 2025 -
[Bug]: UI Bug: Model Filter dropdown appears behind model list on Models page
#10217 closed
Jul 30, 2025 -
[Feature]: could you add the llm into project
#10219 closed
Jul 30, 2025 -
Question: Is it possible to change location to Europe for Google Gemini provider?
#10220 closed
Jul 30, 2025 -
[Bug]: Keys Table UI Issues After Creating a Key
#10222 closed
Jul 30, 2025 -
[Feature]: Configuration syntax checker
#10223 closed
Jul 30, 2025 -
[Feature]: Integration of Local Ollama Embedding With LiteLLM
#10225 closed
Jul 30, 2025 -
[Bug]: New Gemini Models Have "0" as the token costs
#10242 closed
Jul 30, 2025 -
[Bug]: openrouter llama-4-scout
#10246 closed
Jul 30, 2025 -
No error message, but error?
#10248 closed
Jul 30, 2025 -
[Bug]: Global usage stopped working
#13086 closed
Jul 29, 2025 -
[Feature]: Support for Computer tool use for via bedrock APIs.
#12884 closed
Jul 29, 2025 -
[Bug]: token counter does not expect prefix
#11791 closed
Jul 29, 2025 -
[Feature]: Add MCP Client Header version
#11966 closed
Jul 29, 2025 -
MCP dependency version is fixed?
#13080 closed
Jul 29, 2025 -
Add "openrouter/grok-4" in "model_prices_and_context_window.json"
#13017 closed
Jul 29, 2025 -
[Bug]: The Gemini Custom API request has an incorrect authorization format
#13074 closed
Jul 29, 2025 -
[Bug]: Error when attempting to use /chat/completions when using Azure's `v1-preview` API Version
#12945 closed
Jul 29, 2025 -
Unable to load gemini 2.0 flash in litellm
#13075 closed
Jul 29, 2025 -
Handling `mycustom/*` by `{provider}/*`
#10357 closed
Jul 29, 2025 -
[Feature]: Support AI Studio Imagen4 models
#13058 closed
Jul 29, 2025 -
[Bug]: No scrollbar in the "MCP Tools" if there are more tools from the MCP servers
#12785 closed
Jul 29, 2025 -
[Feature]: Ollama schema structured output
#7131 closed
Jul 29, 2025 -
[Bug]: Presidio integration failing and making inference return 500
#10137 closed
Jul 29, 2025 -
[Feature]: Compatible with fastapi-users
#10200 closed
Jul 29, 2025 -
Reduce Premium Copilot requests
#12859 closed
Jul 28, 2025 -
[Feature]: add support for vllm rerank by litellm server proxy
#13044 closed
Jul 28, 2025 -
[Bug]: Litellm times out after a couple of tasks
#10180 closed
Jul 28, 2025 -
Error in LLM response
#10182 closed
Jul 28, 2025 -
Integration with watsonX failing
#10183 closed
Jul 28, 2025 -
litellm API connection issue
#10184 closed
Jul 28, 2025 -
[Bug]: Can't Connect to Postgres Database with Client Certificate and Key
#10187 closed
Jul 28, 2025 -
[Bug]: Spurious `role: assistant` for Anthropic Vertex tool call streaming in OpenAI Chat Completions format
#12616 closed
Jul 27, 2025 -
[Feature]: MCP add support for multiple custom headers
#12895 closed
Jul 26, 2025 -
[Bug]: Able to get the response headers from OpenAI but returns None for GCP
#12986 closed
Jul 26, 2025 -
[Bug]: cannot pickle 'coroutine' object (again in v1.74.7)
#12921 closed
Jul 26, 2025 -
[Bug]: Error committing spend updates
#13000 closed
Jul 26, 2025 -
[Feature]: Add design policy document for huge utils.py maintenance
#3345 closed
Jul 26, 2025 -
[Bug]: gemini 2.0 Flash GA is erroring due to context caching not being supported
#8296 closed
Jul 26, 2025 -
[Feature]: Allow creating dynamic url's as passthrough endpoints
#9950 closed
Jul 26, 2025 -
[Bug]: default_team_disabled setting is not work
#10151 closed
Jul 26, 2025 -
Remote Ollama: /api/show call ignores OLLAMA_HOST, requiring a redundant OLLAMA_API_BASE
#10158 closed
Jul 26, 2025 -
Beijing
#10159 closed
Jul 26, 2025 -
[Feature]: Anthropic native Responses API Support
#10160 closed
Jul 26, 2025 -
[Bug]: CVE SQL Injection
#12402 closed
Jul 25, 2025 -
[Bug]: Users not deleted from team
#12971 closed
Jul 25, 2025 -
[Bug]: modifying `retry_policy` in config.yaml does not reflect changes
#12855 closed
Jul 25, 2025 -
[Bug]: User Budget Not Enforced for Virtual Keys Belonging to Teams
#12905 closed
Jul 25, 2025 -
[Feature]: add gemini 2.5 GA models
#11801 closed
Jul 25, 2025 -
[Bug]: ensure cached_token_details is always an int - if set
#12924 closed
Jul 25, 2025 -
[Bug]: chat completions api gives 500 when llm provider is aiohttp_openai
#8708 closed
Jul 25, 2025 -
[Bug]: litellm /v1/files error
#10117 closed
Jul 25, 2025 -
Fix "ollama/llama2:70b" entry in "model_prices_and_context_window.json"
#10128 closed
Jul 25, 2025 -
[Bug]: HTTPX dependency outdated
#10133 closed
Jul 25, 2025 -
[Bug]: LiteLLM Router carries over completion parameters across requests
#10136 closed
Jul 25, 2025 -
[Bug]: Evaluate Multiple LLM Providers with LiteLLM cookbook
#10143 closed
Jul 25, 2025 -
[Bug]: ProxyException Unknown Error Occurred
#10145 closed
Jul 25, 2025 -
LiteLLM Trademark violators/squatters?
#10146 closed
Jul 25, 2025 -
[Feature]: allow for DATABASE_URL being read from a secret instead of username/password/etc.
#10147 closed
Jul 25, 2025 -
issue
#12954 closed
Jul 24, 2025 -
[Bug]: Vertex AI image generation ignores aspect_ratio parameter
#12690 closed
Jul 24, 2025 -
[Bug]: Gemini API 404 Error Due to Incorrect URL Generation
#12427 closed
Jul 24, 2025 -
[Bug]: Docker buildx manifest attestations produces `unknown/unknown` image arch in GHCR
#12412 closed
Jul 24, 2025 -
[Bug]: SSL CERTIFICATE_VERIFY_FAILED issue with Sentry integration
#11180 closed
Jul 24, 2025 -
[Bug] User Header from OpenWebUI Not Appearing in Spend Logs
#12893 closed
Jul 24, 2025 -
Request to release a tagged helm chart version
#10135 closed
Jul 24, 2025 -
Will there be a leak issue if the stream output is terminated early?
#12861 closed
Jul 24, 2025 -
[Feature]: Support assuming an AWS IAM Role for S3 Bucket logging
#10278 closed
Jul 24, 2025 -
[Bug]: AttributeError: 'Choices' object has no attribute 'logprobs'
#12860 closed
Jul 24, 2025 -
Add "llama3.2:1b" in "model_prices_and_context_window.json"
#6170 closed
Jul 24, 2025 -
[Bug]: Windows Compatibility Issue with uvloop
#7783 closed
Jul 24, 2025 -
[Bug]: An empty streaming response from GROQ without any error raised.
#9296 closed
Jul 24, 2025 -
[Bug]:
#9511 closed
Jul 24, 2025 -
[Bug]: Error when using custom model_name in spend/calculate
#10050 closed
Jul 24, 2025 -
[Feature]: Support for AzureAI Image Generation Models
#10057 closed
Jul 24, 2025 -
[Bug]: o4-mini and o3 don't support stop_sequences parameter
#10080 closed
Jul 24, 2025 -
[Feature]: Please support gemini-2.5-pro-exp-03-25 on Google's AI Studio
#10074 closed
Jul 24, 2025 -
[Feature]: Requests Per Day (rpd) proxy setting (useful for Google models)
#10090 closed
Jul 24, 2025 -
[Feature]: Flex Processing cost tracking
#10091 closed
Jul 24, 2025 -
[Feature]: Skip credential check using vertex_ai
#10092 closed
Jul 24, 2025 -
[Bug]: UX: Admin GUI: Page scrolling / mouse behavior can cause settings corruption
#10097 closed
Jul 24, 2025 -
[Bug]: Missing model mapping for eu.anthropic.claude-3-7-sonnet-20250219-v1:0
#10100 closed
Jul 24, 2025 -
[Feature]: Automatically prepend bedrock region
#10105 closed
Jul 24, 2025 -
[Feature]: Add dark mode to UI
#10110 closed
Jul 24, 2025 -
litellm auto-downloads files from github
#12896 closed
Jul 23, 2025 -
[Bug]: Can't edit Default user settings via the UI (500 internal server error, read-only filesystem)
#12857 closed
Jul 23, 2025 -
[Bug]: rm claude 2, 2.1, claude-3-sonnet-20240229 from model_prices_and_context_window.json
#12863 closed
Jul 23, 2025 -
Display user defined model name in usage report
#12887 closed
Jul 23, 2025 -
Failed to use deepseek
#7646 closed
Jul 23, 2025 -
[Feature]: How to batch configure multiple openrouter models in config.yaml for litellm proxy server?
#8358 closed
Jul 23, 2025 -
[Bug]: Gemini tool call "default" errors
#9793 closed
Jul 23, 2025 -
[Bug]: supports_vision function returns false for gemini/gemini-2.5-pro-exp-03-25
#9842 closed
Jul 23, 2025 -
[Feature]: Llama 4 Scout and Maverick API Service vertex_ai
#10016 closed
Jul 23, 2025 -
[Bug]: Organization admin can't do much
#10036 closed
Jul 23, 2025 -
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
#10041 closed
Jul 23, 2025 -
test
#10046 closed
Jul 23, 2025 -
[Feature Request] Smart Model Routing Based on Query Content
#10049 closed
Jul 23, 2025 -
[Bug]: Default Credentials and Internal User Admin Rights differences
#10078 closed
Jul 23, 2025 -
[Feature]: Support Computer Control from Anthropic claude-3-5-sonnet-20240620
#6391 closed
Jul 22, 2025 -
[Feature]: Recraft API - Add support for Image Edits endpoint
#12834 closed
Jul 22, 2025 -
[Feature]: Infra - DB Migration should use `prisma migrate`
#12104 closed
Jul 22, 2025 -
[Bug]: Azure KeyVault not in images
#12846 closed
Jul 22, 2025 -
[Bug]: Multiple HIGH/CRITICAL CVEs in the LiteLLM non-root image
#12165 closed
Jul 22, 2025 -
Followed the automated deployment video exactly, but Redis throws an error at the final step when performed in practice
#12444 closed
Jul 22, 2025 -
[Feature]: Add support for Hyperbolic as an LLM provider
#12014 closed
Jul 22, 2025 -
[Bug]: PII Masking with bedrock
#10971 closed
Jul 22, 2025 -
[Feature]: Add cache_control parameter filtering for OpenRouter provider to prevent 404 errors
#12787 closed
Jul 22, 2025 -
[Bug]: `litellm.acompletion` retrying does not work
#12830 closed
Jul 22, 2025 -
[Bug]: Virtual Key Budget Limit Not Blocking AWS Bedrock Pass-through Requests
#12789 closed
Jul 22, 2025 -
[Bug]: Sonnet 3.5 incorrectly reporting support of `reasoning_effort`
#12833 closed
Jul 22, 2025 -
[Feature]: End User Tracking with Vertex AI.
#12799 closed
Jul 22, 2025 -
[Bug]: IBM Watsonx - invalid tool choice parameter name #9979
#12803 closed
Jul 22, 2025 -
[Bug]: fallbacks on another provider are not working
#12820 closed
Jul 22, 2025 -
I got this message when I start the ditto
#12844 closed
Jul 22, 2025 -
[Feature]: add library stubs to LiteLLM
#1540 closed
Jul 22, 2025 -
[Bug]: When restarting my docker container, no more data or Models
#6634 closed
Jul 22, 2025 -
[Bug]: connects to langfuse even it is not configured
#7732 closed
Jul 22, 2025 -
[Feature]: Support Embeddings in Infinity
#8764 closed
Jul 22, 2025 -
[Bug]: anthropic-beta header not being forwarded on litellm-proxy
#9016 closed
Jul 22, 2025 -
[Bug]: Bedrock guardrails analysing (and blocking) old+system messages
#9882 closed
Jul 22, 2025 -
[Feature]: integrate with default opentelemetry exporter configured by `opentelemetry-instrument`
#9901 closed
Jul 22, 2025 -
[Feature]: config infinity rerank on UI
#10007 closed
Jul 22, 2025 -
[Bug]: Get error 400 when testing the multiagentworkflow example code from the Docs,
#10015 closed
Jul 22, 2025 -
[Feature]: respect http_proxy, https_proxy env variables
#10014 closed
Jul 22, 2025 -
[Bug]: Databricks claims supports_vision=False but actually supports vision
#10017 closed
Jul 22, 2025 -
Add "qwen-vl-plus" in "model_prices_and_context_window.json"
#12398 closed
Jul 21, 2025 -
[Feature]: ReCraft api support
#12822 closed
Jul 21, 2025 -
[Feature]: Group MCP servers into namespaces and serve in different endpoints
#12374 closed
Jul 21, 2025 -
[Bug]: groq/qwen/qwen-qwq-32b is deprecated by groq
#12825 closed
Jul 21, 2025 -
[Bug]: API Endpoint for GitHub Copilot should not be hard-coded
#12726 closed
Jul 21, 2025 -
[Bug]: All mcp servers are getting listed for key generated to a specific team
#12568 closed
Jul 21, 2025 -
[Query] litellm-proxy support for "Mistral Small 3.1" model via Vertex AI provider
#12535 closed
Jul 21, 2025 -
[Bug]: no dropdown list for guardrails when editing a team
#12786 closed
Jul 21, 2025 -
[Bug]: Unable to route from Anthropic's `/v1/messages` to custom handler
#12816 closed
Jul 21, 2025 -
Inconsistent custom_llm_provider in _hidden_params for vertex_ai models
#10181 closed
Jul 21, 2025 -
[Feature]: Allow extending `litellm.custom_provider_map` using `entrypoints`
#7733 closed
Jul 21, 2025 -
[Bug]: DailyTagSpend view missing error
#8117 closed
Jul 21, 2025 -
[Bug]: If a request is received where max_tokens is less than thinking budget
#9001 closed
Jul 21, 2025 -
[Feature]: Batch API Functionality for OpenAI, Anthropic
#9680 closed
Jul 21, 2025 -
[Bug]: no github model is supported
#9740 closed
Jul 21, 2025 -
[Bug]: Unable to update public model name of Azure models.
#9973 closed
Jul 21, 2025 -
[Feature]: Work with cloudflare gateway on google-ai-studio API
#9975 closed
Jul 21, 2025 -
[Bug]: 'Exception' object has no attribute 'request'
#9977 closed
Jul 21, 2025 -
[Bug]: IBM Watsonx - invalid tool choice parameter name
#9979 closed
Jul 21, 2025 -
[Bug]: Incompatibility with Open WebUI (model list shows embedding models; hard-coded owned_by)
#9982 closed
Jul 21, 2025 -
[Bug]: Stream + tools broken with Ollama
#7094 closed
Jul 20, 2025 -
litellm.exceptions.BadRequestError
#9944 closed
Jul 20, 2025 -
Add `openrouter/switchpoint/router` in "model_prices_and_context_window.json"
#12613 closed
Jul 19, 2025 -
[Bug]: Google Cloud Model Armor error
#12757 closed
Jul 19, 2025 -
[Bug]: Azure gpt-4.1 doesn't support json_schema response format
#12705 closed
Jul 19, 2025 -
[Bug]: No grounding metadata on gemini using stream:true
#10237 closed
Jul 19, 2025 -
[Bug]: Response format support for Azure models (> 4.1)
#12708 closed
Jul 19, 2025 -
[Bug]: Error when calling Gemini model
#9519 closed
Jul 19, 2025 -
Potential bugs in python Optimization mode: Use of assert
#9879 closed
Jul 19, 2025 -
[Bug]: Can't edit fallbacks on the UI. Changes not saved.
#9899 closed
Jul 19, 2025 -
[Feature]: support 'auth: true' on passthrough api endpoint
#9951 closed
Jul 19, 2025 -
[Bug]: Sending system messages to GitHub Copilot causes Internal Server Errors
#12724 closed
Jul 18, 2025 -
[Bug]: ZeroDivisionError in lowest_latency.py: float division by zero
#12641 closed
Jul 18, 2025 -
[Bug]: GCP Service Account Credentials not refreshing appropriately when set via env var
#9863 closed
Jul 18, 2025 -
LLM issue while using gpt-4o-mini
#12723 closed
Jul 18, 2025 -
Add "azure_ai/jais-30b-chat" in "model_prices_and_context_window.json"
#12712 closed
Jul 18, 2025 -
[Bug]: Usage issues w/ gemini-cli + VLLM
#12562 closed
Jul 18, 2025 -
[Issue]: The new `rich` dependency hard pin to 13.7.1 makes it hard to work with other libraries
#10673 closed
Jul 18, 2025 -
[Bug]: S3 success-log uploader crashes with “Circular reference detected”
#12731 closed
Jul 18, 2025 -
Add "azure_ai/grok-3" in "model_prices_and_context_window.json"
#12713 closed
Jul 18, 2025 -
Add "azure_ai/grok-3-mini" in "model_prices_and_context_window.json"
#12714 closed
Jul 18, 2025 -
[Bug]: vertex ai deepseek r1 does not work with proxy
#12717 closed
Jul 18, 2025 -
[Bug]: `DeprecationWarning: Use 'content=<...>' to upload raw bytes/text content`
#5986 closed
Jul 18, 2025 -
[Bug]: Inconsistent response_format handling between Fireworks AI models
#7533 closed
Jul 18, 2025 -
[Bug]: Client Hangs as Transcription errors are not passed to client
#9712 closed
Jul 18, 2025 -
[Bug]: /key/info fails on litellm master key
#9861 closed
Jul 18, 2025 -
[Feature]: Context Caching for Vertex AI
#6898 closed
Jul 17, 2025 -
[Bug]: Took too long to import litellm
#6175 closed
Jul 17, 2025 -
[Bug]: #7594 broke typing on `Router.acompletion`
#7641 closed
Jul 17, 2025 -
[Feature]: Add supports_response_schema for deepseek/deepseek-chat
#7951 closed
Jul 17, 2025 -
[Bug]: bedrock guardrails always on
#8239 closed
Jul 17, 2025 -
[Bug]: turn_off_message_logging sometimes not redacting output
#9507 closed
Jul 17, 2025 -
[Bug]: S3 Config Fails When Using Proxy Custom Hooks
#12614 closed
Jul 16, 2025 -
[Feature]: set `modify_params` via environments and drop empty messages
#9946 closed
Jul 16, 2025 -
[Bug]: Custom Adapter in passthrough does not work
#12653 closed
Jul 16, 2025 -
[Bug]: grok-4 in litellm proxy raises: Argument not supported on this model: stop
#12635 closed
Jul 16, 2025 -
[Feature]: Tool Calling support of kimi-k2 via together_ai
#12639 closed
Jul 16, 2025 -
not working
#12638 closed
Jul 16, 2025 -
[Bug]: circular reference
#12634 closed
Jul 16, 2025 -
[Feature]: Add support for OpenAI Deep Research in Bridge for /chat/completion → /responses API
#12105 closed
Jul 16, 2025 -
[Bug]: Bedrock inference profiles not mapped for token counter
#12269 closed
Jul 16, 2025 -
[Bug]: Knowledge Base Call returning error
#11404 closed
Jul 16, 2025 -
[Bug]: remove claude instant 1 & 1.2 from model_prices_and_context_window.json
#12630 closed
Jul 16, 2025 -
[Bug]: Cross account webauthn not working
#12583 closed
Jul 16, 2025 -
[Bug]: LiteLLM fails to keep model provider
#9495 closed
Jul 16, 2025 -
[Bug]: Failures (and preceding prompts) not reported in langfuse
#9846 closed
Jul 16, 2025 -
pr_agent.algo.ai_handlers.litellm_ai_handler:chat_completion:319 - Error during LLM inference: litellm
#9849 closed
Jul 16, 2025 -
[Feature]: UI - Allow filtering keys by key alias
#9866 closed
Jul 16, 2025 -
[Bug]: /mcp missing from the output of /routes
#12608 closed
Jul 15, 2025 -
[Feature]: Custom TTL for Gemini Context Caching
#9810 closed
Jul 15, 2025 -
[Bug]: Structured Outputs on Bedrock Haiku 3.5 breaks often !!
#11751 closed
Jul 15, 2025 -
[Bug]: Bedrock /invoke route doesn't work with claude 4
#12366 closed
Jul 15, 2025 -
[Bug]: enable_json_schema_validation not documented on config page
#12518 closed
Jul 15, 2025 -
[Bug]: Verbose log is enabled by default
#12539 closed
Jul 15, 2025 -
[Bug]: aws_bearer_token does not work in proxy mode
#12537 closed
Jul 15, 2025 -
[Bug]: dashscope/qwen: LLM Provider NOT provided
#12505 closed
Jul 15, 2025 -
[Feature]: Add API v2 support for Cohere
#6980 closed
Jul 15, 2025 -
[Bug]: calling completion with fallbacks multiple times cause `RuntimeError: Event loop is closed`
#9133 closed
Jul 15, 2025 -
[Feature]: Databricks claude-3-7-sonnet Support passing thinking blocks back to claude
#9790 closed
Jul 15, 2025 -
[Bug]: `get_litellm_model_info` looks for `base_model` under the wrong key for Azure deployed models
#9818 closed
Jul 15, 2025 -
[Bug]: Deepinfra setup via UI
#9819 closed
Jul 15, 2025 -
[Feature]: VoyageAI reranking
#9823 closed
Jul 15, 2025 -
[Bug]: Wrong Gemini 2.5 cost calculation
#11156 closed
Jul 14, 2025 -
[Bug]: API key error for new AI21 Jamba 1.7 models
#12517 closed
Jul 14, 2025 -
[Feature]: Add Moonshot AI platform support.
#12547 closed
Jul 14, 2025 -
12
#12565 closed
Jul 14, 2025 -
test
#12567 closed
Jul 14, 2025 -
[Feature]: Error: This model isn't mapped yet. model=us.deepseek.r1-v1:0
#9126 closed
Jul 14, 2025 -
[Bug]: json_schema gets converted to tool calling from azure 2025-01-01-preview versions
#9806 closed
Jul 14, 2025 -
[Feature]: Add support for assistants API cost tracking on /azure passthrough routes
#9613 closed
Jul 13, 2025 -
[Bug]: More efficient healthcheck prompt
#9788 closed
Jul 13, 2025 -
[Bug]: incorrect `tool_result` block transform in Anthropic unified endpoint
#12404 closed
Jul 12, 2025 -
[Bug]: `JSONDecodeError` from Anthropic unified endpoint
#12403 closed
Jul 12, 2025 -
[Bug]: Pydantic Deprecation Warnings
#12025 closed
Jul 12, 2025 -
[Bug]: Support bedrock cost tracking with unknown region
#9306 closed
Jul 12, 2025 -
[Bug]: "litellm.BadRequestError: LLM Provider NOT provided" despite openai/ prefix
#9408 closed
Jul 12, 2025 -
[Bug]: Docker error with litellm:main-v1.65.4-nightly
#9777 closed
Jul 12, 2025 -
Add boolean in API to turn off logs
#12510 closed
Jul 11, 2025 -
[Bug]: Wrong cost calculation for caching with Sonnet 4 from Bedrock
#12258 closed
Jul 11, 2025 -
[Bug]: Tracked AWS Bedrock spend seems incorrect
#12313 closed
Jul 11, 2025
232 Issues opened by 205 people
-
[Bug]: Responses API failed if input containing ResponseReasoningItem
#13484 opened
Aug 10, 2025 -
[Feature]: Ability To define Auth information when creating an MCP Server
#13483 opened
Aug 10, 2025 -
AuthenticationError on macOS M1 with Gemini - Same Config Works on Windows
#13478 opened
Aug 10, 2025 -
[Bug]: Broken openai realtime endpoint
#13471 opened
Aug 9, 2025 -
[Bug]: bug in /health/services?service=langfuse route
#13462 opened
Aug 9, 2025 -
[Bug]: Responses API does not log to Langfuse properly
#13460 opened
Aug 9, 2025 -
[Feature]: Support for Sending Tags with Litellm Python SDK Calls to Proxy
#13455 opened
Aug 9, 2025 -
[Feature]: Support for use_litellm_proxy as a Dynamic Parameter
#13454 opened
Aug 9, 2025 -
[Feature]: Support OpenRouter `response_format`
#13438 opened
Aug 8, 2025 -
[Bug]: Azure OpenAI GPT-5 max_tokens not allowed
#13432 opened
Aug 8, 2025 -
[Bug]: Prisma migrate error v1.75.2
#13426 opened
Aug 8, 2025 -
[Bug]: Using `/v1/chat/completion` with `gpt-5-mini` does not work while other `api`s do work
#13421 opened
Aug 8, 2025 -
[Bug]: OpenAI gpt-5 not showing thinking outputs when using OpenWebUI
#13419 opened
Aug 8, 2025 -
[Feature]: Support for IAM role credentials for scaling Bedrock usage to multiple accounts
#13417 opened
Aug 8, 2025 -
[Bug]: Add support for Mistral/Magistral new reasoning data structure
#13416 opened
Aug 8, 2025 -
[Bug]: litellm import modifies Python sys.path, causing module import behavior changes
#13415 opened
Aug 8, 2025 -
[Bug]: azure openai GPT-5 APITimeoutError
#13411 opened
Aug 8, 2025 -
[Bug]: Gemini CLI execution failed in headless mode with LiteLLM.
#13410 opened
Aug 8, 2025 -
[Bug]: Gemini CLI can not write or create file with LiteLLM
#13408 opened
Aug 8, 2025 -
[Bug]: request GPT-5 by azure provider get params error
#13407 opened
Aug 8, 2025 -
[Bug]: Issue with decrypting BRAINTRUST_API_KEY
#13406 opened
Aug 8, 2025 -
[Feature]: Support for Priority-Based Request Handling via API Keys
#13405 opened
Aug 8, 2025 -
[Bug]: Error introduced by logo customization
#13403 opened
Aug 8, 2025 -
Feature: Currency Selector for Usage/Budgets/Alerts
#13399 opened
Aug 8, 2025 -
[Bug]: `litellm.APIConnectionError: GeminiException – Server disconnected`
#13388 opened
Aug 7, 2025 -
[Feature]: Support for pass-through OAuth for Anthropic
#13380 opened
Aug 7, 2025 -
[Bug]: StandardLoggingPayload missing important fields across providers/models
#13376 opened
Aug 7, 2025 -
[Bug]: Claude Code with an OpenAI model throws “Error: Streaming fallback triggered”
#13373 opened
Aug 7, 2025 -
[Feature]: Langfuse OTEL at Team/key Level
#13370 opened
Aug 7, 2025 -
[Bug]: /api/v1/openai/models only working with models in config.yaml not with UI
#13366 opened
Aug 7, 2025 -
[Bug]: How to cache the Claude Code system prompt using LiteLLM?
#13365 opened
Aug 7, 2025 -
[Bug]: Strands Agent Integration with Litellm
#13361 opened
Aug 7, 2025 -
[Bug]: The Config Models are lost when I delete or modify a DB Model
#13359 opened
Aug 7, 2025 -
Tracking
#13358 opened
Aug 7, 2025 -
[Bug]: Mistral provider: Empty assistant messages causing errors in subsequent API calls
#13355 opened
Aug 6, 2025 -
[Bug]: 404 error when calling OpenAI compatible models using Responses API
#13352 opened
Aug 6, 2025 -
[Bug]: finish_reason inconsistency in Async + Streaming
#13348 opened
Aug 6, 2025 -
[Feature]: Add Universal Claude Code Logging Support Across All Integrations
#13344 opened
Aug 6, 2025 -
[Bug]: ollama gpt-oss not working
#13340 opened
Aug 6, 2025 -
[Bug]: updating supported_openai_params within 'Model Info' does not seem to update the model.
#13338 opened
Aug 6, 2025 -
[Feature]: Alternative Token Types for Snowflake
#13337 opened
Aug 6, 2025 -
[Bug]: Pass-through endpoints don't forward query parameters for GET requests
#13334 opened
Aug 6, 2025 -
[Bug]: litellm.APIConnectionError: Unable to parse ollama chunk
#13333 opened
Aug 6, 2025 -
The request ID displayed in the UI logs is inaccurate
#13330 opened
Aug 6, 2025 -
[Security Issue]: Data Isolation issue when prompt is returned in error & retry is enabled
#13329 opened
Aug 6, 2025 -
[Bug]: Unexpected behavior while creating missing views
#13326 opened
Aug 6, 2025 -
disk cache not available in default image from docker compose
#13325 opened
Aug 6, 2025 -
[Bug]: /health route put Access tokens in response
#13324 opened
Aug 6, 2025 -
[Bug]: Add New MCP Server : Connection Status shows Connection Failed - Failed to fetch
#13322 opened
Aug 6, 2025 -
[Bug]: LiteLLM Bedrock Guardrail Error Response Incomplete
#13321 opened
Aug 6, 2025 -
[Feature]: Support for routing requests to the Cohere `v2/chat` api when submitted through OpenAI SDK
#13311 opened
Aug 5, 2025 -
[Bug]: Model Alias Resolution Causes Permission Check Failure
#13310 opened
Aug 5, 2025 -
[Feature]: SambaNova embeddings
#13307 opened
Aug 5, 2025 -
[Feature]: Support reasoning in harmony response format for gpt-oss models
#13300 opened
Aug 5, 2025 -
[Bug]: APIConnectionError - negative file descriptor with Claude models
#13283 opened
Aug 5, 2025 -
Changes to the Bitnami Chart and Image Catalog
#13281 opened
Aug 5, 2025 -
[Bug]: Missing spend logs
#13280 opened
Aug 5, 2025 -
[Feature]: Load balancing support for multiple credentials in passthrough endpoint
#13277 opened
Aug 5, 2025 -
[Feature]: Add WandB Inference Endpoint
#13273 opened
Aug 4, 2025 -
[Bug]: Unable to use DATABASE_* environment variables for configuring DB
#13266 opened
Aug 4, 2025 -
[Bug]: No cost on client side for streaming
#13264 opened
Aug 4, 2025 -
[Bug]: Librechat outputs gibberish with LiteLLM
#13263 opened
Aug 4, 2025 -
[Feature]: integrate with LiveKit for voice agent
#13262 opened
Aug 4, 2025 -
[Bug]: Missing Editor-Version Header for IDE Auth in GitHub Copilot provider in Proxy Mode
#13256 opened
Aug 4, 2025 -
[Bug]: Http 500 error when trying to call dall-e-3
#13254 opened
Aug 4, 2025 -
[Bug]: Unclosed aiohttp client session when using acompletion with concurrent requests
#13251 opened
Aug 4, 2025 -
[Bug]: Embedding query returns null values on Redis cache retrieval
#13250 opened
Aug 4, 2025 -
[Bug]: litellm.APIConnectionError: OllamaException - Server disconnected without sending a response.
#13249 opened
Aug 4, 2025 -
[Bug]: LiteLLM not supporting the gpt-35-turbo-instruct text model
#13247 opened
Aug 4, 2025 -
[Bug]: Migration failed
#13246 opened
Aug 4, 2025 -
[Bug]: Incomplete spend tracking for non-streaming Bedrock calls when client disconnects
#13245 opened
Aug 4, 2025 -
[Bug]: No apparent way to delete an Auto Router entry ... or maybe they are not created
#13236 opened
Aug 2, 2025 -
[Bug]: Cohere provider appears to ignore system prompt when user prompt is also present.
#13235 opened
Aug 2, 2025 -
[Feature]: Add fal.ai as an AI provider (support for text, image, and video generation)
#13229 opened
Aug 2, 2025 -
[Bug]: post_call is not triggering async_post_call_success_hook
#13222 opened
Aug 2, 2025 -
[Bug]: OSError: [Errno 24] Too many open files
#13220 opened
Aug 2, 2025 -
[Bug] ResetBudgetJob loads entire tables into memory → OOM & DB connect storm (≈ 250 k keys)
#13210 opened
Aug 1, 2025 -
[Bug]: Gpt-images and dall-e-3 problem with cost
#13209 opened
Aug 1, 2025 -
[Bug]: missing tzdata in docker images
#13197 opened
Aug 1, 2025 -
[Bug]: Model Hub - Invalid models appear when using custom prefixes for wildcard models.
#13190 opened
Jul 31, 2025 -
[Feature]: Track LiteLLM Sessions/Threads/Conversations/Chats also into Opik via the integration
#13179 opened
Jul 31, 2025 -
[Bug]: Failed to create Argilla record
#13177 opened
Jul 31, 2025 -
[Bug]: Docs on prompt-caching incomplete/wrong
#13175 opened
Jul 31, 2025 -
[Bug]: Infinite Logs - After inserting non existing model
#13173 opened
Jul 31, 2025 -
[Bug]: OpenAI_Compatible LLM credentials
#13161 opened
Jul 31, 2025 -
[Bug]: OLLAMA_API_KEY does not work for completion (i.e. in Aider)
#13154 opened
Jul 30, 2025 -
[Bug]: `batch_completion` not working, sometimes?
#13139 opened
Jul 30, 2025 -
[Feature]: Add Input/Output TPM to `Router`
#13138 opened
Jul 30, 2025 -
[Bug]: broken Postgres connection after some time
#13133 opened
Jul 30, 2025 -
[Feature]: support enabling "/responses to /chat/completions Bridge" on openai (llama.cpp) models
#13130 opened
Jul 30, 2025 -
[Bug]: Model Info / Additional properties update creates issue on model list
#13129 opened
Jul 30, 2025 -
[Feature]: Support MCP Server instructions in the gateway
#13119 opened
Jul 30, 2025 -
[Feature]: DeepInfra new reranking Endpoint
#13097 opened
Jul 29, 2025 -
[Bug]: Non-root image error
#13090 opened
Jul 29, 2025 -
[Bug]: Admin UI: Remove Add Model
#13089 opened
Jul 29, 2025 -
[Bug]: Gemini-cli - map the google response schema to openapi format
#13084 opened
Jul 29, 2025 -
[Bug]: Edited `Model Info` fields via UI are not persisted
#13082 opened
Jul 29, 2025 -
Why does OTel tracing require `litellm[proxy]`?
#13081 opened
Jul 29, 2025 -
[Bug]: proxy_admin_viewer Role Missing Access to Essential Routes
#13077 opened
Jul 29, 2025 -
[Bug]: (PROXY) aws_secret_manager_v2 will fail at startup trying to read DATABASE_URL
#13076 opened
Jul 29, 2025 -
[Bug]: Gemini API Key is visible in the logs
#13073 opened
Jul 29, 2025 -
[Feature]: Request for Support: Integration of ZhiPu GLM-4.5 Model
#13059 opened
Jul 28, 2025 -
[Bug]: MCP Cost Management 🤑
#13057 opened
Jul 28, 2025 -
[Bug]: Azure Models using Responses API do not respect auth config
#13056 opened
Jul 28, 2025 -
[Feature]: Vector Store Management Integration (S3 Vectors / Cloudflare Vectorize)
#13052 opened
Jul 28, 2025 -
[Bug]: Team budget is not removed from Team's metadata when deleting a Budget
#13051 opened
Jul 28, 2025 -
[Bug]: Passing a custom httpx.Client
#13049 opened
Jul 28, 2025 -
[Bug]: Cache of provider_specific_fields does not work
#13048 opened
Jul 28, 2025 -
[Feature]: Litellm Proxy support to mimic support for other providers
#13047 opened
Jul 28, 2025 -
Add "qwen/qwen3-14b" in "model_prices_and_context_window.json"
#13043 opened
Jul 28, 2025 -
[Feature]: Support file_search tool type for OpenAI provider
#13042 opened
Jul 28, 2025 -
[Bug][Minor]: Misleading Error Message Where Content-Type not configured on /chat/completions
#13040 opened
Jul 28, 2025 -
[Bug]: thinking can't be disabled for volcengine provider
#13039 opened
Jul 28, 2025 -
[Bug]: User not assigned to team when using /user/new when setting user_email.
#13035 opened
Jul 28, 2025 -
[Bug]: Langfuse Reporting Fails with "Cannot send a request, as the client has been closed"
#13034 opened
Jul 28, 2025 -
[Bug]: List of teams not returned by /user/new
#13032 opened
Jul 28, 2025 -
[Feature]: SSO for self-hosters
#13031 opened
Jul 28, 2025 -
[Bug]: Model list shouldn't allow editing model that has come from the config file (i.e. not database)
#13030 opened
Jul 28, 2025 -
Helm chart in a repo has very old appVersion
#13028 opened
Jul 27, 2025 -
Add "Phi-4-reasoning" in "model_prices_and_context_window.json"
#13026 opened
Jul 27, 2025 -
[Bug]:
#13020 opened
Jul 27, 2025 -
[Bug]: Model Prices Entry for Models Hosted Through VLLM
#13009 opened
Jul 26, 2025 -
ERROR: Exception in ASGI application
#13004 opened
Jul 25, 2025 -
[Bug]: Issue when redacted logs are sent to Langfuse
#13002 opened
Jul 25, 2025 -
[Bug]: LiteLLM SDK Client does not send out metadata upstream
#12997 opened
Jul 25, 2025 -
[Feature]: streaming llm with tool calls - ignore responses with tool calls and only stream final response
#12996 opened
Jul 25, 2025 -
[Bug]: Model health is not showing properly in the dashboard
#12993 opened
Jul 25, 2025 -
[Bug]: OpenRouter thinking tokens not counted
#12982 opened
Jul 25, 2025 -
[Feature]: Support injecting Langfuse prompt with trace metadata during LLM calls
#12981 opened
Jul 25, 2025 -
[Bug]: Invalid Stream Timeout Encoding After Reset in LiteLLM Proxy UI
#12979 opened
Jul 25, 2025 -
[Bug]: Budget Limitations Can Be Bypassed Using AzureOpenAI Library
#12977 opened
Jul 25, 2025 -
Using multiple router objects duplicates callbacks
#12975 opened
Jul 25, 2025 -
[Bug]: AWS Bedrock Claude tool call index incorrect for tool calls with empty arguments
#12973 opened
Jul 25, 2025 -
[Feature]: Support for voyage-context-3 embedding model
#12965 opened
Jul 25, 2025 -
Fix "text-bison" entry in "model_prices_and_context_window.json"
#12963 opened
Jul 25, 2025 -
[Bug]: `make test-unit` doesn't work in a fresh Ubuntu container
#12951 opened
Jul 24, 2025 -
[Bug]: 404 error attempting to use alibaba/dashscope provider
#12943 opened
Jul 24, 2025 -
[Bug]: /responses endpoint does not work with Azure OpenAI LLM
#12938 opened
Jul 24, 2025 -
[Feature]: Update get_valid_models to respect litellm.use_litellm_proxy and auto-set custom_llm_provider
#12937 opened
Jul 24, 2025 -
`ssl_verify=False` is not passed down to OpenAI completion from the `completion` function
#12936 opened
Jul 24, 2025 -
[Bug]: Early return in stream mode prevents CustomStreamWrapper logic from completing
#12935 opened
Jul 24, 2025 -
[Feature]: about mapping of vLLM/hosted_vllm modeles
#12934 opened
Jul 24, 2025 -
AI
#12932 opened
Jul 24, 2025 -
[Bug]: litellm Connection Error with ADK
#12931 opened
Jul 24, 2025 -
[Feature]: Need the ability to set context size (num_ctx) while creating ChatLiteLLM instance
#12930 opened
Jul 24, 2025 -
[Feature]: lite llm support source grpah
#12929 opened
Jul 24, 2025 -
[Bug]: Watsonx provider doesn't support space_id (not the same as deployment_id)
#12904 opened
Jul 23, 2025 -
[Bug]: Loading config from GCS is not working for me
#12903 opened
Jul 23, 2025 -
[Bug]: Not able to control tools part in gemini model
#12902 opened
Jul 23, 2025 -
[Bug]: CachePointBlock in Bedrock converse API
#12900 opened
Jul 23, 2025 -
[Bug]: Images URLs are converted to Base64 for Anthropic provider
#12899 opened
Jul 23, 2025 -
[Bug]: Presidio Guardrail giving litellm type validation error in Litellm Proxy
#12898 opened
Jul 23, 2025 -
Audio transcription request routed to /chat/completions instead of /audio/transcriptions (Azure Whisper)
#12897 opened
Jul 23, 2025 -
[Feature]: Add SiliconFlow to Providers
#12888 opened
Jul 23, 2025 -
[Bug]: shadowing of `litellm.acreate()` in `__init__.py`
#12881 opened
Jul 22, 2025 -
[Bug]: Bad defaults / incorrect jitter logic leads to tiny delays between retries
#12877 opened
Jul 22, 2025 -
[Feature]: Knowledge bases - Allow configuring context size, match score
#12876 opened
Jul 22, 2025 -
[Bug]: LiteLLM_Config table is overwriting newly deployed config
#12875 opened
Jul 22, 2025 -
[Bug]: File descriptors causing parallel async calls to fail
#12872 opened
Jul 22, 2025 -
[Bug]: column LiteLLM_DailyTagSpend.mcp_namespaced_tool_name does not exist after downgrade upgrade
#12869 opened
Jul 22, 2025 -
[Bug]: Pentest security fixes - admin viewer privilege escalation
#12868 opened
Jul 22, 2025 -
[Bug]: o1-pro and Cohere models return "finish_reason": "stop" when using streaming
#12862 opened
Jul 22, 2025 -
[Bug]: Can't add model with team; reports failed to add model: Error: {"detail":"Not Found"}
#12858 opened
Jul 22, 2025 -
[Bug]: Incorrect redirect on login success when SERVER_ROOT_PATH is set
#12856 opened
Jul 22, 2025 -
[Bug]: Invalid/empty API response in fake stream using claude code
#12854 opened
Jul 22, 2025 -
[Bug]: Issues with logging requests with Guardrails to Langsmith
#12819 opened
Jul 21, 2025 -
[Bug]: GCP Model armor always Success
#12818 opened
Jul 21, 2025 -
[Bug]: UI not accounting for Azure Text Completion (azure_text)
#12813 opened
Jul 21, 2025 -
[Bug]: Vulnerability - CWE-343 Predictable Value Range from Previous Values
#12810 opened
Jul 21, 2025 -
[Bug]: Unknown team (litellm-dashboard) generate thousands of Auth error requests
#12806 opened
Jul 21, 2025 -
[Bug]: Metadata/Sessions/Tags does not show up when using Open AI Agents SDK
#12802 opened
Jul 21, 2025 -
[Bug]: model_group information missing in Responses and Image Edit APIs, but present in chat/completions
#12800 opened
Jul 21, 2025 -
[Bug]: Batch requests do not enforce the team's max_budget limit.
#12794 opened
Jul 21, 2025 -
[Bug]: Pip install crashes at start when url_database is defined
#12793 opened
Jul 21, 2025 -
[Feature]: Update Credential Used on Existing model
#12792 opened
Jul 20, 2025 -
[Bug]: MCP Servers issues
#12784 opened
Jul 20, 2025 -
[Feature]: Built-in MCP Server Integration - Server-to-Server MCP with Client Abstraction
#12783 opened
Jul 20, 2025 -
[Bug]: cannot proxy to openai server
#12761 opened
Jul 19, 2025 -
[Bug]: Problem with context id in watsonx.ai integration
#12756 opened
Jul 19, 2025 -
[Feature]: Support MongoDB Vector Store
#12748 opened
Jul 18, 2025 -
[Bug]: Azure OpenAI o3 Models with Different Names Behave Differently
#12744 opened
Jul 18, 2025 -
[Bug]: openai.BadRequestError: Error code: 400 - BedrockException
#12743 opened
Jul 18, 2025 -
[Bug]: See multiple pillars for the same day in usage tab daily spend chart
#12735 opened
Jul 18, 2025 -
[Bug]: after updating a model via `/model/{model_id}/update`, non-database models disappear
#12711 opened
Jul 17, 2025 -
[Bug]: recursion issue with langfuse
#12710 opened
Jul 17, 2025 -
Docs request: expanding Reliability to mention global retry configuration
#12706 opened
Jul 17, 2025 -
[Bug: Anthropic Bedrock Converse]: assistant & tool messages dropping cache points
#12695 opened
Jul 17, 2025 -
[Bug]: Client-side fallback model doesn't work if the model list is presented by wildcard
#12691 opened
Jul 17, 2025 -
[Feature]: Allow for Dynamic Trace Routing from LiteLLM proxy for Arize AI
#12688 opened
Jul 17, 2025 -
[Feature]: Arize Updated Tracing for (Gemini and Anthropic)
#12687 opened
Jul 17, 2025 -
[Bug]: Heavy RAM Usage over time
#12685 opened
Jul 17, 2025 -
Creating Files with Vertex_AI results in Error 500 - 'GCS bucket_name is required'
#12682 opened
Jul 17, 2025 -
[Feature]: Batch Inference support for AWS Bedrock
#12681 opened
Jul 17, 2025 -
[Bug]: Link to documentation fallback for context does not exist
#12680 opened
Jul 17, 2025 -
[Bug]: Tool call fails with KIMI K2
#12679 opened
Jul 17, 2025 -
Tool calling fails with session continuity (previous_response_id) on Gemini models
#12677 opened
Jul 17, 2025 -
[Feature]: Add GuardrailConverseContent support for Bedrock Guardrails to enable selective guarding
#12676 opened
Jul 17, 2025 -
[Bug]: Google AI generateContent endpoints require a different request body shape than the native API
#12671 opened
Jul 17, 2025 -
[Bug]: NonRoot Image - serverRootPath Configuration Not Working for LiteLLM UI Assets
#12665 opened
Jul 16, 2025 -
[Bug]: acompletion with streaming enabled for Groq LLMs is broken
#12660 opened
Jul 16, 2025 -
[Bug]: LiteLLM Proxy Server deployed Ollama models don't work in ReAct agent mode
#12642 opened
Jul 16, 2025 -
[Bug]: Invalid URL constructed for Google Gemini models when using api_base (proxy)
#12611 opened
Jul 15, 2025 -
[Bug]: Wrong Price Calculation
#12605 opened
Jul 15, 2025 -
[Bug]: responses API on groq causes a NoneType on tool triggering
#12602 opened
Jul 15, 2025 -
[Bug]: Permissions bug trying to read Default User Settings
#12598 opened
Jul 15, 2025 -
[Bug]: deepseek/ showing no usage on text completion
#12586 opened
Jul 14, 2025 -
[Feature]: Add Character-Level Prefix Routing for Cache-Aware Load Balancers (SGLang)
#12584 opened
Jul 14, 2025 -
[Bug]: LiteLLM Router Initializes Clients with aiohttp Transport Regardless of Config
#12581 opened
Jul 14, 2025 -
[Feature]: Use priority instead of fallback
#12579 opened
Jul 14, 2025 -
Don't load `.env` file on import
#12576 opened
Jul 14, 2025 -
[Feature]: ElevenLabs text to speech
#12575 opened
Jul 14, 2025 -
[Feature]: Local Text to Speech Models
#12573 opened
Jul 14, 2025 -
[Bug]: Dashscope valid API key AuthenticationError
#12571 opened
Jul 14, 2025 -
[Bug]: Documentation link not found - https://docs.litellm.ai/docs/hosted
#12570 opened
Jul 14, 2025 -
[Bug]: OpenAI-compatible endpoints don't support extra_query
#12569 opened
Jul 14, 2025 -
🐛 Anthropic Tool Calling: Arguments incorrectly serialized to JSON string
#12554 opened
Jul 12, 2025 -
[Bug]: Error code 400 accessing MCP server
#12543 opened
Jul 12, 2025 -
[Bug]: LiteLLM logs certain error messages to stdout instead of stderr
#12525 opened
Jul 11, 2025 -
[Bug]: Tool arguments chunks have the JSON string `"null"` as function name.
#12513 opened
Jul 11, 2025 -
Add a boolean in the API to turn off logs
#12511 opened
Jul 11, 2025
223 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
-
Fix 500 error in `/customer/update` endpoint when updating with `budget_id`
#12438 commented on
Jul 30, 2025 • 5 new comments -
Add support for configurable CORS origins/security headers via ENV params
#11744 commented on
Aug 10, 2025 • 3 new comments -
fix: enable native streaming for Azure o4 models
#12167 commented on
Jul 23, 2025 • 3 new comments -
feat: add Cloudflare Llama-4 and 3.3 multimodel with advanced features + BGE embeddings
#11037 commented on
Aug 9, 2025 • 2 new comments -
Fix duplicate teams in list_team endpoint
#12142 commented on
Jul 19, 2025 • 2 new comments -
feat: add a health_check_voice parameter in model_info
#12416 commented on
Jul 30, 2025 • 2 new comments -
[Draft] Draft cloud zero integration
#12490 commented on
Jul 15, 2025 • 2 new comments -
fix issue with parsing assistant messages
#10917 commented on
Jul 22, 2025 • 1 new comment -
fix: merge team callbacks with config callbacks during request processing
#12172 commented on
Jul 19, 2025 • 1 new comment -
fix(proxy): Enable partial matching for User ID filter in virtual keys page
#12205 commented on
Jul 29, 2025 • 1 new comment -
Add sorting to `models list` command
#10630 commented on
Jul 25, 2025 • 1 new comment -
[Bug]: self-signed certificate error depending on the request
#10717 commented on
Aug 9, 2025 • 0 new comments -
[Bug]: Bulk Invite Users is not working
#10671 commented on
Aug 9, 2025 • 0 new comments -
[Feature]: Allow include block in config to work with S3/GCS buckets
#10632 commented on
Aug 9, 2025 • 0 new comments -
[Bug]: groq/whisper-large-v3 returns 400 BadRequestError with OPENAI_TRANSCRIPTION_PARAMS
#11402 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: UserWarning: Pydantic serializer warnings: PydanticSerializationUnexpectedValue(Expected 9 fields but got 6: ...
#11759 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: Request/Response data is not displaying in the LiteLLM UI interface
#11758 commented on
Aug 8, 2025 • 0 new comments -
[Feature]: Add supported call types for langfuse
#8936 commented on
Jul 19, 2025 • 0 new comments -
[Feature]: Add new Deepseek-v3-0324 model (Fireworks)
#9841 commented on
Aug 9, 2025 • 0 new comments -
[Possible causes of bugs] Mutable default arguments in this project
#9827 commented on
Aug 9, 2025 • 0 new comments -
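The mutable-default-argument issue above (#9827) is a classic Python pitfall; a minimal sketch, unrelated to any specific LiteLLM function, of why it causes state to leak across calls:

```python
# Hypothetical illustration (function names are ours, not LiteLLM's):
# a mutable default is created once at function definition time,
# so every call without an explicit argument shares the same list.
def append_bad(item, bucket=[]):
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    # create a fresh list per call instead of sharing one default
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

assert append_bad("a") == ["a"]
assert append_bad("b") == ["a", "b"]   # surprise: previous call's state persists
assert append_good("a") == ["a"]
assert append_good("b") == ["b"]       # fresh list each call
```

The usual fix, as shown, is a `None` sentinel with the mutable value constructed inside the function body.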
[Feature]: UI - Allow openai-compatible model to be typed instantly
#8396 commented on
Aug 9, 2025 • 0 new comments -
[Bug]: Key alias are unique between all the users
#8328 commented on
Aug 9, 2025 • 0 new comments -
[Bug]: Custom Favicon
#8323 commented on
Aug 9, 2025 • 0 new comments -
Integrating Not Diamond with LiteLLM
#4971 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: UnsupportedParamsError for reasoning_effort with OpenAI o4-mini model
#10108 commented on
Aug 9, 2025 • 0 new comments -
🎅 I WISH LITELLM HAD...
#361 commented on
Aug 9, 2025 • 0 new comments -
Solving the return value format issue during multiple function calls with the LLaMA 3 model.
#4636 commented on
Jul 24, 2025 • 0 new comments -
add Lite llm docker proxy (Gemini ver)
#2574 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Usage, legacy usage and logs empty
#11152 commented on
Aug 9, 2025 • 0 new comments -
[Bug]: WebSocket issues with Open AI Realtime API in the browser
#6825 commented on
Aug 9, 2025 • 0 new comments -
[Bug]: OpenRouter not handling 524 from Google AI Studio
#10487 commented on
Aug 10, 2025 • 0 new comments -
[Bug]: AWS Lambda times out when importing Router
#10018 commented on
Aug 10, 2025 • 0 new comments -
[Bug]: Anthropic messages provider config not found for model: us.anthropic.claude-3-7-sonnet-20250219-v1:0
#10281 commented on
Aug 11, 2025 • 0 new comments -
[Bug]: enforce_user_param - LiteLLM UI throws errors on its own internal requests
#9497 commented on
Aug 11, 2025 • 0 new comments -
[Bug]: Message Redaction on OTEL impacts logging output on s3
#8489 commented on
Aug 11, 2025 • 0 new comments -
[Feature]: allow disabling tokenizer usage on completion requests
#9145 commented on
Aug 11, 2025 • 0 new comments -
[Bug]: <think> opening tag is not emitted in model output when using nvidia_nim/fireworks via LiteLLM Proxy
#10848 commented on
Aug 11, 2025 • 0 new comments -
[Bug]: anthropic extended thinking not working with extended thinking + "output-128k-2025-02-19"
#9020 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Need the ability to select models by access groups.
#10672 commented on
Aug 7, 2025 • 0 new comments -
Rasa model cannot complete training due to the error below
#10669 commented on
Aug 7, 2025 • 0 new comments -
What is the PR procedure for getting fixes in?
#10663 commented on
Aug 7, 2025 • 0 new comments -
Private endpoint support for Azure Open AI
#10655 commented on
Aug 7, 2025 • 0 new comments -
Persistent Cost Tracking Error Despite NO_COST_TRACKING=True
#10647 commented on
Aug 7, 2025 • 0 new comments -
[Feature]: Use cohere v2 endpoint whenever possible
#10289 commented on
Aug 7, 2025 • 0 new comments -
[Bug]: Long Azure OpenAI streaming errors not logged on Langfuse (`incomplete chunked read`)
#4312 commented on
Aug 7, 2025 • 0 new comments -
[Bug]: acompletion() Ignores model_alias_map
#2351 commented on
Aug 7, 2025 • 0 new comments -
[Bug]: Redis cache timeout causes message content to leak into logs
#11157 commented on
Aug 7, 2025 • 0 new comments -
[Feature]: Add Support for REDIS_SSL and REDIS_USERNAME as Environment Variables
#11318 commented on
Aug 7, 2025 • 0 new comments -
[Bug]: OpenAI assistant requests are not available in UI logs
#10695 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: Multi instance rate limit issue with cache namespace.
#10692 commented on
Aug 8, 2025 • 0 new comments -
[Feature]: Please add functionality to display latency of each deployment model in lowest_latency_logger method.
#10689 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: "finish_reason" disappears if logprobs=true and stream=true
#10686 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: VertexGemini audio tokens usage parse error in _calculate_usage
#10684 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: Strange Message on Litellm CLI (Health Check)
#10683 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: JSON schema (without Pydantic) does not conform
#10674 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: /file Unable to use files_settings with Azure config unless env vars are explicitly set
#10561 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: Invalid model name passed in model=None
#9663 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: function_to_dict() no arrays/list types
#9323 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: Issue with Environment Variable Substitution in config.yaml for Azure Model Configuration
#8919 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: Router's async completion don't trigger CustomLogger callbacks
#8842 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: uvicorn dependency version too low
#11484 commented on
Aug 8, 2025 • 0 new comments -
[Bug]: Upgrade boto3
#11542 commented on
Aug 8, 2025 • 0 new comments -
Add Feature 10109 Assign Users Team Role at Account Creation
#10579 commented on
Aug 11, 2025 • 0 new comments -
Fix AzureChatCompletion adding stream_options when stream is False
#10594 commented on
Aug 6, 2025 • 0 new comments -
fix(litellm/caching/caching_handler.py): fix kwargs[litellm_params][p…
#10612 commented on
Aug 8, 2025 • 0 new comments -
Add support for AWS IAM Role authentication for S3 bucket logging
#10631 commented on
Aug 7, 2025 • 0 new comments -
Upgrade langfuse sdk and enable passing generation params thru metada…
#10639 commented on
Aug 7, 2025 • 0 new comments -
Initial commit with project files
#10740 commented on
Aug 10, 2025 • 0 new comments -
[Feat] Add Qdrant Vector Store to supported Vector Stores
#11468 commented on
Jul 30, 2025 • 0 new comments -
fix(test_exceptions.py): move exception tests
#11501 commented on
Jul 21, 2025 • 0 new comments -
Implement GPT-image-1 token-based cost tracking
#11540 commented on
Jul 19, 2025 • 0 new comments -
fix jitter add instead of mult
#11706 commented on
Jul 19, 2025 • 0 new comments -
Add Client-Side Pagination to Models Table
#11714 commented on
Jul 29, 2025 • 0 new comments -
add mistral-large-2411 to model_prices_and_context_window.json
#12028 commented on
Jul 29, 2025 • 0 new comments -
Fix ssl verify string to bool from env
#12052 commented on
Jul 22, 2025 • 0 new comments -
fix(batch): prevent duplicate spend updates on batch retrieve
#12170 commented on
Jul 18, 2025 • 0 new comments -
Fix: Responses API now respects global request_timeout
#12211 commented on
Jul 19, 2025 • 0 new comments -
Updated factory.py
#12236 commented on
Jul 24, 2025 • 0 new comments -
gemini model cost updates
#12248 commented on
Jul 11, 2025 • 0 new comments -
[Feat] Add support for Vertex dedicated endpoint
#12259 commented on
Jul 14, 2025 • 0 new comments -
Fix: correct user_id validation logic in Anthropic…
#12278 commented on
Aug 7, 2025 • 0 new comments -
Update GitHub provider base url and docs
#12357 commented on
Jul 15, 2025 • 0 new comments -
fix(proxy): Fix health check UI display issue (LIT-276)
#12379 commented on
Jul 18, 2025 • 0 new comments -
fix(ui): enable editing of existing logging callback settings (LIT-288)
#12380 commented on
Jul 14, 2025 • 0 new comments -
fix(proxy_cli): handle special characters in database passwords
#12422 commented on
Jul 21, 2025 • 0 new comments -
fix(streaming): ensure last chunk delta contains content empty string
#12423 commented on
Jul 22, 2025 • 0 new comments -
Lasso Security Guardrail: Add v3 API Support
#12452 commented on
Jul 29, 2025 • 0 new comments -
add custom health probes in helm chart
#6851 commented on
Aug 5, 2025 • 0 new comments -
12/15 sync
#7243 commented on
Aug 8, 2025 • 0 new comments -
Set the ollama_models list based on locally available models when litellm starts.
#8116 commented on
Jul 16, 2025 • 0 new comments -
Fix litellm.add_function_to_prompt
#8379 commented on
Jul 23, 2025 • 0 new comments -
feat(helm): add db.secret.enabled key
#8540 commented on
Aug 11, 2025 • 0 new comments -
Fix : When user is deleted, user not removed from the teams he is member of (resolves : #6556)
#8580 commented on
Jul 28, 2025 • 0 new comments -
fix(ui): set label for ViewUserSpend
#8589 commented on
Jul 24, 2025 • 0 new comments -
Dockerfile improvements
#8791 commented on
Jul 24, 2025 • 0 new comments -
[ui] Fix tagsSpendLogsCall
#8793 commented on
Jul 24, 2025 • 0 new comments -
Don't fail for `global_max_parallel_requests` = 1
#9173 commented on
Aug 8, 2025 • 0 new comments -
Optional `labels` field in Vertex AI request
#9175 commented on
Jul 23, 2025 • 0 new comments -
Add Hosted VLLM rerank provider integration
#9249 commented on
Jul 11, 2025 • 0 new comments -
fix: use custom_llm_provider from kwargs if provided
#9698 commented on
Jul 23, 2025 • 0 new comments -
fix bedrock embedding invocations with app inference profiles
#9902 commented on
Aug 5, 2025 • 0 new comments -
fixbug(router.py): add original_messages handling in Router class
#9912 commented on
Aug 7, 2025 • 0 new comments -
Fix side effects on `params` instance variables.
#9929 commented on
Aug 10, 2025 • 0 new comments -
fix(watsonx.ai): Allows calling completions api with only space_id
#9963 commented on
Jul 23, 2025 • 0 new comments -
fix: Fix passing scope id for watsonx inferencing
#10012 commented on
Jul 21, 2025 • 0 new comments -
Remove files generated by FE builds
#10067 commented on
Aug 6, 2025 • 0 new comments -
Update Model Pricing Information for Groq Llama 4 models
#10273 commented on
Aug 6, 2025 • 0 new comments -
Attempt to avoid the encoding error when loading the json file.
#10380 commented on
Jul 31, 2025 • 0 new comments -
adding cache delete keys for in-memory cache
#10438 commented on
Aug 11, 2025 • 0 new comments -
Expanded the _validate_anthropic_response function to include comprehensive checks
#10505 commented on
Aug 8, 2025 • 0 new comments -
Implement reranking for Voyage Models
#10521 commented on
Aug 8, 2025 • 0 new comments -
docs: add openrouter/qwen/qwen3-235b-a22b model & update openrouter/qwen/qwen-2.5-coder-32b-instruct
#10551 commented on
Aug 7, 2025 • 0 new comments -
[Bug]: Error parsing chunk: Expecting property name enclosed in double quotes
#5650 commented on
Jul 20, 2025 • 0 new comments -
[Bug]: Pass-through creation and deletion not working
#11238 commented on
Jul 21, 2025 • 0 new comments -
[Bug]: Prometheus metrics aren't shared across Uvicorn workers
#10595 commented on
Jul 21, 2025 • 0 new comments -
[Bug]: Support Mistral OCR from Azure AI foundry
#11478 commented on
Jul 21, 2025 • 0 new comments -
[Bug]: In-memory Prompt Injection Detection not working despite being activated in config
#11480 commented on
Jul 21, 2025 • 0 new comments -
[Bug]: ValueError: Could not resolve project_id
#12199 commented on
Jul 21, 2025 • 0 new comments -
[Bug]: AzureOpenAI Reasoning Model(o1-mini,o3-mini): No Error Returned When Content Filter is Triggered
#9303 commented on
Jul 22, 2025 • 0 new comments -
[Bug]: ToolCall not working properly when stream=true with Anthropic provider with Vercel AI SDK
#8066 commented on
Jul 22, 2025 • 0 new comments -
[Bug]: #11097 broke HTTP request caching via `vcrpy`
#11724 commented on
Jul 22, 2025 • 0 new comments -
[Bug]: User's max budget is not enforced when team is set for API key
#11962 commented on
Jul 22, 2025 • 0 new comments -
[Bug]: Files uploaded to Google Files API are not accessible
#9968 commented on
Jul 22, 2025 • 0 new comments -
[Bug]: Last Chunk in Stream Response contains empty object
#12417 commented on
Jul 22, 2025 • 0 new comments -
[Bug]: LLM Provider not provided vs. invalid model name prefix
#12457 commented on
Jul 22, 2025 • 0 new comments -
How to enable enable_thinking with a Qwen3 model via the OpenAI-compatible API on the LiteLLM proxy?
#10938 commented on
Jul 23, 2025 • 0 new comments -
[Bug]: Litellm + Ollama and Gemini return "finish_reason: end_turn" instead of "finish_reason: tool_calls"
#12481 commented on
Jul 23, 2025 • 0 new comments -
[Feature]: Add 4 indexes to the database setup
#9201 commented on
Jul 23, 2025 • 0 new comments -
[Bug]: Frequent "Requests are hanging" false alarms after upgrading to v1.73.6-stable
#12355 commented on
Jul 24, 2025 • 0 new comments -
[Bug]: logs blowing up with `Cannot add callback - would exceed MAX_CALLBACKS limit of 30.`
#9792 commented on
Jul 25, 2025 • 0 new comments -
[Bug]: IndexError: list index out of range in completion()
#10864 commented on
Jul 25, 2025 • 0 new comments -
[Bug]: Bug about model management
#10967 commented on
Jul 26, 2025 • 0 new comments -
[Bug]: MCP not working with more than 1 replica
#12359 commented on
Jul 26, 2025 • 0 new comments -
[Bug]: custom_llm_provider
#10287 commented on
Jul 28, 2025 • 0 new comments -
[Bug]: trim_messages() breaks tool_calls message chain integrity
#10696 commented on
Jul 28, 2025 • 0 new comments -
[Feature]: Support reranker with vllm provider
#11415 commented on
Jul 28, 2025 • 0 new comments -
[Bug]: ValueError: invalid literal for int() with base 10: 'generateContent' (gemini)
#7830 commented on
Jul 28, 2025 • 0 new comments -
[Bug]: metadata.api_base metric no longer emitted by otel after v1.49.0
#10389 commented on
Jul 28, 2025 • 0 new comments -
[Enhancement]: Support background in responses api to chat completions bridge
#12027 commented on
Jul 28, 2025 • 0 new comments -
[Feature]: Support for Black Forest Labs' flux-kontext-pro model in LiteLLM's /images/edits endpoint
#11401 commented on
Jul 11, 2025 • 0 new comments -
[Bug]: SSLCertificate error going from 1.71.1 to 1.71.2
#11259 commented on
Jul 11, 2025 • 0 new comments -
[Bug]: Vertex AI (Claude) JSON response_model
#10270 commented on
Jul 11, 2025 • 0 new comments -
[Bug]: Redis semantic caching error: “redis semantic caching requires redis-py client >= 4.2.0 and the redisearch module to be loaded on redis-server”
#12401 commented on
Jul 11, 2025 • 0 new comments -
[Feature]: Team-based logging not respected when using passthrough endpoints (e.g., Google AI Studio)
#9967 commented on
Jul 11, 2025 • 0 new comments -
[Bug] [litellm proxy]: Gemini second requests never works until proxy is restarted
#11322 commented on
Jul 11, 2025 • 0 new comments -
[Bug]: Bedrock Passthrough endpoints returns 500 error when ARNs are passed in as modelID value
#11312 commented on
Jul 11, 2025 • 0 new comments -
[Bug]: Budget reset for users does not work through the UI where default budgets are present
#11636 commented on
Jul 11, 2025 • 0 new comments -
[Feature]: Support Alibaba Cloud Provider and Qwen Model
#9198 commented on
Jul 12, 2025 • 0 new comments -
[Bug]: Error creating spendlogs object - Object of type _Span is not JSON serializable
#12354 commented on
Jul 14, 2025 • 0 new comments -
How to set a separate network proxy for different providers, both in the SDK and the LiteLLM proxy service?
#10827 commented on
Jul 14, 2025 • 0 new comments -
[Bug]: langfuse_otel (python v3 SDK) callback does not nest calls properly
#11742 commented on
Jul 14, 2025 • 0 new comments -
Azure resource not found?
#9406 commented on
Jul 14, 2025 • 0 new comments -
[Bug]: on litellm import, an http request to model_prices_and_context_window.json is executed, even if cost is not used
#10293 commented on
Jul 14, 2025 • 0 new comments -
[Bug]: Custom Root Path - Admin UI login failing with http status code- 303 see other
#11951 commented on
Jul 14, 2025 • 0 new comments -
[Bug]: Usage Dashboard: Two Issues with Spend Reporting and Failed Request Attribution
#11929 commented on
Jul 15, 2025 • 0 new comments -
[Bug]: Imagen-3 Cost tracking does not work
#10013 commented on
Jul 16, 2025 • 0 new comments -
[Bug]: livenessProbe(readiness) fails when calculating cost for large batch files
#12186 commented on
Jul 16, 2025 • 0 new comments -
[Bug]: turn_off_message_logging isn't respected for streaming responses
#9664 commented on
Jul 17, 2025 • 0 new comments -
[Bug]: GroqException - list index out of range
#8710 commented on
Jul 17, 2025 • 0 new comments -
[Bug]: Cannot use mcp, no matter how to change the prompt word
#9960 commented on
Jul 17, 2025 • 0 new comments -
[Bug]: Unknown Premium error
#11656 commented on
Jul 17, 2025 • 0 new comments -
[Bug]: Audio Transcriptions Hang
#9961 commented on
Jul 18, 2025 • 0 new comments -
[Bug]: TPM value not taking into effect
#11998 commented on
Jul 18, 2025 • 0 new comments -
[Bug]: Azure OpenAI Endpoint AttributeError (`str` object has no attribute 'model_dump')
#12070 commented on
Jul 18, 2025 • 0 new comments -
[Bug]: LiteLLM Proxy with OpenAI SDK (Responses API) does not take `extra_body`
#12483 commented on
Jul 18, 2025 • 0 new comments -
Custom server root path error
#12091 commented on
Jul 18, 2025 • 0 new comments -
[Feature]: Show provider model info
#10096 commented on
Jul 19, 2025 • 0 new comments -
TPM Limit Not Enforced as Expected with LiteLLM API
#10555 commented on
Aug 5, 2025 • 0 new comments -
[Feature]: The Model Hub should show the TPM/RPM quota, and also models health info
#10461 commented on
Aug 5, 2025 • 0 new comments -
UnicodeDecodeError when importing LiteLlm
#10340 commented on
Aug 5, 2025 • 0 new comments -
IndexError: list index out of range
#9682 commented on
Aug 5, 2025 • 0 new comments -
[Feature]: Support dual stack (IPv4/IPv6)
#9563 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Running the embedding function of ollama multiple times or in parallel raises Exception.
#9487 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Bedrock streaming responses buffer in 1024-byte chunks causing choppy output
#11747 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Error when using bind_tools() with a fine-tuned gemini-2.0-flash-001 model hosted on GCP
#12001 commented on
Aug 5, 2025 • 0 new comments -
[Feature]: Add support for custom OAuth2-based LLM provider
#12367 commented on
Aug 5, 2025 • 0 new comments -
Metadata dict unsupported in `token_counter` since v1.68.0
#10623 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: OTEL authentication header for `arize_phoenix` is missing
#10621 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Add Cost Tracking for Mistral OCR Models
#10620 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: HF inference provider Cohere `huggingface/cohere/CohereLabs/aya-expanse-32b` calls incorrect route
#10619 commented on
Aug 6, 2025 • 0 new comments -
How to configure a 3rd-party API provider to use a tokenizer based on model name?
#10614 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: AttributeError: 'function' object has no attribute 'create' [client]
#10610 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: Run openHands with bedrock
#10609 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: DynamoDB integration error on embedding model
#10426 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: Container fails when redis semantic cache is configured
#10352 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Support for Vertex AI with global region
#9234 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: error while accesing usage section in UI
#6348 commented on
Aug 6, 2025 • 0 new comments -
New Models/Endpoints/Providers
#4922 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Add support for Llamafile provider
#3225 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: gemini/gemma-3-27b-it function calling is not enabled exception
#10313 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: Copy to Clipboard does not work
#12474 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: support an array of types in tool JSON schema for Gemini
#12031 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: gemini-2.5-pro doesn't take reasoning parameter error
#11557 commented on
Aug 6, 2025 • 0 new comments -
[Feature]: Support `think` parameter for Ollama models
#11680 commented on
Aug 6, 2025 • 0 new comments -
Docker Database connection Issue
#7450 commented on
Aug 6, 2025 • 0 new comments -
[Bug]: Cannot pass dimensions to openai compatible embedding model
#11940 commented on
Jul 29, 2025 • 0 new comments -
[Bug]: Gemini context caching with cached_content fails with "contents is not specified"
#10755 commented on
Jul 29, 2025 • 0 new comments -
[Bug]: Wrong cost for Anthropic models, cached tokens cost not being correctly considered.
#11364 commented on
Jul 29, 2025 • 0 new comments -
[Feature]: Support DeepInfra embeddings with parameters
#12110 commented on
Jul 29, 2025 • 0 new comments -
[Bug]: OpenMeter integration fails due to incorrect CloudEvent subject field format
#7569 commented on
Jul 29, 2025 • 0 new comments -
[Bug]: Welcome email is never sent
#9420 commented on
Jul 30, 2025 • 0 new comments -
[Feature]: Unknown context window size and costs can be determined at runtime (enhancement)
#5639 commented on
Jul 30, 2025 • 0 new comments -
[Bug]: `model_group_alias`ed model does not respect fallbacks
#10317 commented on
Jul 30, 2025 • 0 new comments -
[Bug]: litellm fails to block requests over end-user budget when user header used
#11083 commented on
Jul 30, 2025 • 0 new comments -
[Bug]: ui.txt endpoint returns 404
#8057 commented on
Jul 31, 2025 • 0 new comments -
[Bug]: strict response format example is incorrect
#10320 commented on
Aug 1, 2025 • 0 new comments -
[Bug]: JSON: TypeError: 'MockValSer' object cannot be converted to 'SchemaSerializer'
#9345 commented on
Aug 1, 2025 • 0 new comments -
[Bug]: Swagger UI doesn't load in offline environment
#11698 commented on
Aug 1, 2025 • 0 new comments -
[Bug]: [Helm chart] Migration job pod cannot run on OpenShift and behind https proxy
#7173 commented on
Aug 1, 2025 • 0 new comments -
[Bug]: Bedrock pass-through endpoint fails with "Too little data for declared Content-Length" error
#10010 commented on
Aug 1, 2025 • 0 new comments -
[Bug]: Bedrock Sonnet 3.7 with thinking model - temperature should always be set to 1
#9524 commented on
Aug 1, 2025 • 0 new comments -
aiohttp.ClientSession not closed when using Gemini with LitellmModel — causes persistent warnings even with async usage
#12443 commented on
Aug 2, 2025 • 0 new comments -
Error when using the deepseek-chat model, appears to be a 401 permission issue
#9514 commented on
Aug 3, 2025 • 0 new comments -
[Bug]: APIConnectionError parsing Tool call response from Ollama
#11267 commented on
Aug 3, 2025 • 0 new comments -
[Feature]: Semantic MCP tool auto-filtering
#12079 commented on
Aug 3, 2025 • 0 new comments -
[Bug]: hosted_vllm: merge_reasoning_content_in_choices: unsupported operand type(s) for +=: 'NoneType' and 'str'
#9578 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Gemini CLI (v0.1.9) Fails with model_group_alias Setup in LiteLLM v1.73.6.rc.1
#12275 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: The "Responses API" does not maintain conversation context for 10 seconds
#12364 commented on
Aug 4, 2025 • 0 new comments -
Anthropic claude returning no content
#9412 commented on
Aug 4, 2025 • 0 new comments -
[Bug]: Litellm always gives a 200 status code!
#10593 commented on
Aug 5, 2025 • 0 new comments -
Unable to figure this out
#10592 commented on
Aug 5, 2025 • 0 new comments -
[Bug]: Credentials not saved correctly ?
#10587 commented on
Aug 5, 2025 • 0 new comments -
Fix "claude-3-7-sonnet-latest" entry in "model_prices_and_context_window.json"
#10586 commented on
Aug 5, 2025 • 0 new comments