Issue search results · repo:BerriAI/litellm language:Python

5k results (67 ms)


What happened? I checked the normal operation in config.yml. - model_name: gemma-3-27b litellm_params: model: ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g base_url: http://0.0.0.0:8100/v1 However, ...
bug
  • hanil-jihun
  • 1
  • Opened 51 minutes ago
  • #11689
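
Reconstructed from the flattened snippet above, the config in question presumably looked something like this (the surrounding `model_list:` key is assumed from LiteLLM's standard `config.yml` layout, and the snippet is cut off, so this is only a partial sketch):

```yaml
model_list:
  - model_name: gemma-3-27b
    litellm_params:
      model: ISTA-DASLab/gemma-3-27b-it-GPTQ-4b-128g
      base_url: http://0.0.0.0:8100/v1
```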

What happened? When using Image edit with Azure OpenAI Service, if we are using API key authentication, we need to set it in the api-key header. However, the current implementation appears to set the ...
bug
  • yamada-kazuhiro-tm6
  • Opened 5 hours ago
  • #11681
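
The header mix-up this issue describes can be pictured with a small sketch (an illustrative helper, not LiteLLM's actual code): Azure OpenAI key authentication expects the key in an `api-key` header, while OpenAI-style endpoints expect an `Authorization: Bearer` token.

```python
def build_auth_headers(provider: str, api_key: str) -> dict[str, str]:
    # Azure OpenAI API-key auth uses the "api-key" header;
    # openai.com-style endpoints use a Bearer token instead.
    if provider == "azure":
        return {"api-key": api_key}
    return {"Authorization": f"Bearer {api_key}"}

assert build_auth_headers("azure", "sk-test") == {"api-key": "sk-test"}
assert build_auth_headers("openai", "sk-test") == {"Authorization": "Bearer sk-test"}
```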

The Feature Ollama now supports thinking with the new think parameter in the ollama.generate and ollama.chat functions. This doesn't seem to be implemented in LiteLLM yet, however. Motivation, pitch ...
enhancement
  • saattrupdan
  • Opened 6 hours ago
  • #11680

What happened? Given the following config: - model_name: google-gemini/* litellm_params: model: gemini/* api_key: litellm_settings: check_provider_endpoint: true Will not ...
bug
  • matthid
  • Opened 9 hours ago
  • #11673

The Feature In the standard logging payload, log a list of StandardLoggingGuardrailInformation for guardrail_information instead of a single instance. Motivation, pitch If multiple guardrails are engaged ...
enhancement
  • thetonus
  • Opened 9 hours ago
  • #11671
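
The requested change can be sketched minimally (the field name follows the snippet; the payload shape and entry keys here are hypothetical illustrations, not LiteLLM's actual classes):

```python
from typing import Any


def build_logging_payload(guardrail_entries: list[dict[str, Any]]) -> dict[str, Any]:
    # Proposed shape: guardrail_information holds a list of entries
    # instead of a single instance, so every engaged guardrail is recorded.
    return {"guardrail_information": guardrail_entries}


payload = build_logging_payload([
    {"guardrail_name": "pii_mask", "status": "success"},
    {"guardrail_name": "prompt_injection", "status": "blocked"},
])
assert isinstance(payload["guardrail_information"], list)
assert len(payload["guardrail_information"]) == 2
```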

What happened? When you try to add a Vertex model, you are asked to upload the credentials for a GCP SA or use an existing credential. You should be allowed to reference a path on the local filesystem ...
bug
  • mrh-chain
  • Opened 9 hours ago
  • #11670

What happened? When adding a new model via the interface, if the provider is Bedrock, then one is required to add a set of AWS credentials. If LiteLLM is running on an AWS instance with an instance profile ...
bug
  • mrh-chain
  • 1
  • Opened 10 hours ago
  • #11669

What happened? For the Gemini Flash 2.5 model (gemini/gemini-2.5-flash-preview-05-20), the cost is listed as: output_cost_per_token: 6e-7, output_cost_per_reasoning_token: 0.0000035. This is not ...
bug
mlops user request
  • wwwillchen
  • Opened 12 hours ago
  • #11667
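
For context, a quick sketch of what the two quoted rates imply (the rates are taken from the snippet above as reported; the token counts are made up for illustration):

```python
# Per-token rates quoted in the issue for gemini/gemini-2.5-flash-preview-05-20
# (values as reported by the user, not independently verified).
OUTPUT_COST_PER_TOKEN = 6e-7
OUTPUT_COST_PER_REASONING_TOKEN = 3.5e-6


def output_cost(completion_tokens: int, reasoning_tokens: int) -> float:
    """Illustrative split: reasoning tokens billed at their own rate."""
    return (completion_tokens * OUTPUT_COST_PER_TOKEN
            + reasoning_tokens * OUTPUT_COST_PER_REASONING_TOKEN)


# At these rates, 1M reasoning tokens cost ~5.8x as much as 1M plain
# output tokens ($3.50 vs $0.60), so which rate applies matters a lot.
print(output_cost(1_000_000, 0))
print(output_cost(0, 1_000_000))
```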

What happened? Trying to integrate Azure OpenAI into our chat app using a gpt-4.1-mini model. When I try to call it in my code litellm._turn_on_debug() result = litellm.completion( model= azure/gpt-4.1-mini ...
bug
mlops user request
  • leakb
  • Opened 12 hours ago
  • #11666

What happened? Reopening with a Python example. I try to call different models with tool calls. It works on OpenAI but not on some others (sonar / claude). Using: litellm 1.71.1. Is this planned to be ...
bug
  • simonljn
  • Opened 12 hours ago
  • #11665