If you give Gemini a tool to call and the model's response doesn't use the tool, LiteLLM raises an error. This happens with `litellm.acompletion` but not `litellm.completion`, and it also happens in the Proxy. Here's an example:
```python
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        },
    }
]
messages = [{"role": "user", "content": "Hello"}]

# replacing with `litellm.completion` doesn't throw an error
completion = await litellm.acompletion(
    model="vertex_ai/gemini-pro", messages=messages, tools=tools
)
completion.choices[0]
```
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/llms/vertex_ai.py in async_completion(llm_model, mode, prompt, model, model_response, logging_obj, request_str, encoding, messages, print_verbose, client_options, instances, vertex_project, vertex_location, **optional_params)
    829         args_dict = {}
--> 830         for k, v in function_call.args.items():
    831             args_dict[k] = v

AttributeError: 'NoneType' object has no attribute 'items'

During handling of the above exception, another exception occurred:

VertexAIError                             Traceback (most recent call last)
7 frames

VertexAIError: 'NoneType' object has no attribute 'items'

During handling of the above exception, another exception occurred:

APIError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs)
   7887                     if original_exception.status_code == 500:
   7888                         exception_mapping_worked = True
-> 7889                         raise APIError(
   7890                             message=f"VertexAIException - {error_str}",
   7891                             status_code=500,

APIError: VertexAIException - 'NoneType' object has no attribute 'items'
```
model: vertex_ai/gemini-pro
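The failing loop is easy to reproduce in isolation. The sketch below uses a hypothetical stub (not the real Vertex AI types) whose `args` field was never populated, which is exactly the state that trips the loop in the traceback:

```python
from types import SimpleNamespace

# Stub standing in for a Vertex AI FunctionCall that was never populated:
# the attribute exists, but `args` is None.
function_call = SimpleNamespace(name="", args=None)

# Mirrors the loop in litellm/llms/vertex_ai.py shown in the traceback.
try:
    args_dict = {}
    for k, v in function_call.args.items():
        args_dict[k] = v
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'items'
```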
Cause

The problem is caused by how LiteLLM checks whether a function call is included in the model response. If LiteLLM thinks a function call is included, it tries to iterate the call's arguments; here, faulty check logic results in iterating `None` (see the `for` loop in the error trace above).

This is how LiteLLM checks whether a function call is included in the response:

https://github.com/BerriAI/litellm/blob/main/litellm/llms/vertex_ai.py#L825-L827

The `hasattr` call returns `True` whether or not a function call was actually made. The issue can be reproduced with the Gemini SDK. Output:

```
res: text: "Hello! It\'s nice to meet you. What would you like to talk about?"

hasAttr: True
type(function_call): <class 'google.cloud.aiplatform_v1beta1.types.tool.FunctionCall'>
function_call.name:
```

(Using the non-async Gemini SDK function, `model.generate_content`, produces the same output. I'm not sure why LiteLLM's sync call works but the async one doesn't.)

So whenever `tools is not None`, LiteLLM attempts to iterate the function args, even when they don't exist. I think this is an issue with:

Solution

I'm not sure why LiteLLM's sync call works but not the async; still looking into this.

Update: `litellm.completion` uses a different check:

litellm/litellm/llms/vertex_ai.py, lines 526 to 528 in 409bd5b

This is probably the solution. I'll make a PR.
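As a sketch of the safer pattern (the classes below are hypothetical stand-ins, not the real Vertex AI SDK types): guard on the function call's actual contents before iterating, rather than relying on `hasattr`, which is true even when the model returned plain text and the field merely holds its defaults.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical stand-ins for the SDK's response part / FunctionCall types.
# Proto-style defaults: the field always exists, even with no tool call.
@dataclass
class FunctionCall:
    name: str = ""
    args: Optional[dict] = None

@dataclass
class Part:
    function_call: FunctionCall = field(default_factory=FunctionCall)

def extract_args(part: Part) -> Optional[dict]:
    """Return the call's args only when a function call was actually made."""
    fc = getattr(part, "function_call", None)
    # hasattr(part, "function_call") alone is not enough: the attribute is
    # present (with default values) even for a plain-text response.
    if fc is None or not fc.name or fc.args is None:
        return None
    return dict(fc.args.items())

print(extract_args(Part()))  # None -- plain-text response, no tool call
print(extract_args(Part(FunctionCall("get_current_weather",
                                     {"location": "SF"}))))  # {'location': 'SF'}
```

With this guard, the plain-text "Hello" response from the reproduction simply yields no arguments instead of raising `AttributeError`.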