
[Bug]: Gemini 1.5 Pro erroring when tools are provided but model response doesn't use tools, because of faulty function_call logic #3097

Closed
hi019 opened this issue Apr 17, 2024 · 0 comments · Fixed by #3102
Labels
bug Something isn't working


hi019 commented Apr 17, 2024

Reproduction

If you give Gemini a tool to call and the model's response doesn't use the tool, LiteLLM raises an error. This happens with litellm.acompletion but not litellm.completion, and it also occurs in the Proxy. Here's an example:

import asyncio

import litellm
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["location"],
            },
        },
    }
]
messages = [
    {"role": "user", "content": "Hello"}
]
# replacing with `litellm.completion` doesn't throw an error
async def main():
    completion = await litellm.acompletion(
        model="vertex_ai/gemini-pro", messages=messages, tools=tools
    )
    print(completion.choices[0])

asyncio.run(main())

Output (abridged traceback):

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/llms/vertex_ai.py in async_completion(llm_model, mode, prompt, model, model_response, logging_obj, request_str, encoding, messages, print_verbose, client_options, instances, vertex_project, vertex_location, **optional_params)
    829                 args_dict = {}
--> 830                 for k, v in function_call.args.items():
    831                     args_dict[k] = v

AttributeError: 'NoneType' object has no attribute 'items'

During handling of the above exception, another exception occurred:

VertexAIError                             Traceback (most recent call last)
7 frames
VertexAIError: 'NoneType' object has no attribute 'items'

During handling of the above exception, another exception occurred:

APIError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in exception_type(model, original_exception, custom_llm_provider, completion_kwargs)
   7887                     if original_exception.status_code == 500:
   7888                         exception_mapping_worked = True
-> 7889                         raise APIError(
   7890                             message=f"VertexAIException - {error_str}",
   7891                             status_code=500,

APIError: VertexAIException - 'NoneType' object has no attribute 'items'
model: vertex_ai/gemini-pro

Cause

The problem is caused by how LiteLLM checks whether a function call is included in the model response. If LiteLLM thinks one is included, it tries to iterate the call's arguments. Here, the faulty check results in iterating None (see the for loop in the error trace above).

This is how LiteLLM checks if a function call is included in the response:

https://github.com/BerriAI/litellm/blob/main/litellm/llms/vertex_ai.py#L825-L827

            if tools is not None and hasattr(
                response.candidates[0].content.parts[0], "function_call"
            ):

The hasattr call returns True whether or not the response actually contains a function call: the underlying protobuf message always exposes a function_call field, which is simply empty when the model didn't call a function. This can be reproduced with the Gemini SDK:

import asyncio

from vertexai.generative_models import GenerativeModel

async def main():
    model = GenerativeModel("gemini-pro")
    res = await model._generate_content_async("Hello")

    part = res.candidates[0].content.parts[0]
    print("res: " + str(part))
    print("hasAttr: " + str(hasattr(part, "function_call")))
    print("type(function_call): " + str(type(part.function_call)))
    print("function_call.name: " + part.function_call.name)

asyncio.run(main())

Output:

res: text: "Hello! It\'s nice to meet you. What would you like to talk about?"

hasAttr: True
type(function_call): <class 'google.cloud.aiplatform_v1beta1.types.tool.FunctionCall'>
function_call.name:

(Using the non-async Gemini SDK method, model.generate_content, results in the same output. I'm not sure why LiteLLM's sync call works but the async one doesn't.)

So, as long as `tools is not None` holds, LiteLLM will always attempt to iterate the function args, even when they don't exist. I think this is an issue with the async code path specifically, since the sync path uses a different check (see below).
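To see the pitfall without the SDK, here is a minimal, library-free sketch. The FunctionCall and Part classes below are hypothetical stand-ins that mimic the proto-plus semantics: the attribute always exists (so hasattr passes), but an unset message is falsy (so bool(getattr(...)) correctly fails):

class FunctionCall:
    # Hypothetical stand-in for the proto FunctionCall message
    def __init__(self, name="", args=None):
        self.name = name
        self.args = args  # None when the model made no function call

    def __bool__(self):
        # Unset proto message fields evaluate as falsy; mimic that here
        return bool(self.name or self.args)

class Part:
    # Hypothetical stand-in for response.candidates[0].content.parts[0]
    def __init__(self):
        # The field is always present, even when there is no call
        self.function_call = FunctionCall()

part = Part()
print(hasattr(part, "function_call"))              # True  -> buggy check passes
print(bool(getattr(part, "function_call", None)))  # False -> correct check bails out

# The buggy path then runs:
#   for k, v in part.function_call.args.items():
# which raises AttributeError: 'NoneType' object has no attribute 'items'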

Solution

I'm not sure why LiteLLM's sync call works but the async one doesn't. Still looking into this.

Update: litellm.completion uses a different check:

if tools is not None and bool(
    getattr(response.candidates[0].content.parts[0], "function_call", None)
):

This is probably the solution. I'll make a PR.
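
For reference, here is a sketch of how the same guard could look in the async path (my reading of the likely fix; the actual change in the linked PR may differ):

# An unset proto function_call field is falsy, so the args iteration
# is only reached when the model actually made a function call.
function_call = getattr(
    response.candidates[0].content.parts[0], "function_call", None
)
if tools is not None and bool(function_call):
    args_dict = {}
    for k, v in function_call.args.items():
        args_dict[k] = v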
