fix: fix the issues when using tools in gemini #1969

Merged

Conversation

@kan-bayashi (Contributor) commented Feb 14, 2024

This PR fixes the following two issues:

  • An error when multiple tools are passed ("400 At most one tool is supported.")
  • Unexpected behavior when response.candidates[0].content.parts[0].function_call exists but is empty (not None)

Code to reproduce the issues:

import litellm
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_stock_price",
            "description": "Get the current stock price",
            "parameters": {
                "type": "object",
                "properties": {},
                "required": [],
            },
        },
    },
]
messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
completion = litellm.completion(
    model="gemini-pro", messages=messages, tools=tools, tool_choice="auto"
)
print(messages)
print(completion)
messages = [{"role": "user", "content": "Please tell me the stock price."}]
completion = litellm.completion(
    model="gemini-pro", messages=messages, tools=tools, tool_choice="auto"
)
print(messages)
print(completion)

Before the fix

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

---------------------------------------------------------------------------
_InactiveRpcError                         Traceback (most recent call last)
File ~/test-litellm/.venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:79, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     78 try:
---> 79     return callable_(*args, **kwargs)
     80 except grpc.RpcError as exc:

File ~/test-litellm/.venv/lib/python3.10/site-packages/grpc/_channel.py:1160, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
   1154 (
   1155     state,
   1156     call,
   1157 ) = self._blocking(
   1158     request, timeout, metadata, credentials, wait_for_ready, compression
   1159 )
-> 1160 return _end_unary_response_blocking(state, call, False, None)

File ~/test-litellm/.venv/lib/python3.10/site-packages/grpc/_channel.py:1003, in _end_unary_response_blocking(state, call, with_call, deadline)
   1002 else:
-> 1003     raise _InactiveRpcError(state)

_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
        status = StatusCode.INVALID_ARGUMENT
        details = "At most one tool is supported."
        debug_error_string = "UNKNOWN:Error received from peer ipv4:172.217.25.170:443 {grpc_message:"At most one tool is supported.", grpc_status:3, created_time:"2024-02-14T13:07:52.970947+09:00"}"
>

The above exception was the direct cause of the following exception:

InvalidArgument                           Traceback (most recent call last)
File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/llms/vertex_ai.py:428, in completion(model, messages, model_response, print_verbose, encoding, logging_obj, vertex_project, vertex_location, optional_params, litellm_params, logger_fn, acompletion)
    427 ## LLM Call
--> 428 response = llm_model.generate_content(
    429     contents=content,
    430     generation_config=GenerationConfig(**optional_params),
    431     safety_settings=safety_settings,
    432     tools=tools,
    433 )
    435 if tools is not None and hasattr(
    436     response.candidates[0].content.parts[0], "function_call"
    437 ):

File ~/test-litellm/.venv/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:353, in _GenerativeModel.generate_content(self, contents, generation_config, safety_settings, tools, stream)
    352 else:
--> 353     return self._generate_content(
    354         contents=contents,
    355         generation_config=generation_config,
    356         safety_settings=safety_settings,
    357         tools=tools,
    358     )

File ~/test-litellm/.venv/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:434, in _GenerativeModel._generate_content(self, contents, generation_config, safety_settings, tools)
    428 request = self._prepare_request(
    429     contents=contents,
    430     generation_config=generation_config,
    431     safety_settings=safety_settings,
    432     tools=tools,
    433 )
--> 434 gapic_response = self._prediction_client.generate_content(request=request)
    435 return self._parse_response(gapic_response)

File ~/test-litellm/.venv/lib/python3.10/site-packages/google/cloud/aiplatform_v1beta1/services/prediction_service/client.py:2075, in PredictionServiceClient.generate_content(self, request, model, contents, retry, timeout, metadata)
   2074 # Send the request.
-> 2075 response = rpc(
   2076     request,
   2077     retry=retry,
   2078     timeout=timeout,
   2079     metadata=metadata,
   2080 )
   2082 # Done; return the response.

File ~/test-litellm/.venv/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:131, in _GapicCallable.__call__(self, timeout, retry, compression, *args, **kwargs)
    129     kwargs["compression"] = compression
--> 131 return wrapped_func(*args, **kwargs)

File ~/test-litellm/.venv/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:81, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
     80 except grpc.RpcError as exc:
---> 81     raise exceptions.from_grpc_error(exc) from exc

InvalidArgument: 400 At most one tool is supported.

During handling of the above exception, another exception occurred:

VertexAIError                             Traceback (most recent call last)
File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/main.py:1472, in completion(model, messages, timeout, temperature, top_p, n, stream, stop, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, deployment_id, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   1466 vertex_ai_location = (
   1467     optional_params.pop("vertex_ai_location", None)
   1468     or litellm.vertex_location
   1469     or get_secret("VERTEXAI_LOCATION")
   1470 )
-> 1472 model_response = vertex_ai.completion(
   1473     model=model,
   1474     messages=messages,
   1475     model_response=model_response,
   1476     print_verbose=print_verbose,
   1477     optional_params=optional_params,
   1478     litellm_params=litellm_params,
   1479     logger_fn=logger_fn,
   1480     encoding=encoding,
   1481     vertex_location=vertex_ai_location,
   1482     vertex_project=vertex_ai_project,
   1483     logging_obj=logging,
   1484     acompletion=acompletion,
   1485 )
   1487 if (
   1488     "stream" in optional_params
   1489     and optional_params["stream"] == True
   1490     and acompletion == False
   1491 ):

File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/llms/vertex_ai.py:614, in completion(model, messages, model_response, print_verbose, encoding, logging_obj, vertex_project, vertex_location, optional_params, litellm_params, logger_fn, acompletion)
    613 except Exception as e:
--> 614     raise VertexAIError(status_code=500, message=str(e))

VertexAIError: 400 At most one tool is supported.

During handling of the above exception, another exception occurred:

RateLimitError                            Traceback (most recent call last)
Cell In[1], line 35
      2 tools = [
      3     {
      4         "type": "function",
   (...)
     32     },
     33 ]
     34 messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
---> 35 completion = litellm.completion(
     36     model="gemini-pro", messages=messages, tools=tools, tool_choice="auto"
     37 )

File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/utils.py:2474, in client.<locals>.wrapper(*args, **kwargs)
   2470         if (
   2471             liteDebuggerClient and liteDebuggerClient.dashboard_url != None
   2472         ):  # make it easy to get to the debugger logs if you've initialized it
   2473             e.message += f"\n Check the log in your dashboard - {liteDebuggerClient.dashboard_url}"
-> 2474 raise e

File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/utils.py:2377, in client.<locals>.wrapper(*args, **kwargs)
   2375         print_verbose(f"Error while checking max token limit: {str(e)}")
   2376 # MODEL CALL
-> 2377 result = original_function(*args, **kwargs)
   2378 end_time = datetime.datetime.now()
   2379 if "stream" in kwargs and kwargs["stream"] == True:

File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/main.py:1898, in completion(model, messages, timeout, temperature, top_p, n, stream, stop, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, deployment_id, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   1895     return response
   1896 except Exception as e:
   1897     ## Map to OpenAI Exception
-> 1898     raise exception_type(
   1899         model=model,
   1900         custom_llm_provider=custom_llm_provider,
   1901         original_exception=e,
   1902         completion_kwargs=args,
   1903     )

File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/utils.py:7513, in exception_type(model, original_exception, custom_llm_provider, completion_kwargs)
   7511 # don't let an error with mapping interrupt the user from receiving an error from the llm api calls
   7512 if exception_mapping_worked:
-> 7513     raise e
   7514 else:
   7515     raise original_exception

File ~/test-litellm/.venv/lib/python3.10/site-packages/litellm/utils.py:6765, in exception_type(model, original_exception, custom_llm_provider, completion_kwargs)
   6760 elif (
   6761     "429 Quota exceeded" in error_str
   6762     or "IndexError: list index out of range"
   6763 ):
   6764     exception_mapping_worked = True
-> 6765     raise RateLimitError(
   6766         message=f"VertexAIException - {error_str}",
   6767         model=model,
   6768         llm_provider="vertex_ai",
   6769         response=httpx.Response(
   6770             status_code=429,
   6771             request=httpx.Request(
   6772                 method="POST",
   6773                 url=" https://cloud.google.com/vertex-ai/",
   6774             ),
   6775         ),
   6776     )
   6777 if hasattr(original_exception, "status_code"):
   6778     if original_exception.status_code == 400:

RateLimitError: VertexAIException - 400 At most one tool is supported.
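
The root cause of this error is that Gemini on Vertex AI accepts at most one Tool object per request, so each OpenAI-style tool definition has to be converted into a FunctionDeclaration and all of the declarations grouped under a single Tool. Below is a minimal sketch of that conversion; it assumes the vertexai.preview.generative_models classes and is illustrative only, not the exact code changed in this PR.

from vertexai.preview.generative_models import FunctionDeclaration, Tool

def convert_openai_tools(openai_tools: list) -> list:
    # One FunctionDeclaration per OpenAI-style tool definition.
    declarations = [
        FunctionDeclaration(
            name=t["function"]["name"],
            description=t["function"].get("description", ""),
            parameters=t["function"].get("parameters", {}),
        )
        for t in openai_tools
        if t.get("type") == "function"
    ]
    # All declarations go into a single Tool; passing one Tool per function
    # is what triggers "400 At most one tool is supported."
    return [Tool(function_declarations=declarations)]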

After fixing the first issue

[{'role': 'user', 'content': "What's the weather like in Boston today?"}]
ModelResponse(id='chatcmpl-050a4c91-f559-4af8-8e83-09ba2fd56132', choices=[Choices(finish_reason='STOP', index=0, message=Message(content='Message(content=None, role=\'assistant\', tool_calls=[ChatCompletionMessageToolCall(id=\'call_2794dccb-6346-4d3e-97ba-12bcf78fbf75\', function=Function(arguments=\'{"location": "Boston, MA"}\', name=\'get_current_weather\'), type=\'function\')])', role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_2794dccb-6346-4d3e-97ba-12bcf78fbf75', function=Function(arguments='{"location": "Boston, MA"}', name='get_current_weather'), type='function')]))], created=1707889900, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=55, completion_tokens=9, total_tokens=64))
[{'role': 'user', 'content': 'Please tell me the stock price.'}]
ModelResponse(id='chatcmpl-037e7e96-acef-4d79-b3f8-9a15580da6dc', choices=[Choices(finish_reason='STOP', index=0, message=Message(content="Message(content=None, role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_e30ca22b-3eb5-4f3e-8152-c262ec088beb', function=Function(arguments='{}', name=''), type='function')])", role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_e30ca22b-3eb5-4f3e-8152-c262ec088beb', function=Function(arguments='{}', name=''), type='function')]))], created=1707889902, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=52, completion_tokens=18, total_tokens=70))

After fixing both issues

[{'role': 'user', 'content': "What's the weather like in Boston today?"}]
ModelResponse(id='chatcmpl-5c7f5560-6736-4654-83ad-58f7a7ea86eb', choices=[Choices(finish_reason='STOP', index=0, message=Message(content='Message(content=None, role=\'assistant\', tool_calls=[ChatCompletionMessageToolCall(id=\'call_32580eac-7ce5-411d-a30b-e622d2af38ad\', function=Function(arguments=\'{"location": "Boston, MA"}\', name=\'get_current_weather\'), type=\'function\')])', role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_32580eac-7ce5-411d-a30b-e622d2af38ad', function=Function(arguments='{"location": "Boston, MA"}', name='get_current_weather'), type='function')]))], created=1707899384, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=55, completion_tokens=9, total_tokens=64))
[{'role': 'user', 'content': 'Please tell me the stock price.'}]
ModelResponse(id='chatcmpl-6f8ea342-cad2-4d95-a7b3-777b026ea799', choices=[Choices(finish_reason='STOP', index=0, message=Message(content='I am sorry, I cannot fulfill this request. The available tools lack the desired functionality.', role='assistant'))], created=1707899385, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=52, completion_tokens=18, total_tokens=70))


@krrishdholakia (Contributor)

@kan-bayashi could you upload a screenshot of this working with multiple tools for confirmation?

@kan-bayashi changed the title from "fix: fix the issue when using multiple tools in gemini" to "fix: fix the issues when using tools in gemini" on Feb 14, 2024
@kan-bayashi (Contributor, Author) commented Feb 14, 2024

Hi @krrishdholakia. I updated the PR description; please check it.
During testing, I found another issue when using a tool without arguments
(see the following error).

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
File ~/work/litellm/litellm/llms/vertex_ai.py:440, in completion(model, messages, model_response, print_verbose, encoding, logging_obj, vertex_project, vertex_location, optional_params, litellm_params, logger_fn, acompletion)
    439 args_dict = {}
--> 440 for k, v in function_call.args.items():
    441     args_dict[k] = v

AttributeError: 'NoneType' object has no attribute 'items'

During handling of the above exception, another exception occurred:

VertexAIError                             Traceback (most recent call last)
File ~/work/litellm/litellm/main.py:1472, in completion(model, messages, timeout, temperature, top_p, n, stream, stop, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, deployment_id, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   1466 vertex_ai_location = (
   1467     optional_params.pop("vertex_ai_location", None)
   1468     or litellm.vertex_location
   1469     or get_secret("VERTEXAI_LOCATION")
   1470 )
-> 1472 model_response = vertex_ai.completion(
   1473     model=model,
   1474     messages=messages,
   1475     model_response=model_response,
   1476     print_verbose=print_verbose,
   1477     optional_params=optional_params,
   1478     litellm_params=litellm_params,
   1479     logger_fn=logger_fn,
   1480     encoding=encoding,
   1481     vertex_location=vertex_ai_location,
   1482     vertex_project=vertex_ai_project,
   1483     logging_obj=logging,
   1484     acompletion=acompletion,
   1485 )
   1487 if (
   1488     "stream" in optional_params
   1489     and optional_params["stream"] == True
   1490     and acompletion == False
   1491 ):

File ~/work/litellm/litellm/llms/vertex_ai.py:614, in completion(model, messages, model_response, print_verbose, encoding, logging_obj, vertex_project, vertex_location, optional_params, litellm_params, logger_fn, acompletion)
    613 except Exception as e:
--> 614     raise VertexAIError(status_code=500, message=str(e))

VertexAIError: 'NoneType' object has no attribute 'items'

During handling of the above exception, another exception occurred:

RateLimitError                            Traceback (most recent call last)
Cell In[2], line 41
     39 print(completion)
     40 messages = [{"role": "user", "content": "Please tell me the stock price."}]
---> 41 completion = litellm.completion(
     42     model="gemini-pro", messages=messages, tools=tools, tool_choice="auto"
     43 )
     44 print(completion)

File ~/work/litellm/litellm/utils.py:2474, in client.<locals>.wrapper(*args, **kwargs)
   2470         if (
   2471             liteDebuggerClient and liteDebuggerClient.dashboard_url != None
   2472         ):  # make it easy to get to the debugger logs if you've initialized it
   2473             e.message += f"\n Check the log in your dashboard - {liteDebuggerClient.dashboard_url}"
-> 2474 raise e

File ~/work/litellm/litellm/utils.py:2377, in client.<locals>.wrapper(*args, **kwargs)
   2375         print_verbose(f"Error while checking max token limit: {str(e)}")
   2376 # MODEL CALL
-> 2377 result = original_function(*args, **kwargs)
   2378 end_time = datetime.datetime.now()
   2379 if "stream" in kwargs and kwargs["stream"] == True:

File ~/work/litellm/litellm/main.py:1898, in completion(model, messages, timeout, temperature, top_p, n, stream, stop, max_tokens, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, deployment_id, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
   1895     return response
   1896 except Exception as e:
   1897     ## Map to OpenAI Exception
-> 1898     raise exception_type(
   1899         model=model,
   1900         custom_llm_provider=custom_llm_provider,
   1901         original_exception=e,
   1902         completion_kwargs=args,
   1903     )

File ~/work/litellm/litellm/utils.py:7510, in exception_type(model, original_exception, custom_llm_provider, completion_kwargs)
   7508 # don't let an error with mapping interrupt the user from receiving an error from the llm api calls
   7509 if exception_mapping_worked:
-> 7510     raise e
   7511 else:
   7512     raise original_exception

File ~/work/litellm/litellm/utils.py:6762, in exception_type(model, original_exception, custom_llm_provider, completion_kwargs)
   6757 elif (
   6758     "429 Quota exceeded" in error_str
   6759     or "IndexError: list index out of range"
   6760 ):
   6761     exception_mapping_worked = True
-> 6762     raise RateLimitError(
   6763         message=f"VertexAIException - {error_str}",
   6764         model=model,
   6765         llm_provider="vertex_ai",
   6766         response=httpx.Response(
   6767             status_code=429,
   6768             request=httpx.Request(
   6769                 method="POST",
   6770                 url=" https://cloud.google.com/vertex-ai/",
   6771             ),
   6772         ),
   6773     )
   6774 if hasattr(original_exception, "status_code"):
   6775     if original_exception.status_code == 400:

RateLimitError: VertexAIException - 'NoneType' object has no attribute 'items'

I also fixed the issue in 917525e (#1969).
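
For reference, here is a minimal sketch of the kind of guard involved (not the exact committed diff); the point is to fall back to an empty mapping when function_call.args is unset, instead of calling .items() on None.

function_call = response.candidates[0].content.parts[0].function_call
# args is None when the selected tool declares no parameters, so substitute
# an empty dict before iterating (the original loop raised AttributeError here).
args_dict = {}
for k, v in (function_call.args or {}).items():
    args_dict[k] = v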

@kan-bayashi (Contributor, Author) commented Feb 14, 2024

@krrishdholakia
I found that the second completion above fails because of unexpected LLM output
(function_call = response.candidates[0].content.parts[0].function_call exists but is empty).
Therefore, I think the following part

if tools is not None and hasattr(
    response.candidates[0].content.parts[0], "function_call"
):

should be

if tools is not None and bool(
    getattr(response.candidates[0].content.parts[0], "function_call", None)
):

Then the second completion looks fine.
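
The reason the original check is not enough: the part always exposes a function_call attribute, so hasattr() is True even for a plain-text reply where the field is just an empty default. Checking the truthiness of the field distinguishes the two cases; the empty field evaluating as falsy is the behavior I observed with gemini-pro. A small helper illustrating the check (a sketch, not the code in the PR):

def part_has_tool_call(response) -> bool:
    # True only when the model actually requested a tool call; an empty or
    # default function_call is falsy, whereas hasattr() would also be True
    # for a plain-text reply.
    part = response.candidates[0].content.parts[0]
    return bool(getattr(part, "function_call", None))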

# Before
[{'role': 'user', 'content': 'Please tell me the stock price.'}]
ModelResponse(id='chatcmpl-037e7e96-acef-4d79-b3f8-9a15580da6dc', choices=[Choices(finish_reason='STOP', index=0, message=Message(content="Message(content=None, role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_e30ca22b-3eb5-4f3e-8152-c262ec088beb', function=Function(arguments='{}', name=''), type='function')])", role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_e30ca22b-3eb5-4f3e-8152-c262ec088beb', function=Function(arguments='{}', name=''), type='function')]))], created=1707889902, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=52, completion_tokens=18, total_tokens=70))

# After
[{'role': 'user', 'content': 'Please tell me the stock price.'}]
ModelResponse(id='chatcmpl-6f8ea342-cad2-4d95-a7b3-777b026ea799', choices=[Choices(finish_reason='STOP', index=0, message=Message(content='I am sorry, I cannot fulfill this request. The available tools lack the desired functionality.', role='assistant'))], created=1707899385, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=52, completion_tokens=18, total_tokens=70))

I will make a commit to fix this issue.

@kan-bayashi (Contributor, Author)

Now everything works fine! Please let me know if anything still looks questionable.

@kan-bayashi (Contributor, Author)

Here is another example that works correctly with multiple tools:

import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_tomorrow_weather",
            "description": "Get the tomorrow weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_yesterday_newyork_weather",
            "description": "Get the tomorrow weather in New York yesterday",
            "parameters": {
                "type": "object",
                "properties": {},
                "required": [],
            },
        },
    },
]
messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
completion = litellm.completion(
    model="gemini-pro", messages=messages, tools=tools, tool_choice="auto"
)
print(messages)
print(completion)

messages = [{"role": "user", "content": "What's the weather like in Boston tomorrow?"}]
completion = litellm.completion(
    model="gemini-pro", messages=messages, tools=tools, tool_choice="auto"
)
print(messages)
print(completion)

messages = [{"role": "user", "content": "What's the weather like in New York yesterday?"}]
completion = litellm.completion(
    model="gemini-pro", messages=messages, tools=tools, tool_choice="auto"
)
print(messages)
print(completion)
[{'role': 'user', 'content': "What's the weather like in Boston today?"}]
ModelResponse(id='chatcmpl-69d45e30-6b17-422f-8cf5-8c0c802e52b7', choices=[Choices(finish_reason='STOP', index=0, message=Message(content='Message(content=None, role=\'assistant\', tool_calls=[ChatCompletionMessageToolCall(id=\'call_78d33abc-b931-4eb7-b01f-14e58e1949c2\', function=Function(arguments=\'{"location": "Boston, MA"}\', name=\'get_current_weather\'), type=\'function\')])', role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_78d33abc-b931-4eb7-b01f-14e58e1949c2', function=Function(arguments='{"location": "Boston, MA"}', name='get_current_weather'), type='function')]))], created=1707955914, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=92, completion_tokens=9, total_tokens=101))
[{'role': 'user', 'content': "What's the weather like in Boston tomorrow?"}]
ModelResponse(id='chatcmpl-09868eb0-b20c-4513-a354-fa7b25dcff2e', choices=[Choices(finish_reason='STOP', index=0, message=Message(content='Message(content=None, role=\'assistant\', tool_calls=[ChatCompletionMessageToolCall(id=\'call_96b762d1-81ff-49d1-86cc-8072c269ab48\', function=Function(arguments=\'{"location": "Boston, MA"}\', name=\'get_tomorrow_weather\'), type=\'function\')])', role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_96b762d1-81ff-49d1-86cc-8072c269ab48', function=Function(arguments='{"location": "Boston, MA"}', name='get_tomorrow_weather'), type='function')]))], created=1707955921, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=92, completion_tokens=9, total_tokens=101))
[{'role': 'user', 'content': "What's the weather like in New York yesterday?"}]
ModelResponse(id='chatcmpl-21ede8d8-ec50-46fe-8b01-3ab1c02900c2', choices=[Choices(finish_reason='STOP', index=0, message=Message(content="Message(content=None, role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_94806bb6-0e1c-4956-bac7-e0c10b428c3f', function=Function(arguments='{}', name='get_yesterday_newyork_weather'), type='function')])", role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_94806bb6-0e1c-4956-bac7-e0c10b428c3f', function=Function(arguments='{}', name='get_yesterday_newyork_weather'), type='function')]))], created=1707955928, model='gemini-pro', object='chat.completion', system_fingerprint=None, usage=Usage(prompt_tokens=93, completion_tokens=8, total_tokens=101))

@themrzmaster (Contributor)

nice! i need that fix

@krrishdholakia merged commit 851473b into BerriAI:main on Feb 21, 2024
3 checks passed