
Conversation

@aniketmaurya (Collaborator) commented Aug 4, 2025

Before submitting
  • Was this discussed/agreed via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

What does this PR do?

Users can now execute tools directly within the chat method.

```python
from litai import tool, LLM

@tool
def get_weather(location: str) -> str:
    """Get the weather of a given city."""
    return f"Weather in {location} is sunny."

llm = LLM(model="openai/gpt-4")
llm.chat("how is the weather in London?", tools=[get_weather], call_tools=True)
# Output: Weather in London is sunny.
```

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

… tool calling

- Updated the _format_tool_response method to accept additional parameters: call_tools (defaulting to True) and tools (optional).
- Modified the _model_call and other related methods to pass these parameters, allowing for conditional tool invocation based on the call_tools flag (see the sketch after this list).
- Improved handling of tool responses to align with the updated functionality.
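As a rough illustration of the conditional invocation described in this commit, here is a minimal standalone sketch. The OpenAI-style message dict and the helper name `maybe_call_tools` are assumptions for illustration, not litai's actual internals.

```python
# Illustrative sketch only -- the message shape and helper name are assumptions,
# not litai's internal implementation.
import json
from typing import Any, Callable, Dict, Optional


def maybe_call_tools(
    message: Dict[str, Any],
    tools: Optional[Dict[str, Callable[..., str]]] = None,
    call_tools: bool = True,
):
    """Run requested tools only when call_tools is True; otherwise return the message as-is."""
    tool_calls = message.get("tool_calls") or []
    if not call_tools or not tools or not tool_calls:
        # No execution requested, or nothing to execute: pass the message through.
        return message
    results = []
    for call in tool_calls:
        name = call["function"]["name"]
        args = json.loads(call["function"].get("arguments") or "{}")
        results.append(tools[name](**args))
    return results
```

In this sketch, call_tools=False simply hands the raw model message back so the caller can decide what to do with any requested tool calls.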
…calls are made

- Added a check in the LLM class to return the content of the response directly if no tool calls are present in the response, simplifying the output when tools end up not being used (a usage sketch follows this list).
- Removed the unnecessary breakpoint for cleaner code.
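A hedged usage sketch of this early-return path, using only the public API shown in this PR; the prompt and the expected behavior are inferred from the description above, not from litai's docs.

```python
# Hedged usage sketch: when the model makes no tool call, chat returns the
# plain assistant text even though tools were supplied.
from litai import LLM, tool


@tool
def get_weather(location: str) -> str:
    """Get the weather of a given city."""
    return f"Weather in {location} is sunny."


llm = LLM(model="openai/gpt-4")
answer = llm.chat("Tell me a one-line joke.", tools=[get_weather], call_tools=True)
# The model has no reason to call get_weather, so `answer` is just the joke text.
```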
…nagement

- Updated the _format_tool_response method to use lit_tools instead of tools for better clarity and functionality.
- Enhanced the _model_call method to pass lit_tools, ensuring consistent handling of tool calls.
- Removed unnecessary breakpoint and improved exception handling in the LLM class.
- Adjusted related methods to streamline the response processing and maintain compatibility with the updated tool structure.
…logic

- Added a check to return an empty string if no tool calls are present in the response.
- Updated the _model_call method to return a single result directly if only one result is found, enhancing the response handling.
- Introduced a new test to validate the behavior of the model_call method with tools and auto call functionality (a hedged sketch of such a test follows).
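A hypothetical sketch of what such a test could look like against the public API shown in this PR; it is not the test added here, the test name is made up, and it assumes a reachable OpenAI backend with valid credentials.

```python
# Hypothetical test sketch (not the test added in this PR); it exercises the
# public chat(tools=..., call_tools=True) path end to end.
from litai import LLM, tool


@tool
def get_weather(location: str) -> str:
    """Get the weather of a given city."""
    return f"Weather in {location} is sunny."


def test_chat_auto_tool_call():
    llm = LLM(model="openai/gpt-4")
    answer = llm.chat("how is the weather in London?", tools=[get_weather], call_tools=True)
    # The tool result should flow back as the chat output.
    assert "sunny" in answer
```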

codecov bot commented Aug 4, 2025

Codecov Report

❌ Patch coverage is 83.33333% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 88%. Comparing base (6c38b1f) to head (5fb29fa).
⚠️ Report is 1 commits behind head on main.

Additional details and impacted files
@@        Coverage Diff         @@
##           main   #30   +/-   ##
==================================
- Coverage    89%   88%   -0%     
==================================
  Files         6     6           
  Lines       411   417    +6     
==================================
+ Hits        364   368    +4     
- Misses       47    49    +2     

- Updated the _format_tool_response method to return a JSON string for tool responses, ensuring consistent output.
- Modified the call_tool method to return a string instead of a list, enhancing clarity in response handling.
- Adjusted the return logic in call_tool to always return a JSON string when multiple results are present, improving data consistency (sketched below).
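The single-result versus multi-result behavior described above can be sketched as follows; the helper name `format_tool_results` is hypothetical and stands in for the described call_tool return logic.

```python
# Hypothetical helper mirroring the described behavior: one result comes back as
# a plain string, several results are serialized into a single JSON string.
import json
from typing import List


def format_tool_results(results: List[str]) -> str:
    if len(results) == 1:
        return results[0]
    return json.dumps(results)


# format_tool_results(["Weather in London is sunny."]) -> 'Weather in London is sunny.'
# format_tool_results(["a", "b"])                      -> '["a", "b"]'
```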
@aniketmaurya aniketmaurya enabled auto-merge (squash) August 4, 2025 13:21
@aniketmaurya aniketmaurya merged commit 294e985 into main Aug 4, 2025
31 checks passed
@aniketmaurya aniketmaurya deleted the call-tool branch August 4, 2025 15:05