[Feature] OpenAI Function Calling #1224
Conversation
Codecov Report
Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1224      +/-   ##
==========================================
+ Coverage   56.52%   56.75%   +0.22%
==========================================
  Files         146      146
  Lines        5935     5929       -6
==========================================
+ Hits         3355     3365      +10
+ Misses       2580     2564      -16
```

☔ View full report in Codecov by Sentry.
The attached script can be used to test the functionality with different inputs.
This is awesome. Thanks for upgrading the version and fixing the function calling.
Description
This pull request primarily implements support for OpenAI function calling.
To achieve better consistency in the OpenAILlm code, the Langchain package has been upgraded to version 0.1.4. As part of this, a number of imports across the codebase have been updated for compatibility and in preparation for upcoming deprecations. No impact on package functionality has been identified.
Function calling has been implemented so that a single function structure can be passed to an OpenAILlm instance; that instance is then restricted to mapping incoming queries onto the provided function structure.
Pydantic models, Python functions, and OpenAI-format dicts are accepted as input.
When queried, the model returns a string containing either a JSON-formatted representation of the matched method arguments or a default message documenting the failure.
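As a rough illustration (not the exact code added to docs/components/llms.mdx), usage could look like the sketch below; the import paths, the `tools` constructor argument, and the App/query wiring are assumptions based on this description rather than a verified API:

```python
# Hypothetical usage sketch: import paths, the `tools` argument, and the
# App/query wiring are assumptions based on the PR description above.
from pydantic import BaseModel, Field

from embedchain import App
from embedchain.llm.openai import OpenAILlm


class Multiply(BaseModel):
    """Multiply two integers."""
    a: int = Field(..., description="First operand")
    b: int = Field(..., description="Second operand")


# The single function structure is supplied once, at construction time;
# queries against this instance are then mapped onto it.
llm = OpenAILlm(tools=Multiply)
app = App(llm=llm)

# Expected result: a JSON string of the matched arguments,
# e.g. '{"a": 125, "b": 15}', or a default message if no match is found.
print(app.query("What is 125 multiplied by 15?"))
```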
A test has been developed for the OpenAILlm querying functionality when a "tools" object representing the function structure is passed. The documentation has been adjusted to reflect the feature as now implemented.
It must be noted that the previous implementation merged into the repository is faulty: it does not actually use OpenAI's function calling interface (it simply sends a normal query), it does not produce the output expected from this feature (no mapping of method arguments), and it relies on a legacy Langchain interface (function_call instead of the current tools interface).
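For reference, an "OpenAI-format dict" above means a tool definition in the shape expected by the current Chat Completions interface, where the function schema is wrapped in a `tools` entry; the legacy interface instead took the bare function schema under `functions` together with a `function_call` field. A sketch with a placeholder `multiply` function:

```python
# Tool definition in the current OpenAI "tools" format. The legacy interface
# expected the inner "function" object under "functions" plus a separate
# "function_call" field. The multiply schema is a placeholder for illustration.
multiply_tool = {
    "type": "function",
    "function": {
        "name": "multiply",
        "description": "Multiply two integers.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer", "description": "First operand"},
                "b": {"type": "integer", "description": "Second operand"},
            },
            "required": ["a", "b"],
        },
    },
}
```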
Fixes #796
Type of change
New feature (non-breaking change which adds functionality)
How Has This Been Tested?
During development, a script exercising the functionality with different types of input, including negative tests, was created and used. In addition, a pytest test has been developed to make sure the function is executed as expected.
The functionality can be tested by running the code that has been added to the docs/components/llms.mdx file.
Test script:
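The attached script itself is not reproduced here; the sketch below only indicates the kind of checks described above (positive and negative queries across the accepted input types), with all names and the `tools` argument being assumptions rather than the actual script:

```python
# Hypothetical sketch of the kind of manual checks described above; this is
# not the attached script, and all names/arguments are assumptions.
from pydantic import BaseModel

from embedchain import App
from embedchain.llm.openai import OpenAILlm


class Multiply(BaseModel):
    """Multiply two integers."""
    a: int
    b: int


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# An OpenAI-format dict (as sketched in the description) would be a third
# accepted input type.
for tool in (Multiply, multiply):
    app = App(llm=OpenAILlm(tools=tool))
    # Positive case: the query should be mapped onto the tool's arguments.
    print(app.query("What is 125 multiplied by 15?"))
    # Negative case: an unrelated query should return the default
    # failure message instead of fabricated arguments.
    print(app.query("What is the capital of France?"))
```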
Checklist:
My code follows the style guidelines of this project
I have performed a self-review of my own code
I have made corresponding changes to the documentation
My changes generate no new warnings
I have added tests that prove my fix is effective or that my feature works
New and existing unit tests pass locally with my changes
I have checked my code and corrected any misspellings
Maintainer Checklist