fix(instructor/patch.py, tests/openai/test_multitask.py): Correct model checking and add new tests #413
Conversation
Changes requested.
Something look wrong? You can customize Ellipsis by editing the ellipsis.yaml for this repository.
Generated with ❤️ by ellipsis.dev
```diff
@@ -213,8 +213,7 @@ def process_response(
     # ? This really hints at the fact that we need a better way of
     # ? attaching usage data and the raw response to the model we return.
-    if isinstance(response_model, IterableBase):
+    #! If the response model is a multitask, return the tasks
+    if isinstance(model, IterableBase):
```
The variable 'model' is not defined in the scope of this function. This will cause a NameError at runtime. Please revert the change to 'response_model'.
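The scoping issue the reviewer points out can be sketched in isolation. This is a hypothetical, simplified stand-in for `process_response` (the real function in `instructor/patch.py` does more); it only illustrates why referencing the unbound name `model` raises `NameError` while the parameter `response_model` works:

```python
class IterableBase:
    """Stand-in for instructor's IterableBase marker class."""


def process_response(response_model):
    # Correct: `response_model` is the parameter bound in this scope.
    if isinstance(response_model, IterableBase):
        return "tasks"
    return "single"


def buggy_process_response(response_model):
    # Bug: `model` was never defined in this scope, so calling this
    # function raises NameError at runtime, exactly as the review says.
    return isinstance(model, IterableBase)


print(process_response(IterableBase()))  # → tasks

try:
    buggy_process_response(IterableBase())
except NameError as exc:
    print(exc)  # name 'model' is not defined
```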
```diff
@@ -267,7 +266,7 @@
     # ? This really hints at the fact that we need a better way of
     # ? attaching usage data and the raw response to the model we return.
-    if isinstance(response_model, IterableBase):
+    if isinstance(model, IterableBase):
```
The variable 'model' is not defined in the scope of this function. This will cause a NameError at runtime. Please revert the change to 'response_model'.
Changes requested.
```diff
@@ -213,8 +214,7 @@ def process_response(
     # ? This really hints at the fact that we need a better way of
     # ? attaching usage data and the raw response to the model we return.
-    if isinstance(response_model, IterableBase):
+    #! If the response model is a multitask, return the tasks
+    if isinstance(model, IterableBase):
```
The variable `model` is not defined in the scope of this function. This will likely cause a `NameError` at runtime. Did you mean to use `response_model` instead?
tests/openai/test_multitask.py
Outdated
```python
resp = []
async for user in await stream_extract(input="Jason is 20, Sarah is 30"):
    resp.append(user)
print(resp)
```
Please remove the `print` statement. It's not a good practice to have print statements in test cases. If you want to check the value of `resp`, consider using assertions or logging.
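The suggested fix could look like the sketch below. It is hedged: `stream_extract` here is stubbed with a local async generator purely for illustration (the real test calls the patched OpenAI client), and the expected values are hypothetical; the point is asserting on the collected results rather than printing them:

```python
import asyncio


async def fake_stream(items):
    # Stub standing in for the awaited stream returned by stream_extract.
    for item in items:
        yield item


async def collect_users():
    resp = []
    async for user in fake_stream([("Jason", 20), ("Sarah", 30)]):
        resp.append(user)
    # Assert on the collected results instead of printing them.
    assert len(resp) == 2
    assert resp[0] == ("Jason", 20)
    assert resp[1] == ("Sarah", 30)
    return resp


users = asyncio.run(collect_users())
```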
fixes #412
Summary:

This PR fixes a bug in `instructor/patch.py` and adds new tests in `tests/openai/test_multitask.py` for streaming and non-streaming scenarios.

Key points:

- Updated `process_response` and `process_response_async` in `instructor/patch.py` to check `model` instead of `response_model`.
- Added new tests in `tests/openai/test_multitask.py` for streaming and non-streaming scenarios, both synchronous and asynchronous.