feat: Add max_tokens to BaseGenerator params #4168
Conversation
FYI, committed some minor lg changes
@masci this PR should be gtg now; the impact is minimal though, as we'll deprecate all these generators soon anyway.
test/nodes/test_generator.py (Outdated)

@@ -197,3 +197,22 @@ def test_build_prompt_within_max_length():

    assert len(prompt_docs) == 1
    assert prompt_docs[0] == documents[0]

@pytest.mark.integration
Should this be unit at this point?
Right, I wanted to, but when I tagged the test with the unit marker, it failed with

FAILED test/nodes/test_generator.py::test_openai_answer_generator_pipeline_max_tokens - AttributeError: module 'requests.sessions' has no attribute 'Session'

because, I believe, the unit marker disables any HTTP requests. I reverted it to the integration marker.
That means the mocking was wrong; when this happens, you should fix it instead of just marking the test as integration. I have fixed it for you, please have a look at my commit.
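The fix the reviewer describes amounts to mocking the HTTP layer explicitly rather than relying on the test harness to block network access. A minimal sketch of that pattern, using `unittest.mock` (the `call_api` helper and the response shape are illustrative, not the actual Haystack code):

```python
from unittest.mock import MagicMock

def call_api(session):
    # Stand-in for a generator hitting the OpenAI completions endpoint.
    response = session.post("https://api.openai.com/v1/completions")
    return response.json()

def test_call_api_mocked():
    # The session is a mock, so no real HTTP request is ever made
    # and the test can safely carry a unit marker.
    mock_session = MagicMock()
    mock_session.post.return_value.json.return_value = {"choices": [{"text": "42"}]}
    result = call_api(mock_session)
    assert result["choices"][0]["text"] == "42"

test_call_api_mocked()
```

Because the mock stands in for the session object itself, the test never touches `requests.sessions`, which avoids the `AttributeError` seen above.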
Related Issues

max_tokens in the answer_generator run method #3797

Proposed Changes:
Allow the max_tokens parameter to be set per pipeline-run invocation. Currently, we only allow setting max_tokens per component. Add the max_tokens implementation for OpenAIAnswerGenerator, RagGenerator, and Seq2SeqGenerator.
How did you test it?
Unit tests added for the bare component and the pipeline usage
Notes for the reviewer
Ready for review
Checklist
PR title begins with one of: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.