Implement cancellation and timeout for all LLMs, allow passing LLM call options via LLMChain (like was already possible for stop) #1182
Conversation
Quoted diff:

```ts
 * Wraps _call and handles memory.
 */
call(
  values: ChainValues & this["llm"]["CallOptions"],
```
My ideal signature here (when v1 comes around?) would be:
```ts
call(
  values: ChainValues,
  options: this["llm"]["CallOptions"] // includes "callbacks?"
)
```
So as not to have any restricted/meta properties as I'm sure we'll keep adding more and more. Would that make sense long-term?
Otherwise, if this is the long-term solution, could it make sense to make just `options` or `callOptions` (and I guess `stop`) a special value within the `values` param here?
```ts
call(
  values: ChainValues & {
    options: this["llm"]["CallOptions"]
  }
)
```
I see you have something like this below for `openai-chat`, but building it in at a lower level might make sense to me if this is the long-term fix.
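To make the two proposals concrete, here is a minimal sketch of the first one (chain inputs and per-call options as two separate arguments). The names `SketchChain` and `BaseCallOptions` are hypothetical, purely for illustration; the real `LLMChain` carries prompts, memory, callbacks, etc.

```typescript
// Hypothetical sketch of the two-argument `call(values, options)` signature
// discussed above. None of these class names exist in langchain as-is.
type ChainValues = Record<string, unknown>;

interface BaseCallOptions {
  stop?: string[];
  timeout?: number;
}

// A chain parameterized over its LLM's call options, so callers never mix
// restricted/meta properties into the chain's input values.
class SketchChain<Options extends BaseCallOptions> {
  async call(values: ChainValues, options?: Options): Promise<ChainValues> {
    // A real chain would forward `options` to the LLM step; here we just
    // echo both arguments to show they stay separate.
    return { ...values, _options: options ?? {} };
  }
}
```

The upside of this shape is that adding new meta options later never collides with a user's input keys.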
I think it could make sense to add callbacks in call options
Yea, the thing is only language models have call options, whereas lots of other objects in the library have callbacks, so not sure about that
I think overall we need a better solution for passing args to specific steps of a chain (e.g. in an LLMChain the prompt is the first step, the LLM is the second, and the output parser is the third). Not sure what the right approach is just yet.
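One purely illustrative shape for the open question above is to namespace per-call options by step name. Nothing here comes from the PR; `LLMChainStepOptions` and `mergeStepOptions` are invented names sketching the idea.

```typescript
// Hypothetical per-step options for an LLMChain, keyed by step name.
// These names do not exist in langchain; this only sketches one idea.
interface LLMChainStepOptions {
  prompt?: { partialValues?: Record<string, string> };
  llm?: { stop?: string[]; timeout?: number };
  outputParser?: { strict?: boolean };
}

// Shallow-merge chain-level defaults with per-call overrides, letting the
// per-call values win for each step independently.
function mergeStepOptions(
  defaults: LLMChainStepOptions,
  overrides: LLMChainStepOptions
): LLMChainStepOptions {
  return {
    prompt: { ...defaults.prompt, ...overrides.prompt },
    llm: { ...defaults.llm, ...overrides.llm },
    outputParser: { ...defaults.outputParser, ...overrides.outputParser },
  };
}
```

The appeal is that each step's options stay typed and isolated, so adding a new step never changes the shape of another step's options.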
langchain/src/llms/base.ts (outdated)

```diff
@@ -229,20 +237,20 @@ export abstract class LLM extends BaseLLM {
   */
  abstract _call(
    prompt: string,
-    stop?: string[] | this["CallOptions"],
+    options?: Omit<this["CallOptions"], "timeout">,
```
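A likely reason `timeout` is stripped from the options handed to `_call` is that the base class can enforce it around the subclass call itself. The `Promise.race` wrapper below is an assumption sketching that pattern, not necessarily the PR's actual mechanism (which also covers cancellation).

```typescript
// Sketch: enforce a timeout around an arbitrary promise, so subclasses
// implementing _call never need to handle `timeout` themselves.
// This is an illustrative helper, not code from the PR.
async function withTimeout<T>(promise: Promise<T>, timeoutMs?: number): Promise<T> {
  if (timeoutMs === undefined) return promise;
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${timeoutMs}ms`)), timeoutMs);
  });
  try {
    // Whichever settles first wins; the loser's timer is cleaned up below.
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```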
Wait, `call` doesn't actually call `_call`?
Yeah, this is a bit confusing:

For LLMs, the main entrypoint is `generate`, which eventually calls `_generate` (which is abstract). `call` is a convenience method in `BaseLLM` that calls `generate` when a single input/output is expected.

To make it easier for people to implement their own LLMs, there is a simpler base class, `LLM`, where users only have to implement a single `_call` method (single input/output) that `_generate` calls.
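The layering described above can be sketched roughly as follows. This is a heavily simplified illustration; the real classes in langchain/src/llms/base.ts also handle caching, callbacks, call options, and richer generation types.

```typescript
// Simplified sketch of the BaseLLM / LLM layering described above.
abstract class BaseLLMSketch {
  // Main entrypoint: a batch of prompts in, one list of generations per prompt out.
  async generate(prompts: string[]): Promise<string[][]> {
    return this._generate(prompts);
  }

  protected abstract _generate(prompts: string[]): Promise<string[][]>;

  // Convenience wrapper for the single input / single output case;
  // it routes through generate rather than calling _call directly.
  async call(prompt: string): Promise<string> {
    const generations = await this.generate([prompt]);
    return generations[0][0];
  }
}

// Simpler base class for implementers: provide _call only, and
// _generate maps it over the batch.
abstract class LLMSketch extends BaseLLMSketch {
  protected abstract _call(prompt: string): Promise<string>;

  protected async _generate(prompts: string[]): Promise<string[][]> {
    return Promise.all(prompts.map(async (p) => [await this._call(p)]));
  }
}
```

So `call` reaches `_call` only indirectly, via `generate` and the `LLM` subclass's `_generate`.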
Hey @nfcampos, I implemented several of the comments and changes I discussed here: jacoblee93@953f848. I didn't want to just directly merge into your branch before you looked at them. I added a new declared … Let me know what you think!
overall looks good