Expose kwargs in LLMChainExtractor.from_llm
#3748
Conversation
LLMChainExtractor.from_llm
Hi ravwojdyla, thank you for the proposal! Since there isn't much context in the summary, do you mind sharing a bit about why you need this parameter and in which scenarios it would benefit other users too? I'd like to learn more from you! Thank you!
@@ -69,9 +69,10 @@ def from_llm(
        llm: BaseLanguageModel,
        prompt: Optional[PromptTemplate] = None,
        get_input: Optional[Callable[[str, Document], str]] = None,
        **llm_chain_kwargs: dict[str, Any]
From my understanding, this is for `llm_chain` creation, but the purpose of this function is to load from an `llm` directly, with no additional customization of the `llm_chain`.
If my understanding is correct, I would suggest the following:
- construct an `LLMChain` outside and then create the `LLMChainExtractor` (see the sketch after this comment).
- if you need further convenience beyond step 1, I would suggest writing a static utility function with your preferred input and output in this file or another file in this folder.
I could be wrong. I'll wait for a maintainer to comment further here. :)
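For reference, a minimal sketch of the first option, with the chain constructed outside `from_llm`. The prompt and the `OpenAI` wrapper below are best-effort illustrations, not code from this PR:

```python
# Sketch only: build the LLMChain yourself, then hand it to LLMChainExtractor.
# The prompt here is a simplified stand-in; from_llm normally supplies its own
# default extraction prompt and output parser.
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.retrievers.document_compressors import LLMChainExtractor

llm = OpenAI(temperature=0)
prompt = PromptTemplate(
    input_variables=["question", "context"],
    template=(
        "Extract the parts of the context that are relevant to the question.\n"
        "Question: {question}\nContext: {context}"
    ),
)

# Any LLMChain option (e.g. verbose=True) can be set here directly.
llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
extractor = LLMChainExtractor(llm_chain=llm_chain)
```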
👋 @skcoirz see the context in #3747; this is the same pattern as in `BaseQAWithSourcesChain`:
https://github.com/hwchase17/langchain/blob/72c5c15f7fdc1918880e3cfd0949199e5a0b5bda/langchain/chains/qa_with_sources/base.py#L40-L47
Yeah, got you! Thank you for sharing the example! This is just my personal idea, please let me know if this doesn’t make sense to you. :)
In the example you shared, the `kwargs` are passed to the class itself, so we can consider that function as an entry point for instantiating the class itself. Code below:
https://github.com/hwchase17/langchain/blob/72c5c15f7fdc1918880e3cfd0949199e5a0b5bda/langchain/chains/qa_with_sources/base.py#L39-L64
But in this PR, the `kwargs` parameter is used as input for the `LLMChain`, which is an intermediate layer between the function and the class. From my understanding, a more suitable solution would be to add another entry function that builds this extractor from an `LLMChain` directly, which, however, would overlap with the default constructor of this class. That's why I think it's easier to create an `LLMChain` outside and then build this class directly.
Please let me know if this makes sense to you. Thank you!
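To illustrate the distinction with a toy sketch (the classes and names below are made up for illustration, not the actual code):

```python
# Toy illustration of the two kwargs-forwarding patterns being compared:
# Pattern A forwards **kwargs to the class itself (as in BaseQAWithSourcesChain),
# Pattern B forwards them to the intermediate chain (as in this PR).
from typing import Any, Optional


class ToyChain:
    """Stand-in for LLMChain."""

    def __init__(self, llm: str, prompt: str, verbose: bool = False) -> None:
        self.llm, self.prompt, self.verbose = llm, prompt, verbose


class PatternA:
    """kwargs reach the class itself; from_llm stays a thin entry point."""

    def __init__(self, chain: ToyChain, verbose: bool = False) -> None:
        self.chain, self.verbose = chain, verbose

    @classmethod
    def from_llm(cls, llm: str, **kwargs: Any) -> "PatternA":
        return cls(chain=ToyChain(llm, "default prompt"), **kwargs)


class PatternB:
    """kwargs reach the intermediate chain, one layer below the class."""

    def __init__(self, chain: ToyChain) -> None:
        self.chain = chain

    @classmethod
    def from_llm(
        cls, llm: str, llm_chain_kwargs: Optional[dict] = None
    ) -> "PatternB":
        chain = ToyChain(llm, "default prompt", **(llm_chain_kwargs or {}))
        return cls(chain=chain)


a = PatternA.from_llm("fake-llm", verbose=True)        # verbose -> PatternA
b = PatternB.from_llm("fake-llm", {"verbose": True})   # verbose -> ToyChain
```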
Sure, that sounds good, but then I (the user) would need to initialize the `LLMChain` much the same way it's being done in `from_llm` (which is a couple of extra lines of code, just to pass `verbose=True`). I really appreciate the convenience of the `from_llm` method (where I can reuse an existing LLM object), so why not (in my opinion) improve it slightly and let the user pass kwargs to the internal `LLMChain`.
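Concretely, the requested convenience would look roughly like this (shown with the `llm_chain_kwargs` parameter as it appears in the later revision of this PR; the `OpenAI` wrapper is just an example LLM):

```python
# Sketch of the one-call convenience: reuse an existing LLM object and forward
# options such as verbose=True to the internal LLMChain via from_llm.
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainExtractor

llm = OpenAI(temperature=0)  # an existing LLM object, reused here
extractor = LLMChainExtractor.from_llm(llm, llm_chain_kwargs={"verbose": True})
```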
Ah, that makes sense. If `verbose` is the only param we need, maybe we could build another wrapper function on top of `from_llm` with the name suffix `_with_verbose`? That way we get `verbose` and we don't need to worry about the other parameters in kwargs. What do you think?
@skcoirz `verbose` is just an example; a specialized `_with_verbose` seems a bit smelly.
Sure, it's just my 2 cents. :)
Sounds good, appreciate that :)
Force-pushed 2ed62a0 to 32e40a2
Fixed up linting error ^
Force-pushed 4dfa4c7 to 24571f9
@@ -69,9 +69,10 @@ def from_llm(
        llm: BaseLanguageModel,
        prompt: Optional[PromptTemplate] = None,
        get_input: Optional[Callable[[str, Document], str]] = None,
        llm_chain_kwargs: Union[Dict[str, Any], None] = None,
this can just be Optional[dict] = None
Thanks, sorry, we are on py3.10, forgot all about `Optional`... :)
Force-pushed 24571f9 to 71cf30f
Force-pushed 71cf30f to 0d45b14
awesome - thanks!
langchain-ai#3773: a simple follow-up of langchain-ai#3748 - added a test case - improved the annotation where the function return type is the class itself.
Re: #3747