Expose kwargs in LLMChainExtractor.from_llm #3748

Merged

Conversation

ravwojdyla (Contributor):

Re: #3747

ravwojdyla changed the title to Expose kwargs in LLMChainExtractor.from_llm on Apr 28, 2023
skcoirz (Contributor) left a comment:

Hi ravwojdyla, thank you for the proposal! Since there isn't much context in the summary, would you mind sharing a bit about why you need this parameter and in which scenarios it would benefit other users too? I'd like to learn more from you! Thank you!

@@ -69,9 +69,10 @@ def from_llm(
        llm: BaseLanguageModel,
        prompt: Optional[PromptTemplate] = None,
        get_input: Optional[Callable[[str, Document], str]] = None,
        **llm_chain_kwargs: dict[str, Any]
skcoirz (Contributor):

From my understanding, this is for llm_chain creation, but the purpose of this function is to load from an llm directly, with no additional customization of llm_chain.

If my understanding is correct, I would suggest the following:

  1. construct an LLMChain outside and then create the LLMChainExtractor from it (see the sketch after this comment).
  2. if you need further convenience beyond step 1, write a static utility function with your preferred input and output in this file, or in another file in this folder.

I could be wrong. I'll wait for a maintainer to comment further here. :)
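
Option 1 might look roughly like the sketch below. The import paths, the stand-in prompt, and the chosen parameters are assumptions for illustration; the real extractor ships its own default extraction prompt plus an output parser, which are omitted here to keep the sketch short.

    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.retrievers.document_compressors import LLMChainExtractor

    llm = OpenAI(temperature=0)

    # Stand-in prompt; the input variable names ("question" and "context") are
    # assumed to match what the extractor feeds the chain by default.
    prompt = PromptTemplate(
        input_variables=["question", "context"],
        template="Extract the parts of the context relevant to: {question}\n\n{context}",
    )

    # Full control over LLMChain parameters (verbose, callbacks, ...) ...
    llm_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

    # ... then build the compressor directly instead of going through from_llm.
    compressor = LLMChainExtractor(llm_chain=llm_chain)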

ravwojdyla (Contributor, Author) replied with an example.

skcoirz (Contributor):

Yeah, got you! Thank you for sharing the example! This is just my personal idea; please let me know if this doesn't make sense to you. :)

In the example you shared, the kwargs are passed to the class itself, so we can consider that function an entry point for initializing the class. Code below:
https://github.com/hwchase17/langchain/blob/72c5c15f7fdc1918880e3cfd0949199e5a0b5bda/langchain/chains/qa_with_sources/base.py#L39-L64

But in this PR, the kwargs parameter is used as input for LLMChain, which is an intermediate layer between the function and the class. From my understanding, a more suitable solution would be to add another entry function that builds this extractor from an LLMChain directly, which, however, would overlap with the default creation method of this class. That's why I think it's easier to create an LLMChain outside and then build this class directly.

Please let me know if this makes sense to you. Thank you!

ravwojdyla (Contributor, Author):

Sure, that sounds good, but then I (or any user) would need to initialize the LLMChain in much the same way it's already done in from_llm, which is a couple of extra lines of code just to pass verbose=True. I really appreciate the convenience of the from_llm method (where I can reuse an existing LLM object), so why not (in my opinion) improve it slightly and let the user pass kwargs to the internal LLMChain?
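
Concretely, the convenience being argued for looks something like the call below. This is a hedged sketch using the keyword name that was eventually merged (llm_chain_kwargs); an earlier revision of this PR spelled it as **llm_chain_kwargs instead, and the import paths are assumed from the langchain version of that time.

    from langchain.llms import OpenAI
    from langchain.retrievers.document_compressors import LLMChainExtractor

    llm = OpenAI(temperature=0)  # an existing LLM object being reused

    # One call; extra LLMChain options such as verbose ride along in a dict,
    # instead of re-implementing from_llm's chain construction by hand.
    compressor = LLMChainExtractor.from_llm(
        llm, llm_chain_kwargs={"verbose": True}
    )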

skcoirz (Contributor):

Ah, that makes sense. If verbose is the only parameter we need, maybe build another wrapper function on top of from_llm with a _with_verbose name suffix? That way we get verbose and don't need to worry about the other parameters in kwargs. What do you think?

ravwojdyla (Contributor, Author):

@skcoirz verbose is just an example; a specialized _with_verbose variant seems a bit smelly.

skcoirz (Contributor):

Sure, it's just my 2 cents. :)

ravwojdyla (Contributor, Author):

Sounds good, appreciate that :)

ravwojdyla force-pushed the llm_chain_extractor_from_llm_kwargs branch from 2ed62a0 to 32e40a2 on April 29, 2023 at 02:34

ravwojdyla (Contributor, Author):

Fixed up linting error ^

ravwojdyla force-pushed the llm_chain_extractor_from_llm_kwargs branch 2 times, most recently from 4dfa4c7 to 24571f9 on April 29, 2023 at 03:22
@@ -69,9 +69,10 @@ def from_llm(
        llm: BaseLanguageModel,
        prompt: Optional[PromptTemplate] = None,
        get_input: Optional[Callable[[str, Document], str]] = None,
        llm_chain_kwargs: Union[Dict[str, Any], None] = None,
Contributor:

this can just be Optional[dict] = None

ravwojdyla (Contributor, Author):

Thanks, and sorry: we're on py3.10, so I forgot all about Optional ... :)
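
For reference, with the Optional[dict] change the from_llm classmethod ends up looking roughly like the sketch below. It assumes the module's existing default helpers (_get_default_chain_prompt and default_get_input) keep their names; treat it as an illustration of how the dict is forwarded to the internal LLMChain, not a verbatim copy of the merged code.

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        prompt: Optional[PromptTemplate] = None,
        get_input: Optional[Callable[[str, Document], str]] = None,
        llm_chain_kwargs: Optional[dict] = None,
    ) -> "LLMChainExtractor":
        """Initialize from an LLM, optionally forwarding kwargs to the LLMChain."""
        _prompt = prompt if prompt is not None else _get_default_chain_prompt()
        _get_input = get_input if get_input is not None else default_get_input
        # Any provided kwargs are unpacked into the internal LLMChain here.
        llm_chain = LLMChain(llm=llm, prompt=_prompt, **(llm_chain_kwargs or {}))
        return cls(llm_chain=llm_chain, get_input=_get_input)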

ravwojdyla force-pushed the llm_chain_extractor_from_llm_kwargs branch from 24571f9 to 71cf30f on April 29, 2023 at 03:43
ravwojdyla force-pushed the llm_chain_extractor_from_llm_kwargs branch from 71cf30f to 0d45b14 on April 29, 2023 at 03:46
hwchase17 (Contributor) left a comment:

awesome - thanks!

hwchase17 merged commit 57e0285 into langchain-ai:master on Apr 29, 2023
9 checks passed
hwchase17 pushed a commit that referenced this pull request on Apr 29, 2023 (#3773):

a simple follow-up of #3748
  - added a test case
  - improved the annotation when the function return type is the class itself
samching pushed a commit to samching/langchain that referenced this pull request on May 1, 2023 (langchain-ai#3773)

3 participants