Bad request: The following model_kwargs are not used by the model: ['return_full_text', 'stop', 'watermark', 'stop_sequences'] (note: typos in the generate arguments will also show up in this list) #18321
Comments
I'm facing the same problem (with various other models as well). I believe it could be caused by this commit.

`os.environ["HUGGINGFACEHUB_API_TOKEN"] = ""`

Any help?
I downgraded my requirements on the langchain library for now, and I can use the endpoint class. It's just a workaround, but for reference, I'm using Python 3.11 and have pinned the following package versions:
I am also facing this issue, and it appears that @nicole-wright is correct. It is specifically these lines: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/huggingface_endpoint.py#L199-L215 which inject default params into the call to the HuggingFace API regardless of whether they are set on the HuggingFaceEndpoint instance or not. If a model/endpoint does not support these, HuggingFace throws back an error (even if the params are populated as None). Perhaps a simple fix would be to remove None values from the payload before sending it.
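A minimal sketch of the fix suggested above, assuming a simple filter on the parameter dict. The helper name and the example dict are illustrative, not the actual langchain internals:

```python
# Hypothetical helper illustrating the suggested fix: drop None-valued
# entries from the parameter dict before the request is sent, so endpoints
# that don't accept those keys never see them.
def prune_params(params: dict) -> dict:
    """Return a copy of params with all None values removed."""
    return {key: value for key, value in params.items() if value is not None}

# Defaults like those injected by HuggingFaceEndpoint, mostly unset:
defaults = {
    "max_new_tokens": 250,
    "return_full_text": None,
    "stop_sequences": None,
    "watermark": None,
}
print(prune_params(defaults))  # → {'max_new_tokens': 250}
```

With a filter like this in place, a model that rejects `watermark` or `stop_sequences` would never receive those keys unless the user set them explicitly.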
This is also happening when using local models. I got errors of this type:

However, this works when using a local Hugging Face gpt2 model with "text-generation".
Did anyone solve this issue?
I'm fairly sure it's a bug and it needs a PR. I'm not sure I know the library well enough to contribute, but I could always give it a shot. For the prototyping I'm doing, downgrading to a previous release works, and I can still access both the hub and the pipeline objects correctly.
I agree; I also see the problem with various models such as microsoft/phi-1_5.
I am trying to install the langchain-text-splitters library, but it is not compatible with this set of libraries, so I needed to upgrade the langchain library (and then the main model_kwargs error shows up again).
I have an environment like the above, and code as below:

```python
import os

from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint

pipeline = HuggingFaceEndpoint(
    huggingfacehub_api_token=os.getenv("HUGGINGFACE_API_KEY"),
    repo_id="facebook/bart-large-cnn",
)
result = pipeline.invoke(doc.page_content)  # original had `pipe.invoke(...)`, a NameError
```

And I am facing the same error as above. I tried downgrading the langchain library, but that causes issues with other packages (langchain-community etc.). What can I do?
Solution: Using HuggingFaceHub

I found an effective way to work around this using the HuggingFaceHub class.

Previous Method

Improved Method

This new method utilizes HuggingFaceHub from LangChain with more detailed model configurations.
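The improved method described above can be sketched as follows. This is an assumption-laden illustration: the `repo_id`, parameter values, and prompt are examples, not taken from the original comment, and `HuggingFaceHub` is the older, deprecated class that only sends the `model_kwargs` you explicitly provide:

```python
import os

# Example configuration; with HuggingFaceHub, only the params you set here
# are sent to the API, so unsupported defaults are never injected.
model_kwargs = {
    "temperature": 0.5,      # illustrative values; tune for your model
    "max_new_tokens": 250,
}

# Only construct and call the model when an API token is available.
if os.getenv("HUGGINGFACEHUB_API_TOKEN"):
    from langchain_community.llms import HuggingFaceHub

    llm = HuggingFaceHub(
        repo_id="google/flan-t5-large",  # example repo, as in the report below
        model_kwargs=model_kwargs,
    )
    print(llm.invoke("What is a good name for a company that makes colorful socks?"))
```

Note that HuggingFaceHub is deprecated in favor of HuggingFaceEndpoint, so this trades the bug for deprecation warnings.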
@aymeric-roucher, @baskaryan - could you provide a bug fix, please? Your commit 0d29476 breaks LangChain (see bug report above)!
Hey, I'm also getting the same issue as mentioned above. I tried downgrading the packages with no success. Is there any other workaround?
Same problem here; I reverted back to HuggingFaceHub and am enduring the deprecation warnings. Hopefully the bug gets fixed before the workaround is unsupported.
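As a stopgap while on the deprecated class, the warnings can be silenced with the standard library. This only hides them; depending on your LangChain version you may need to name its specific warning class instead of the generic one:

```python
import warnings

# Suppress DeprecationWarning (which LangChain's deprecation warnings
# subclass, as an assumption about recent versions) in a limited scope,
# so other warnings elsewhere in the program are unaffected.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    # ... construct and call HuggingFaceHub here ...
    warnings.warn("HuggingFaceHub is deprecated", DeprecationWarning)  # demo warning
print("done")  # the demo warning above is not printed
```

Scoping the filter with `catch_warnings()` is safer than a global `filterwarnings("ignore")`, which would also hide warnings from unrelated code.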
Checked other resources
Example Code
```python
from langchain.prompts import PromptTemplate
from langchain_community.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.chains import LLMChain

llm = HuggingFaceEndpoint(
    repo_id="google/flan-t5-large",
    temperature=0,
    max_new_tokens=250,
    huggingfacehub_api_token=HUGGINGFACE_TOKEN,
)

prompt_tpl = PromptTemplate(
    template="What is the good name for a company that makes {product}",
    input_variables=["product"],
)

chain = LLMChain(llm=llm, prompt=prompt_tpl)
print(chain.invoke("colorful socks"))
```
Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "/Users/michaelchu/Documents/agent/agent.py", line 20, in <module>
    print(chain.invoke("colorful socks"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 741, in generate
    output = self._generate_helper(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
    raise e
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
    self._generate(
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_core/language_models/llms.py", line 1177, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/langchain_community/llms/huggingface_endpoint.py", line 256, in _call
    response = self.client.post(
               ^^^^^^^^^^^^^^^^^
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/huggingface_hub/inference/_client.py", line 242, in post
    hf_raise_for_status(response)
  File "/Users/michaelchu/Documents/agent/venv/lib/python3.12/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
    raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: AxsbrX3A4JxXuBdYC7fv-)
Bad request:
The following model_kwargs are not used by the model: ['return_full_text', 'stop', 'watermark', 'stop_sequences'] (note: typos in the generate arguments will also show up in this list)
```

Description
Hi, folks. I'm just trying to run a simple LLMChain and I get a Bad Request due to the model_kwargs check. I found several similar issues have been raised; however, this hasn't been fixed in the latest release of langchain. Please take a look, thanks!
Previous Issue being raised: #10848
System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: