Logs
Traceback (most recent call last):
  File "/mnt/c/Users/Public/Documents/PythonProjectsVenv1/help/plz.py", line 8, in <module>
    llm.invoke('What is the difference between a duck and a goose? And why there are so many Goose in Canada?')
  File "/home/ssikki/.virtualenvs/openllm_test/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 276, in invoke
    self.generate_prompt(
  File "/home/ssikki/.virtualenvs/openllm_test/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 633, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/ssikki/.virtualenvs/openllm_test/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 803, in generate
    output = self._generate_helper(
  File "/home/ssikki/.virtualenvs/openllm_test/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 670, in _generate_helper
    raise e
  File "/home/ssikki/.virtualenvs/openllm_test/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 657, in _generate_helper
    self._generate(
  File "/home/ssikki/.virtualenvs/openllm_test/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 1317, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
  File "/home/ssikki/.virtualenvs/openllm_test/lib/python3.10/site-packages/langchain_community/llms/openllm.py", line 273, in _call
    self._client.generate(prompt, **config.model_dump(flatten=True))
TypeError: BaseModel.model_dump() got an unexpected keyword argument 'flatten'
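The TypeError can be reproduced with any Pydantic v2 model, independent of OpenLLM: `model_dump()` has no `flatten` parameter, which is what the call in `langchain_community/llms/openllm.py` trips over. A minimal sketch (the `GenerationConfig` model here is a hypothetical stand-in for the config object that `openllm.py` dumps):

```python
from pydantic import BaseModel


# Hypothetical stand-in for the generation config that
# langchain_community/llms/openllm.py dumps before calling the client.
class GenerationConfig(BaseModel):
    temperature: float = 0.7
    max_new_tokens: int = 128


config = GenerationConfig()

# Pydantic v2's model_dump() accepts keywords such as mode, include,
# and exclude, but there is no 'flatten' keyword, so this raises the
# same TypeError seen in the traceback above.
try:
    config.model_dump(flatten=True)
except TypeError as err:
    print(err)
```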
Describe the bug
Connecting to OpenLLM isn't the issue, from the look of it, but actually using it from LangChain is.
To reproduce
code
from langchain_community.llms import OpenLLM
print("connecting")
llm = OpenLLM(server_url='http://localhost:3000')
print("sending")
llm.invoke('What is the difference between a duck and a goose? And why there are so many Goose in Canada?')
Note that it gets all the way to print("sending") before raising the error.
The following code does work:
import openllm_client
client = openllm_client.HTTPClient('http://localhost:3000')
print(client.generate('What are large language models?'))
Environment
bentoml env
Environment variable
System information
bentoml: 1.2.17
python: 3.10.12
platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
uid_gid: 1000:1000
pip_packages
transformers-cli env
transformers version: 4.41.2
System information (Optional)
Processor AMD Ryzen 5 7600X 6-Core Processor 4.70 GHz
Installed RAM 32.0 GB (31.2 GB usable)
GPU Nvidia GeForce RTX 4060 Ti