
Langchain text_generation client hits: pydantic.error_wrappers.ValidationError: 1 validation error for Response #9146

Closed
htang2012 opened this issue Aug 11, 2023 · 2 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules

Comments

@htang2012

System Info

langchain 0.0.262
text_generation server: https://github.com/huggingface/text-generation-inference

Who can help?

No response

Information

  • The official example notebooks/scripts
  • My own modified scripts

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

Reproduction

Start the text_generation_inference server on localhost and verify it is working.
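One way to run that sanity check independent of LangChain is to hit TGI's `/generate` REST endpoint directly (a minimal stdlib-only sketch; the URL and `max_new_tokens` value mirror the reproduction script, and `build_request`/`query_tgi` are hypothetical helper names):

```python
import json
import urllib.request

def build_request(prompt, url="http://127.0.0.1:80", max_new_tokens=64):
    # TGI's /generate endpoint accepts a JSON body of the form
    # {"inputs": ..., "parameters": {...}}.
    body = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    return urllib.request.Request(
        f"{url}/generate",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

def query_tgi(prompt, **kwargs):
    # Send the request and pull the generated text out of the JSON response.
    with urllib.request.urlopen(build_request(prompt, **kwargs)) as resp:
        return json.loads(resp.read())["generated_text"]
```

If `query_tgi("What is Machine Learning?")` returns text, the server itself is healthy and the failure is on the client side.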

===============
from langchain import PromptTemplate, HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://127.0.0.1:80",
    max_new_tokens=64,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=1,
    repetition_penalty=1.03,
)
output = llm("What is Machine Learning?")
print(output)

=================

root@0b801769b7bd:~/langchain_client# python langchain-client.py
Traceback (most recent call last):
  File "/root/langchain_client/langchain-client.py", line 59, in <module>
    output = llm("What is Machine Learning?")
  File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 802, in __call__
    self.generate(
  File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 598, in generate
    output = self._generate_helper(
  File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 504, in _generate_helper
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 491, in _generate_helper
    self._generate(
  File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 977, in _generate
    self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/langchain/llms/huggingface_text_gen_inference.py", line 164, in _call
    res = self.client.generate(prompt, **invocation_params)
  File "/usr/local/lib/python3.10/dist-packages/text_generation/client.py", line 150, in generate
    return Response(**payload[0])
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Response
details -> tokens -> 6 -> logprob
  none is not an allowed value (type=type_error.none.not_allowed)
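The error says the server returned `logprob: None` for token 6, but the `text_generation` client's pydantic model declares that field as a required float. A minimal sketch of the failure mode (`StrictToken`/`LenientToken` are hypothetical, simplified stand-ins for the client's actual `Token` model):

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class StrictToken(BaseModel):
    text: str
    logprob: float  # required float: rejects the None the server sent

class LenientToken(BaseModel):
    text: str
    logprob: Optional[float]  # nullable: accepts logprob=None

try:
    StrictToken(text="Machine", logprob=None)
except ValidationError:
    # pydantic refuses None for a plain `float` field, which is
    # the same class of failure as the traceback above
    pass

token = LenientToken(text="Machine", logprob=None)  # validates fine
```

This is why the root cause sits in the `text_generation` client's model definitions rather than in LangChain, which only forwards the server payload.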

Expected behavior

Output text from the text generation inference server.

@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Aug 11, 2023
@htang2012
Author

Closing this issue because the root cause is on the text_generation inference client side, not in LangChain.

@htang2012
Author

see above.
