ChatOpenai logprobs not reported despite being set to True in model_kwargs
#17101
Comments
🤖 Thank you for providing a detailed description of the issue you're encountering. Could you please provide the following details:
This information will help me understand the issue better and provide a more accurate solution. If you want to continue the conversation, start your reply with @dosu-bot.
There are no errors or warnings. There is an output, but it does not include the probabilities. The issue is related to ChatOpenAI, and the only different thing one can do is to change the
Having the same problem with a similar configuration. I can see the logprobs are processed when using debug mode, but they are neither returned by ChatOpenAI nor when used in chains. Is there a way to extract them?
You'll have to use the
@hinthornw, thank you for your reply. Do you mean to use
If it is not what you meant, could you please give a bit more detail? Thank you a lot.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0,
                 openai_api_key=api_key,
                 model_name="gpt-4",
                 model_kwargs={"logprobs": True,
                               "top_logprobs": 3})
llm.generate("Please categorize this text below into positive, negative or neutral: I had a good day")

Error message (a bit long):

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[44], line 12
1 from langchain_openai import ChatOpenAI
4 llm = ChatOpenAI(temperature=0,
5 openai_api_key = api_key,
6 model_name="gpt-4",
7 model_kwargs = {"logprobs": True,
8 "top_logprobs":3})
---> 12 llm.generate("Please categorize this text below into positive, negative or neutral: I had a good day")
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\language_models\chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\language_models\chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\language_models\chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_openai\chat_models\base.py:445, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
441 stream_iter = self._stream(
442 messages, stop=stop, run_manager=run_manager, **kwargs
443 )
444 return generate_from_stream(stream_iter)
--> 445 message_dicts, params = self._create_message_dicts(messages, stop)
446 params = {
447 **params,
448 **({"stream": stream} if stream is not None else {}),
449 **kwargs,
450 }
451 response = self.client.create(messages=message_dicts, **params)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_openai\chat_models\base.py:462, in ChatOpenAI._create_message_dicts(self, messages, stop)
460 raise ValueError("`stop` found in both the input and default params.")
461 params["stop"] = stop
--> 462 message_dicts = [_convert_message_to_dict(m) for m in messages]
463 return message_dicts, params
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_openai\chat_models\base.py:462, in <listcomp>(.0)
460 raise ValueError("`stop` found in both the input and default params.")
461 params["stop"] = stop
--> 462 message_dicts = [_convert_message_to_dict(m) for m in messages]
463 return message_dicts, params
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_openai\chat_models\base.py:150, in _convert_message_to_dict(message)
144 message_dict = {
145 "role": "tool",
146 "content": message.content,
147 "tool_call_id": message.tool_call_id,
148 }
149 else:
--> 150 raise TypeError(f"Got unknown type {message}")
151 if "name" in message.additional_kwargs:
152 message_dict["name"] = message.additional_kwargs["name"]
TypeError: Got unknown type P
@emilmirzayev

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0,
                 openai_api_key=api_key,
                 model_name="gpt-4",
                 model_kwargs={"logprobs": True,
                               "top_logprobs": 3})
llm.generate([[HumanMessage(content="Please categorize this text below into positive, negative or neutral: I had a good day")]])

We know this interface isn't intuitive and are actively brainstorming ways to improve it. If you'd like to understand the root issue and opine on potential solutions, see #17189
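Once `generate` accepts the input, the logprobs typically show up in each generation's metadata, in the same shape the OpenAI chat completions API returns them (`{"content": [{"token": ..., "logprob": ...}, ...]}`). A minimal, self-contained sketch of flattening that structure — the helper name and sample values are illustrative, not from the thread:

```python
# Sketch: flatten an OpenAI-style logprobs payload into (token, logprob) pairs.
# The sample below mimics the assumed shape of the logprobs metadata; values
# are made up for illustration.
def top_tokens(logprobs_content):
    """Return (token, logprob) pairs from a list of per-token logprob dicts."""
    return [(item["token"], item["logprob"]) for item in logprobs_content]

# Illustrative payload, shaped like logprobs["content"] in the API response:
sample = [
    {"token": "positive", "logprob": -0.01, "top_logprobs": []},
    {"token": ".", "logprob": -0.55, "top_logprobs": []},
]

print(top_tokens(sample))  # [('positive', -0.01), ('.', -0.55)]
```

If the logprobs are present in the result, a helper like this makes it easy to inspect the per-token probabilities regardless of which metadata field they land in.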
@baskaryan thanks for the tip! It seems to solve the issue. I also tried the OpenAI native API, though, and the nesting of the fields is quite different. I will also try to see if it works with memory. Thanks for the work again! On second thought, I think the error message could also be better:
Yep, the message could definitely be improved. If you have time to open a PR, it would be much appreciated!
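The confusing `Got unknown type P` arises because a bare string passed to `generate` gets iterated character by character, so the converter sees the single character `"P"` as a "message". As a sketch of the kind of friendlier check discussed above (a hypothetical helper, not actual LangChain code), validating the input type up front would point at the expected nesting instead:

```python
# Hypothetical input check, not actual LangChain code: reject a bare string
# before it is iterated character by character as if it were a message list.
def validate_generate_input(messages):
    if isinstance(messages, str):
        raise TypeError(
            "generate() expects a list of message lists, e.g. "
            "[[HumanMessage(content='...')]], but got a plain string."
        )
    return messages

try:
    validate_generate_input("I had a good day")
except TypeError as e:
    print(e)
```

With a check like this, the failure mode above would name the expected structure directly rather than surfacing a single stray character.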
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
The logprobs argument is again settable in the OpenAI API, according to this official source (OpenAI docs):
However, when I try to use it via LangChain, it does not exist in the output despite explicitly being set to True in model_kwargs. If I put the logprobs parameter outside model_kwargs, it does show a warning, which gives me confidence that the place is right.

System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found:
EDIT: added openai package version.