provide visibility into final prompt #912
this should be possible with tracing! have you tried it out? https://langchain.readthedocs.io/en/latest/tracing.html |
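For reference, a minimal sketch of switching tracing on, assuming the `LANGCHAIN_TRACING` environment variable recognized by 0.0.x-era LangChain (the exact mechanism has varied across versions):

```python
import os

# Assumption: 0.0.x-era LangChain switches tracing on when this
# environment variable is set before the chain/agent runs.
os.environ["LANGCHAIN_TRACING"] = "true"
```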
I had a look at the tracing/debug system documentation; nevertheless, a minimal requirement could be to just have some "very verbose" flag for LLMs and/or chains that prints out the LLM prompts (and completions). BTW, this is not an issue but a feature request. Consider the following chunk:

```python
llm = OpenAI(temperature=0)

template = '''\
Please respond to the questions accurately and succinctly. \
If you are unable to obtain the necessary data after seeking help, \
indicate that you do not know.
'''

prompt = PromptTemplate(input_variables=[], template=template)
llm_weather_chain = LLMChain(llm=llm, prompt=prompt, verbose=True)

tools = [Weather, Datetime]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
```

The output of the above program shows the agent behavior in nicely colorized text.
|
there is a verbose flag you can pass into the llm! |
Thanks Harrison,

```
$ cat agent.py
```

```python
#
# tools_agent.py
#
# zero-shot react agent that answers questions using available tools
# - Weather
# - Datetime
#
import sys

from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain import LLMChain
from langchain.prompts import PromptTemplate

# import custom tools
from weather_tool import Weather
from datetime_tool import Datetime

llm = OpenAI(temperature=0, verbose=True)

template = '''\
Please respond to the questions accurately and succinctly. \
If you are unable to obtain the necessary data after seeking help, \
indicate that you do not know.
'''

prompt = PromptTemplate(input_variables=[], template=template)

# Load the tool configs that are needed.
llm_weather_chain = LLMChain(
    llm=llm,
    prompt=prompt,
    verbose=True
)

tools = [
    Weather,
    Datetime
]

# Construct the react agent type.
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    verbose=True
)

if __name__ == '__main__':
    if len(sys.argv) > 1:
        question = ' '.join(sys.argv[1:])
        print('question: ' + question)
        # run the agent
        agent.run(question)
    else:
        print('Agent answers questions using Weather and Datetime custom tools')
        print('usage: py tools_agent.py <question sentence>')
        print('example: py tools_agent.py what time is it?')
```

```
$ py agent.py "how is the weather today in Genova?"
```
|
good catch - we probably need to fix this bug, but currently the way to do it would actually be to set
|
Thanks. The workaround works, but yes I think it's a bug. |
I am studying the project and wanted to make some contributions; fixing some bugs/issues might be a good start, so I read through this issue and the related code. I think the issue happens because there are actually two chains:
With the above analysis, I think there might be 2 ways to fix the issue:
@hwchase17 do you have some suggestions? |
Thanks!
So, I see the LLM verbosity as something different (at a lower level) from the agent verbosity. |
maybe adding a |
Well, it could be a way, but currently, when you set
So llm, chain, and agent already have their own distinct verbose flags. |
Yep, it's now impossible to see the final executed prompt :( |
Is there any update on this? I think it is critical to be able to see the final prompt sent to the LLMs. Currently, working with LangChain is too opaque; it makes it really difficult to build complex chains without making mistakes. |
Having the same issue; I need to see the final prompt too. |
It looks like one possible workaround to get the final prompt is to attach a `StdOutCallbackHandler`:

```python
from langchain.callbacks import StdOutCallbackHandler

handler = StdOutCallbackHandler()
chain.run(... , callbacks=[handler])
```
|
Setting |
Modifying `langchain.debug` works:

```python
import langchain

langchain.debug = True
response = agent.run(prompt)
langchain.debug = False
```

The output of this may not be as pretty as verbose mode. I think verbose is designed to be higher level, for individual queries, but for debugging and granular control, debug is more useful. |
Using langchain (0.0.256), building on forin87's comment. Below logs the message to the console:
If you want the prompt as a variable, I'd suggest using callbacks:
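For example, a minimal sketch of such a handler (`PromptCaptureHandler` is an illustrative name; in 0.0.x-era LangChain, `on_llm_start` receives the final formatted prompt strings):

```python
from langchain.callbacks.base import BaseCallbackHandler

class PromptCaptureHandler(BaseCallbackHandler):
    """Collects every final prompt string sent to the LLM."""

    def __init__(self):
        self.prompts = []

    def on_llm_start(self, serialized, prompts, **kwargs):
        # `prompts` holds the fully formatted strings sent to the model
        self.prompts.extend(prompts)

# Assuming `chain` is an existing LLMChain:
handler = PromptCaptureHandler()
chain.run("some input", callbacks=[handler])
print(handler.prompts)  # the captured final prompt(s)
```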
Hopefully this helps! |
Is there an update to this? On top of the final prompt, I believe the final response metadata coming from OpenAI would be helpful: things like prompt token count, completion token count, stop reason, etc. |
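For the token counts specifically, a sketch using the `get_openai_callback` context manager from 0.0.x-era LangChain, which reports usage and cost (but not the stop reason):

```python
from langchain.callbacks import get_openai_callback

# Assuming `agent` is an initialized agent or chain backed by an OpenAI LLM.
with get_openai_callback() as cb:
    agent.run("what time is it?")

print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens)
```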
Hi, @wskish I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you raised requests a mechanism to provide visibility into the final prompt text sent to the completion model for debugging and traceability purposes. The comments discuss various workarounds and potential solutions, including setting the verbose flag for the LLM and agent instances, using callback handlers, and modifying the langchain debug setting. There is also a suggestion to add a dedicated flag for this.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you! |
Would be interesting to see if there are updates on this issue |
If anyone is looking for a simple string output of a single prompt, you can use the `format` method on the prompt template. I struggled to find this as well. In my case, I wanted the final formatted prompt string being used inside of the API call. Example usage:

```python
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    PromptTemplate,
)
from langchain.schema import SystemMessage

# Define a partial variable for the chatbot to use
my_partial_variable = """APPLE SAUCE"""

# Initialize your chat template with partial variables
prompt_messages = [
    # System message
    SystemMessage(content=("""You are a hungry, hungry bot""")),
    # Instructions for the chatbot to set context and actions
    HumanMessagePromptTemplate(
        prompt=PromptTemplate(
            template="""Your life goal is to search for some {conversation_topic}. If you encounter food in the conversation below, please eat it:\n###\n{conversation}\n###\nHere is the food: {my_partial_variable}""",
            input_variables=["conversation_topic", "conversation"],
            partial_variables={"my_partial_variable": my_partial_variable},
        )
    ),
    # Placeholder for additional agent notes
    MessagesPlaceholder("agent_scratchpad"),
]

prompt = ChatPromptTemplate(messages=prompt_messages)
prompt_as_string = prompt.format(
    conversation_topic="Delicious food",
    conversation="Nothing about food to see here",
    agent_scratchpad=[],
)
print(prompt_as_string)
```
|
I ended up using callbacks (like StdOut / self-implemented loguru-based / langfuse / arize-phoenix / mlflow / wandb) |
Wait, what is the final solution for this, though? I can't wrap my head around why something that should be basic has been made so complex. |
@krishna-praveen for me it is using a community-provided or self-implemented LangChain callback mechanism |
`chain.prompt.format_prompt(**input)` |
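For instance, a sketch (assuming `chain` is an existing `LLMChain` and `inputs` is its input dict; `format_prompt` returns a `PromptValue`):

```python
# Assuming `chain` is an LLMChain and `inputs` is its input dict;
# format_prompt returns a PromptValue.
formatted = chain.prompt.format_prompt(**inputs)
print(formatted.to_string())  # the final prompt text
```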
For debugging or other traceability purposes it is sometimes useful to see the final prompt text as sent to the completion model.
It would be good to have a mechanism that logged or otherwise surfaced (e.g. for storing to a database) the final prompt text.