Improving Resilience of MRKL Agent #3269
Conversation
Updated formatting by running the "poetry run black ." command.
Solved the `langchain\agents\agent.py:703: error: Incompatible types in assignment (expression has type "Union[str, Dict[Any, Any]]", variable has type "str") [assignment]` error that was raised when running the `poetry run mypy .` command.
Updated test_bad_action_input_line() and test_bad_action_line() to expect a self-correction prompt instead of raising an exception. In commit langchain-ai@b48bb11, I added the ability for the MRKL agent to communicate back to the LLM if the "Action: & Action Input:" format is not followed, and have it self-correct. This is very effective in terms of the number of calls, since an additional call to the LLM is made only if the format is not followed, which would otherwise raise an OutputParserException.
The code is ready to be merged now. I made 3 new commits, ensuring that all tests & linting pass (I followed all instructions in the contributing guidelines). The last 3 commits included the following changes: (i) corrected a lint formatting issue in the mrkl/output_parser.py file. Summary: in this pull request, I added the ability for the MRKL agent to communicate back to the LLM if the "Action: & Action Input:" format is not followed, and have it self-correct. This is very effective in terms of the number of calls, since an additional call to the LLM is made only if the format is not followed, which would otherwise raise an OutputParserException.
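A minimal sketch of the format check this PR describes, assuming a standalone helper. The function name `parse_mrkl_output` and its tuple return convention are hypothetical (the real implementation lives in `mrkl/output_parser.py`); only the two "Invalid Format" correction messages are taken from the PR itself:

```python
import re

FINAL_ANSWER_ACTION = "Final Answer:"
MISSING_ACTION_ERROR = "Invalid Format: Missing 'Action:' after 'Thought:'"
MISSING_ACTION_INPUT_ERROR = "Invalid Format: Missing 'Action Input:' after 'Action:'"


def parse_mrkl_output(llm_output: str) -> tuple:
    """Parse MRKL-style output; return a self-correction message on bad format.

    Returns ("final", answer), ("action", tool, tool_input), or
    ("retry", correction_message) when the format is not followed.
    """
    if FINAL_ANSWER_ACTION in llm_output:
        return ("final", llm_output.split(FINAL_ANSWER_ACTION)[-1].strip())
    # Instead of raising OutputParserException outright, report which
    # keyword is missing so the LLM can fix its own formatting.
    if not re.search(r"Action\s*\d*\s*:", llm_output):
        return ("retry", MISSING_ACTION_ERROR)
    if not re.search(r"Action\s*\d*\s*Input\s*\d*\s*:", llm_output):
        return ("retry", MISSING_ACTION_INPUT_ERROR)
    match = re.search(
        r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:(.*)",
        llm_output,
        re.DOTALL,
    )
    return ("action", match.group(1).strip(), match.group(2).strip())
```

The key point is that a malformed response costs exactly one extra LLM call; well-formed responses are parsed with no overhead.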
This is a badly-needed improvement!
Would a more general solution be to try/catch on output parsing errors? E.g.:
```python
try:
    output = self.agent.plan(intermediate_steps, **inputs)
except OutputParserException as e:
    if self.catch_errors:
        output = AgentAction("Error parsing", ....)
    else:
        raise e
```
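The suggestion above can be fleshed out into a runnable sketch. The `AgentAction` and `OutputParserException` stand-ins below are simplified stubs of the LangChain schema types, and `FakeAgent`, `take_next_step`, and the `handle_parsing_errors` flag are hypothetical names for illustration:

```python
from dataclasses import dataclass


# Simplified stand-ins for the LangChain schema types, so the
# sketch is self-contained and runnable without the library.
@dataclass
class AgentAction:
    tool: str
    tool_input: str
    log: str


class OutputParserException(ValueError):
    pass


class FakeAgent:
    """Hypothetical agent whose plan() always fails to parse."""

    def plan(self, intermediate_steps, **inputs):
        raise OutputParserException("Could not parse LLM output")


def take_next_step(agent, intermediate_steps, catch_errors=True, **inputs):
    # The pattern suggested above: catch parsing errors and turn them
    # into a recoverable AgentAction instead of crashing the run loop.
    try:
        return agent.plan(intermediate_steps, **inputs)
    except OutputParserException as e:
        if catch_errors:
            return AgentAction(tool="_Exception", tool_input=str(e), log=str(e))
        raise


step = take_next_step(FakeAgent(), [])
```

With `catch_errors=False` the exception propagates unchanged, preserving the original fail-fast behavior.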
@hwchase17 I could do that. The only issue with that approach is that `AgentAction` carries both an `action_input` and a `log` (two parameters in total). In `action_input` I would be passing the error-correction message, so that would require me to pass a dictionary or tuple. Do you want me to do that, or can you think of a more effective way to pass both parameters during a parsing error? Note: this wouldn't be a problem if `AgentAction` required only one of `action_input` or `log`, but since the code needs both parameters to function, the problem occurs with the template you provided.
@hwchase17
I actually worked around this issue by subclassing |
@hwchase17, is there anything blocking this merge?
This is a highly optimized update to pull request #3269.

Summary:
1) Added the ability for the MRKL agent to self-solve the `ValueError(f"Could not parse LLM output: `{llm_output}`")` error, whenever the LLM (especially gpt-3.5-turbo) does not follow the MRKL agent's format when returning "Action:" & "Action Input:".
2) The way I solve this error is by responding back to the LLM with the messages "Invalid Format: Missing 'Action:' after 'Thought:'" and "Invalid Format: Missing 'Action Input:' after 'Action:'" whenever `Action:` or `Action Input:` is missing from the LLM output, respectively. For a detailed explanation, look at the previous pull request.

New updates:
1) Since @hwchase17 requested in the previous PR to communicate the self-correction (error) message using the OutputParserException, I added a new ability to the OutputParserException class to store the observation & previous llm_output in order to communicate them to the next agent prompt. This is done without breaking/modifying any of the functionality OutputParserException previously performed (i.e. OutputParserException can still be used the same way as before, without passing any observation & previous llm_output).

---------
Co-authored-by: Deepak S V <svdeepak99@users.noreply.github.com>
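The backward-compatible extension described above can be sketched roughly as follows. This is a simplified stand-alone version, not the actual LangChain class; the `send_to_llm` flag is an assumption about how the "communicate it to the next prompt" switch might look:

```python
class OutputParserException(ValueError):
    """Sketch of the extended exception: it may optionally carry the
    observation (correction message) and the previous raw LLM output,
    while remaining usable with a single message argument as before."""

    def __init__(self, error, observation=None, llm_output=None, send_to_llm=False):
        super().__init__(error)
        if send_to_llm and (observation is None or llm_output is None):
            # Both fields are needed to rebuild the next prompt step.
            raise ValueError(
                "observation and llm_output are required when send_to_llm is True"
            )
        self.observation = observation
        self.llm_output = llm_output
        self.send_to_llm = send_to_llm
```

Existing call sites that raise `OutputParserException("message")` keep working, since the new parameters all default to values that reproduce the old behavior.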
Reopened this PR & merged in #5014
Finally solved the `ValueError(f"Could not parse LLM output: `{llm_output}`")` error that occurs whenever the LLM (especially gpt-3.5-turbo) does not follow the MRKL agent's format when returning "Action:" & "Action Input:".

Note: If this pull request gets approved, I can then apply this feature to the react, self_ask_with_search & conversational agents too.
The way I am solving this error is by responding back to the LLM with the messages "Invalid Format: Missing 'Action:' after 'Thought:'" and "Invalid Format: Missing 'Action Input:' after 'Action:'" whenever `Action:` or `Action Input:` is missing from the LLM output, respectively.

The following are 2 errors that kept coming from the Pandas DataFrame agent & VectorStore agent respectively (both of them use the MRKL agent):
Error-1 (Pandas DataFrame agent; error message at the end):

```python
df_agent.run("How many policies do Aaron Pope have, contain 'Homeowners'?")
```

Error-2 (VectorStore agent; error message at the end):

```python
VectorStore_agent.run("Do condo units have pools? (as mentioned in the insurance docs)")
```

Successful run after making this pull request's change (Pandas DataFrame agent, after modifying the MRKL agent):

```python
df_agent.run("How many policies that Aaron Pope have, contain 'Homeowners'?")
```
I also ran the callback function and logged the final prompt sent to the LLM (gpt-3.5-turbo), plus its output.

As you can see from the prompt log above, the model made the mistake of outputting the code without the `Action Input:` keyword. But after sending the error message `Invalid Format: Missing 'Action Input:' after 'Action:'` as an observation to the LLM, it self-corrected the output format in its next response, allowing the agent to progress towards finding the final answer without errors.

Let me know your thoughts, and I can apply this feature to the other 3 agents as well, if this pull request gets approved.