
Improving Resilience of MRKL Agent #5014

Merged
merged 3 commits into langchain-ai:master May 22, 2023
Conversation

svdeepak99
Contributor

This is a highly optimized update to pull request #3269.

Summary:

  1. Added the ability for the MRKL agent to self-solve the ValueError(f"Could not parse LLM output: {llm_output}") error, which occurs whenever the LLM (especially gpt-3.5-turbo) fails to follow the MRKL agent's format of returning "Action:" & "Action Input:".
  2. The error is solved by responding back to the LLM with the message "Invalid Format: Missing 'Action:' after 'Thought:'" or "Invalid Format: Missing 'Action Input:' after 'Action:'", depending on whether "Action:" or "Action Input:" is missing from the LLM output.

For a detailed explanation, look at the previous pull request.
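The two correction messages above can be sketched as a minimal parser. This is an illustrative reconstruction, not the PR's actual code — function and variable names here are made up for the example:

```python
import re

# The two fixed correction messages described in the PR.
MISSING_ACTION = "Invalid Format: Missing 'Action:' after 'Thought:'"
MISSING_ACTION_INPUT = "Invalid Format: Missing 'Action Input:' after 'Action:'"

def parse_mrkl_output(llm_output: str) -> tuple[str, str]:
    """Return (action, action_input), or raise a ValueError whose message
    is the correction text the agent can feed back to the LLM as the next
    observation instead of crashing. (Hypothetical helper, not LangChain's
    real parser.)"""
    if "Action:" not in llm_output:
        raise ValueError(MISSING_ACTION)
    if "Action Input:" not in llm_output:
        raise ValueError(MISSING_ACTION_INPUT)
    action = re.search(r"Action:\s*(.*)", llm_output).group(1).strip()
    action_input = re.search(r"Action Input:\s*(.*)", llm_output).group(1).strip()
    return action, action_input
```

In the retry loop, the caught message is appended to the agent scratchpad, so the LLM sees exactly which section it omitted and can regenerate its answer in the expected format.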

New Updates:

  1. Since @hwchase17 requested in the previous PR that the self-correction (error) message be communicated using OutputParserException, I have added the ability for the OutputParserException class to store the observation & previous llm_output, in order to pass them on to the next agent prompt. This is done without breaking or modifying any of the functionality OutputParserException previously provided (i.e. OutputParserException can still be used exactly as before, without passing an observation & previous llm_output).
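The backward-compatible extension described above can be sketched like this. This is a simplified stand-in, not the class as merged into LangChain:

```python
class OutputParserException(ValueError):
    """Sketch of the extended exception: two new optional fields carry the
    correction message (`observation`) and the failed generation
    (`llm_output`) so the agent can build its next prompt, while the
    original one-argument form keeps working unchanged."""

    def __init__(self, error, observation=None, llm_output=None):
        super().__init__(error)
        self.observation = observation
        self.llm_output = llm_output
```

The agent's step logic can then catch this exception and, when both fields are set, append the previous llm_output and the observation to the scratchpad rather than aborting.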

@svdeepak99
Contributor Author

@vowelparrot kindly check out this pull request.

Contributor

@hwchase17 left a comment


this looks pretty solid to me. let's maybe not change the default value of handle_parsing_errors in this PR... I'm down to do it in a future one, but want to update the docs first

@svdeepak99
Contributor Author

Sure, I have changed the default value back to False. You can merge this PR now (@vowelparrot, @hwchase17).

Also, as I asked in the previous PR #3269, do you want me to apply this feature to the react, self_ask_with_search & conversational agents too in a future PR?

Contributor

@hwchase17 left a comment


lgtm!

@hwchase17 hwchase17 added the lgtm PR looks good. Use to confirm that a PR is ready for merging. label May 21, 2023
@dev2049 dev2049 merged commit 5cd1210 into langchain-ai:master May 22, 2023
12 checks passed
@danielchalef danielchalef mentioned this pull request Jun 5, 2023
hwchase17 pushed a commit that referenced this pull request Jun 10, 2023
Hi,

This is a fix for #5014. That PR did not add the ability to self-solve the ValueError(f"Could not parse LLM output: {llm_output}") error for `_atake_next_step`.
Undertone0809 pushed a commit to Undertone0809/langchain that referenced this pull request Jun 19, 2023
This was referenced Jun 25, 2023
kacperlukawski pushed a commit to kacperlukawski/langchain that referenced this pull request Jun 29, 2023

3 participants