Unsupported early_stopping_method="generate" in AgentExecutor after reaching iteration limit #16263
Comments
🤖 Hey @thedavgar! Good to see you here again, navigating the tricky seas of code. Hope your coding journey is going well! 🚀

Based on the information you've provided, it seems like you're encountering an error because `early_stopping_method="generate"` is not supported by your agent's `return_stopped_response` method. Here's the relevant code:

def return_stopped_response(
self,
early_stopping_method: str,
intermediate_steps: List[Tuple[AgentAction, str]],
**kwargs: Any,
) -> AgentFinish:
"""Return response when agent has been stopped due to max iterations."""
if early_stopping_method == "force":
# `force` just returns a constant string
return AgentFinish(
{"output": "Agent stopped due to iteration limit or time limit."}, ""
)
else:
raise ValueError(
f"Got unsupported early_stopping_method `{early_stopping_method}`"
    )

As you can see, if the `early_stopping_method` is anything other than "force", a ValueError is raised. To resolve this issue, you can change the `early_stopping_method` to "force" in your AgentExecutor configuration:

agent = AgentExecutor(
agent=agent_system, # Sets the system to use openai functions
tools=tools, # Sets the tools visible to the LLM
return_intermediate_steps=True, # Get a list of traces of the trajectory of the chain
max_iterations=4, # Sets the number of intermediate steps
early_stopping_method="force", # Applies final pass to generate an output if max iterations is reached
    verbose=verbose) # Makes internal states visible to debug

This will cause the agent to return a constant string saying that it stopped due to iteration limit or time limit when the max iterations are reached. Alternatively, if you want to use the "generate" method, you would need to implement it in the agent's `return_stopped_response` method yourself.

I hope this helps! If you have any other questions, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. |
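If upgrading is not an option, one possible workaround is to subclass the agent and supply the missing "generate" branch yourself. The sketch below deliberately imports nothing from LangChain; `ForceOnlyAgent`, `GenerateAgent`, and the prompt prefixes are hypothetical stand-ins that only mirror the control flow of the method quoted above.

```python
class ForceOnlyAgent:
    """Mimics the shipped behavior: only "force" is handled."""

    def return_stopped_response(self, early_stopping_method, intermediate_steps):
        if early_stopping_method == "force":
            # `force` just returns a constant string
            return {"output": "Agent stopped due to iteration limit or time limit."}
        raise ValueError(
            f"Got unsupported early_stopping_method `{early_stopping_method}`"
        )


class GenerateAgent(ForceOnlyAgent):
    """Adds a "generate" branch that makes one final LLM call."""

    def __init__(self, llm):
        # `llm` is any callable mapping a prompt string to a completion string.
        self.llm = llm

    def return_stopped_response(self, early_stopping_method, intermediate_steps):
        if early_stopping_method != "generate":
            return super().return_stopped_response(
                early_stopping_method, intermediate_steps
            )
        # Rebuild the scratchpad from the intermediate steps, then ask the
        # model for one final answer based on everything seen so far.
        thoughts = ""
        for action_log, observation in intermediate_steps:
            thoughts += f"{action_log}\nObservation: {observation}\nThought: "
        thoughts += "\n\nI now need to return a final answer based on the previous steps:"
        return {"output": self.llm(thoughts)}
```

For example, `GenerateAgent(lambda prompt: "42").return_stopped_response("generate", [("looked up x", "x is 42")])` produces a final answer from the mock LLM instead of raising a ValueError.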
Thank you @dosubot. Yes, I have seen in the code that the "generate" method is not implemented. However, in previous versions it worked properly. |
🤖 Based on the information you've provided, it seems like you're using an older version of LangChain in which the "generate" branch is missing. In the latest version of LangChain, the `return_stopped_response` method supports both "force" and "generate":

def return_stopped_response(
self,
early_stopping_method: str,
intermediate_steps: List[Tuple[AgentAction, str]],
**kwargs: Any,
) -> AgentFinish:
"""Return response when agent has been stopped due to max iterations."""
if early_stopping_method == "force":
# `force` just returns a constant string
return AgentFinish(
{"output": "Agent stopped due to iteration limit or time limit."}, ""
)
elif early_stopping_method == "generate":
# Generate does one final forward pass
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += (
f"\n{self.observation_prefix}{observation}\n{self.llm_prefix}"
)
# Adding to the previous steps, we now tell the LLM to make a final pred
thoughts += (
"\n\nI now need to return a final answer based on the previous steps:"
)
new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
full_inputs = {**kwargs, **new_inputs}
full_output = self.llm_chain.predict(**full_inputs)
# We try to extract a final answer
parsed_output = self.output_parser.parse(full_output)
if isinstance(parsed_output, AgentFinish):
# If we can extract, we send the correct stuff
return parsed_output
else:
                # If we cannot extract a final answer,
                # we just return the full output
return AgentFinish({"output": full_output}, full_output)
else:
raise ValueError(
"early_stopping_method should be one of `force` or `generate`, "
f"got {early_stopping_method}"
    )

You can find this in the AgentExecutor class in the LangChain repository. To resolve the issue, I recommend updating to the latest version of LangChain:

pip install --upgrade langchain

After updating, the "generate" early stopping method should work as expected. |
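Since the advice above depends on which release is actually installed, it can help to check programmatically before relying on the "generate" branch. This is a standard-library sketch (`importlib.metadata`, Python 3.8+), not LangChain API; the comparison helper ignores pre/post-release suffixes and any minimum version you pass it is your own assumption, not an official threshold.

```python
from importlib.metadata import PackageNotFoundError, version


def installed_version(package):
    """Return the installed version string for `package`, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None


def at_least(ver, minimum):
    """Compare dotted numeric versions, e.g. at_least("0.1.16", "0.1.0")."""
    def key(s):
        # Keep only the leading numeric components ("0.0.2.post1" -> (0, 0, 2)).
        return tuple(int(part) for part in s.split(".")[:3] if part.isdigit())
    return key(ver) >= key(minimum)
```

For instance, `at_least("0.1.16", "0.1.0")` is True, while a pinned `langchain==0.0.350` would fail the same check.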
The current version of langchain is 0.1.16, yet the issue persists: the "generate" method has still not been implemented in return_stopped_response. It is unclear when this will be fixed. |
Same here. |
Any updates on this one? |
Facing the same issue. |
0.1.16 The return type of |
i have the same issue :/ |
Checked other resources
Example Code
The following code:
Description
If this code reaches the max iterations, it returns an error:

ERROR: Got unsupported early_stopping_method `generate`
System Info
pip langchain==0.1.0 langchain-openai==0.0.2.post1
Using GPT-3.5 Turbo 0613 as the LLM
Related Components