Feature Area
Core functionality
Is your feature request related to an existing bug? Please link it here.
Problem
When `planning=True`, `AgentPlanner` generates a step-by-step plan before the crew starts executing — which is great. However, the plan is static: once generated, it is never updated regardless of what actually happens during execution.
This becomes a problem when a task returns results that contradict the plan's assumptions. For example:
- A research task finds no data where the plan assumed data would be available
- An API call returns an unexpected format or error
- An early task reveals that a planned approach is infeasible
In all these cases, the remaining agents continue following the original (now incorrect) plan, leading to compounding errors and poor final outputs.
Describe the solution you'd like
Proposed Solution
Add an optional `replan_on_failure` flag to the `Crew` class that enables adaptive re-planning during execution.

New API (fully backwards compatible — defaults to `False`):

```python
crew = Crew(
    agents=[...],
    tasks=[...],
    planning=True,           # existing flag, required
    replan_on_failure=True,  # new flag
    max_replans=2,           # new flag, prevents infinite loops
)
```
How it would work:
- After each task completes, a lightweight `ReplanningEvaluator` makes a structured LLM call asking: "Does this result deviate significantly from what the plan assumed?"
- If yes (it returns `ReplanDecision(should_replan=True, reason=..., affected_steps=[...])`), `AgentPlanner.replan()` is called with the original goal, the completed results so far, and the deviation reason
- A revised plan is generated for the remaining tasks only and injected into their descriptions
- Execution continues with the updated plan
- A `replan_count` counter prevents runaway loops (capped at `max_replans`)
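To make the loop concrete, here is a minimal sketch of the control flow described above. All names are illustrative (`ReplanDecision`, `evaluate_deviation`, `run_with_replanning` are hypothetical), and a trivial keyword heuristic stands in for the structured LLM call:

```python
from dataclasses import dataclass, field

@dataclass
class ReplanDecision:
    """Hypothetical structured output of the ReplanningEvaluator."""
    should_replan: bool
    reason: str = ""
    affected_steps: list = field(default_factory=list)

def evaluate_deviation(assumption: str, result: str) -> ReplanDecision:
    # Stand-in for the structured LLM call; a real evaluator would ask the
    # model whether the result contradicts what the plan assumed.
    if "no data" in result or "error" in result:
        return ReplanDecision(True, f"result contradicts assumption: {assumption!r}", [0])
    return ReplanDecision(False)

def run_with_replanning(tasks, max_replans=2):
    """Execute (assumption, result) pairs, re-planning on deviation, capped at max_replans."""
    replan_count = 0
    plan_version = 0
    for assumption, result in tasks:
        decision = evaluate_deviation(assumption, result)
        if decision.should_replan and replan_count < max_replans:
            replan_count += 1
            plan_version += 1  # AgentPlanner.replan() would regenerate the remaining steps here
    return replan_count, plan_version
```

For example, `run_with_replanning([("data is available", "found no data"), ("API returns JSON", "ok")])` triggers exactly one replan, and a stream of failing tasks stops replanning once the cap is hit.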
Files I'd change:
- `src/crewai/utilities/replanning_evaluator.py` — new file
- `src/crewai/utilities/planning_handler.py` — add `replan()` method
- `src/crewai/crew.py` — add fields + hook into `_execute_tasks()`
- `tests/utilities/test_replanning_evaluator.py` — new tests
- `docs/core-concepts/Planning.mdx` — new section
Why Backwards Compatible
`replan_on_failure` defaults to `False`. All existing crews with `planning=True` are completely unaffected unless they explicitly opt in.
Questions for Maintainers
- Does this direction align with your vision for the planning feature?
- Any preference on where the evaluator logic lives — a separate class vs. inside `AgentPlanner`?
- Should the `ReplanningEvaluator` be pluggable (i.e., users can pass a custom evaluator)? I'd lean yes, for flexibility.
- Any preference on the parameter names (`replan_on_failure`, `max_replans`)?
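On the pluggability question, one possible shape (names hypothetical, not an existing crewai API) is a small `Protocol` that the `Crew` could accept, with the default LLM-backed evaluator as just one implementation:

```python
from typing import Protocol

class ReplanningEvaluatorProtocol(Protocol):
    """Hypothetical interface a custom evaluator would need to satisfy."""
    def should_replan(self, plan_step: str, task_result: str) -> bool: ...

class KeywordEvaluator:
    """Trivial custom evaluator: flags any result containing a marker word."""
    def __init__(self, markers=("error", "no data")):
        self.markers = markers

    def should_replan(self, plan_step: str, task_result: str) -> bool:
        return any(m in task_result.lower() for m in self.markers)

# A Crew could then accept e.g. replanning_evaluator=KeywordEvaluator()
# to override the default LLM-backed evaluator.
```

This would let users swap in cheap deterministic checks (or their own LLM prompts) without touching the core loop.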
Happy to open a draft PR once direction is confirmed. Let me know if you'd like me to adjust the approach.
Describe alternatives you've considered
No response
Additional context
No response
Willingness to Contribute
Yes, I'd be happy to submit a pull request