When LLMs output Action: None (or variations like Action: N/A), the parser fails with OutputParserError and leaks internal "Thought:" text to users instead of providing a clean response.
Problem
The ReAct format has a gap: it does not cover the case where an LLM recognizes that a tool would normally be called but cannot or should not use one (for example, when no available tool fits the request).
In these cases, LLMs commonly output:
```
Thought: [reasoning about why the tool can't be used]
Action: None (direct response required)
```
This doesn't match the expected formats:
```
Action: [tool] + Action Input: {...}  → for tool calls
Final Answer: [answer]                → for direct responses
```
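One possible mitigation is a pre-parse guard that rewrites the "Action: None" pattern into the valid Final Answer form before the ReAct parser sees it. This is a minimal sketch, not CrewAI's actual implementation; the function name `normalize_react_output` and the regex patterns are assumptions for illustration:

```python
import re

# Hypothetical pre-parse guard (not part of CrewAI): if the model emits
# "Action: None" / "Action: N/A", rewrite the text into the Final Answer
# form so the downstream ReAct parser sees a valid direct response.
NONE_ACTION = re.compile(r"Action:\s*(None|N/?A)\b.*", re.IGNORECASE)

def normalize_react_output(text: str) -> str:
    if not NONE_ACTION.search(text):
        return text  # valid tool call or Final Answer: leave untouched
    # Salvage the model's reasoning as the answer body.
    thought = re.search(r"Thought:\s*(.*)", text)
    answer = thought.group(1).strip() if thought else text.strip()
    return f"Final Answer: {answer}"

raw = "Thought: the tool cannot handle this request\nAction: None (direct response required)"
print(normalize_react_output(raw))
# → Final Answer: the tool cannot handle this request
```

Because the rewrite happens before parsing, well-formed tool calls and existing Final Answer outputs pass through unchanged.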
The parser raises OutputParserError, and the error handler returns the raw text including the "Thought:" prefix, so users see internal scratchpad text instead of a clean answer.
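Even if the parser itself is left unchanged, the error path could sanitize the leaked text before it reaches the user. The following is a hypothetical fallback (the name `user_safe_fallback` and the cleanup rules are assumptions, not CrewAI code):

```python
import re

# Hypothetical fallback for the error path: when parsing fails, strip the
# internal "Thought:" prefix and any trailing "Action:" line before showing
# anything to the user, instead of leaking the raw ReAct scratchpad.
def user_safe_fallback(raw_output: str) -> str:
    cleaned = re.sub(r"^\s*Thought:\s*", "", raw_output)
    cleaned = re.sub(r"\n?Action:.*$", "", cleaned, flags=re.DOTALL)
    return cleaned.strip() or "I could not complete that request."

print(user_safe_fallback("Thought: reasoning here\nAction: None"))
# → reasoning here
```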
Environment
- CrewAI version: 1.5.0+
- LLM: Any ReAct-style LLM
- Python: 3.10+