Problem Statement
Currently, Strands structured output appears to behave as a guidance/tool-driven mechanism rather than a strictly enforced schema-constrained output mode.
In mixed conversational workflows, the agent may:
- return valid schema-based JSON on some turns,
- return free-form natural language on others,
even when structured_output_model is configured at the agent level.
This creates uncertainty for production applications that depend on deterministic, machine-readable responses.
Current Behavior
Example:
from pydantic import BaseModel
from strands import Agent

class MlExpertOutputSchema(BaseModel):
    model: str
    inventor: str
    avg_accuracy: float

agent = Agent(
    structured_output_model=MlExpertOutputSchema
)
Observed behavior:
- some prompts return valid JSON matching the schema
- some prompts return conversational text instead
Example:
Prompt:
for churn prediction which is best
Output:
A natural language explanation is returned instead of schema output.
But a later prompt:
lightgbm?
correctly returns:
{
    "model": "LightGBM",
    "inventor": "Microsoft Research",
    "avg_accuracy": 0.86
}
Problem
This makes it difficult to use Strands structured outputs in:
- APIs
- workflow orchestration
- downstream parsing pipelines
- enterprise automation
- frontend integrations expecting deterministic JSON
Currently, developers must rely heavily on:
- prompt engineering
- retry loops
- manual validation
- custom fallback logic
instead of having SDK-level enforcement.
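The workaround described above typically looks something like the following sketch. The `invoke` callable and the retry prompt are assumptions standing in for a real Strands agent call; only the Pydantic validation is concrete:

```python
from typing import Callable

from pydantic import BaseModel, ValidationError


class MlExpertOutputSchema(BaseModel):
    model: str
    inventor: str
    avg_accuracy: float


def invoke_structured(invoke: Callable[[str], str], prompt: str,
                      max_retries: int = 3) -> MlExpertOutputSchema:
    """Call the agent, validate the reply against the schema, retry on failure.

    This is the manual validation + retry loop developers currently write
    themselves because there is no SDK-level enforcement.
    """
    last_error: ValidationError | None = None
    for _ in range(max_retries):
        raw = invoke(prompt)
        try:
            # Raises ValidationError for non-JSON or schema-violating replies.
            return MlExpertOutputSchema.model_validate_json(raw)
        except ValidationError as err:
            last_error = err
            # Custom fallback logic: nudge the model and try again.
            prompt = f"{prompt}\n\nRespond ONLY with JSON matching the schema."
    raise RuntimeError(
        f"No schema-valid response after {max_retries} attempts: {last_error}"
    )
```

Every application ends up re-implementing some variant of this boilerplate, which is exactly what SDK-level strict mode would eliminate.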
Proposed Solution
Add a strict structured output mode similar to modern provider-native constrained decoding approaches.
Example API idea:
agent = Agent(
    structured_output_model=MlExpertOutputSchema,
    strict_structured_output=True
)
or invocation-level:
agent.invoke(
    prompt,
    structured_output_model=MlExpertOutputSchema,
    strict=True
)
Use Case
Expected behavior:
- ALL responses must conform to the schema
- No free-form conversational responses
- Validation failures should raise explicit SDK exceptions
- Optional retry/self-correction behavior could be configurable
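Semantically, the expected behavior above could look like this sketch. `StructuredOutputError` and `enforce_schema` are hypothetical names for illustration, not part of the Strands API:

```python
from pydantic import BaseModel, ValidationError


class MlExpertOutputSchema(BaseModel):
    model: str
    inventor: str
    avg_accuracy: float


class StructuredOutputError(Exception):
    """Hypothetical explicit SDK exception: the reply could not be
    coerced to the configured schema."""

    def __init__(self, raw: str, cause: ValidationError):
        super().__init__(
            f"response did not match schema: {cause.error_count()} error(s)"
        )
        self.raw = raw      # the offending model output, for logging/debugging
        self.cause = cause  # the underlying validation failure


def enforce_schema(raw: str, schema: type[BaseModel]) -> BaseModel:
    """What strict=True should guarantee: return a validated model instance
    or raise an explicit exception, never silently pass through free text."""
    try:
        return schema.model_validate_json(raw)
    except ValidationError as err:
        raise StructuredOutputError(raw, err) from err
```

With this contract, a caller either gets a typed object or a catchable exception, which is what APIs and orchestration pipelines need.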
Why This Matters
Other ecosystems now support deterministic schema enforcement:
- OpenAI Structured Outputs (strict: true)
- LangChain/LangGraph integrations with provider-native strict mode
- constrained decoding based JSON generation
This would significantly improve:
- reliability
- production readiness
- developer confidence
- interoperability with automation systems
Alternative Solutions
No response
Additional Context
No response