feat(composition): dynamic sub-workflow inputs via input_mapping (#101) #109

PolyphonyRequiem wants to merge 4 commits into microsoft:main
Conversation
…rosoft#101) Add input_mapping field to AgentDef for type='workflow' agents. When present, each value is a Jinja2 expression rendered against the parent context to build sub-workflow inputs. When absent, existing behavior (forwarding parent's workflow.input.*) is preserved.

- Schema: Add input_mapping to AgentDef with validation for workflow-only
- Engine: Render input_mapping templates in _execute_subworkflow()
- Tests: Schema validation for all agent types
- Experimental workflows: test-input-mapping parent/child pair

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##             main     #109   +/-   ##
=======================================
  Coverage        ?   84.93%
=======================================
  Files           ?       53
  Lines           ?     7182
  Branches        ?        0
=======================================
  Hits            ?     6100
  Misses          ?     1082
  Partials        ?        0
When a parent workflow calls a sub-workflow via `type: workflow`, the child engine now inherits the parent's agent outputs in its context. This allows sub-workflow agents to access parent agent state (e.g., `task_manager.output`, `pr_group_manager.output`) by declaring them in their `input:` list, even when `input_mapping` doesn't cover all fields.

Previously, sub-workflow agents could only access `workflow.input.*` (populated via `input_mapping`) and their own sibling agents' outputs. Parent agent outputs were lost at the sub-workflow boundary, causing nested sub-workflows to see default values (0, '') instead of the actual parent state.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
jrob5756
left a comment
Looks good. Some small comments.
# even when input_mapping doesn't cover all fields.
for key, value in context.items():
    if key not in ("workflow", "context") and isinstance(value, dict):
        child_engine.context.agent_outputs[key] = value.get("output", value)
Critical: Unconditional parent context injection breaks sub-workflow isolation

This block runs for every sub-workflow, not just those with `input_mapping`. This is a backward-incompatible behavioral change: previously, child workflows started with a clean `agent_outputs`.

Bugs this causes:
- Namespace collision: If parent and child both have an agent `analyzer`, the parent's output is pre-injected. Before the child's own `analyzer` runs, `{{ analyzer.output }}` resolves to stale parent data instead of raising an error.
- Context bypass: Direct assignment to `agent_outputs` skips `context.store()`, so `execution_history` is out of sync.
- State leakage: If no `"output"` key exists, `value.get("output", value)` returns the entire internal dict (including `model`, `tokens`, etc.), which is then injected.

Fix: Either:
- Guard with `if agent.input_mapping:` (make it opt-in), or
- Remove entirely: `input_mapping` already provides controlled access to parent data. Blanket injection is redundant and unsafe.
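The state-leakage bullet can be reproduced in isolation. The sketch below mimics the reviewed loop against a made-up context shape (the agent names and internal keys are illustrative, not the project's actual structures):

```python
def inject_parent_outputs(context: dict) -> dict:
    """Mimics the reviewed block: copy parent entries into child agent_outputs."""
    agent_outputs = {}
    for key, value in context.items():
        if key not in ("workflow", "context") and isinstance(value, dict):
            # If "output" is missing, the WHOLE internal dict is injected.
            agent_outputs[key] = value.get("output", value)
    return agent_outputs

parent_context = {
    "workflow": {"input": {}},
    "analyzer": {"output": "summary text"},
    "planner": {"model": "gpt-4", "tokens": 123},  # no "output" key
}

child_outputs = inject_parent_outputs(parent_context)
print(child_outputs["analyzer"])  # "summary text"
print(child_outputs["planner"])   # the entire internal dict leaks through
```

With `planner`, the child sees `{"model": "gpt-4", "tokens": 123}` where it expects an output string, which is exactly the leakage the review describes.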
I think this current commit should cover it. I've removed the parent context injection block entirely. `input_mapping` already provides explicit, controlled access to parent data via Jinja2 expressions, making the blanket injection both redundant and harmful for the reasons you outlined.
rendered = renderer.render(template_expr, context)
# Attempt to parse rendered values as JSON for non-string types
try:
    sub_inputs[key] = json.loads(rendered)
Critical: Implicit json.loads coercion is a foot-gun

This silently coerces types by content sniffing: `"42"` → int, `"true"` → bool, `"null"` → None.

Users have no way to pass the literal string "true" as a string input. The `"null"` → None case is especially dangerous: a missing/empty variable becomes None and the sub-workflow may not catch it.

Suggestion: Remove `json.loads` and always pass rendered values as strings (matching how `--input` CLI works). If structured data is needed, let users use the `{{ expr | json }}` filter explicitly.
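The sniffing behavior is easy to demonstrate with the standard library alone. This is a standalone sketch of the pattern being criticized, not the project's actual code path:

```python
import json

def coerce(rendered: str):
    """Content-sniffing coercion: valid JSON text changes type silently."""
    try:
        return json.loads(rendered)
    except (json.JSONDecodeError, ValueError):
        return rendered  # only non-JSON text survives as a string

print(coerce("42"))     # 42 (int, not str)
print(coerce("true"))   # True (bool)
print(coerce("null"))   # None -- an empty variable silently becomes None
print(coerce("hello"))  # 'hello'
```

Note there is no input a user can write that yields the Python string `"true"`: even quoting it in YAML produces the rendered text `true`, which `json.loads` converts to a bool.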
was unaware of this, thank you!
renderer = TemplateRenderer()
sub_inputs = {}
for key, template_expr in agent.input_mapping.items():
    rendered = renderer.render(template_expr, context)
Important: Template errors lack key context
If renderer.render() raises TemplateError (e.g., undefined variable), the error message won't indicate which input_mapping key failed. With many keys, debugging is a guessing game.
Suggested change:

-rendered = renderer.render(template_expr, context)
+for key, template_expr in agent.input_mapping.items():
+    try:
+        rendered = renderer.render(template_expr, context)
+    except Exception as e:
+        raise ExecutionError(
+            f"Failed to render input_mapping key '{key}' for agent "
+            f"'{agent.name}': {e}",
+            suggestion=f"Check that the expression '{template_expr}' "
+            "references valid context variables.",
+        ) from e
    dict(workflow_ctx.get("input", {})) if isinstance(workflow_ctx, dict) else {}
)
sub_inputs: dict[str, Any]
if agent.input_mapping:
Suggestion: `if agent.input_mapping:` treats the empty dict `{}` as falsy

An explicit `input_mapping: {}` in YAML means "pass no inputs", but `{}` is falsy in Python, so this falls through to the default behavior (forwarding all `workflow.input.*`). Consider using `if agent.input_mapping is not None:` to distinguish "not set" from "explicitly empty".
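A toy model of the two guards makes the difference concrete (the `"rendered:"` prefix stands in for template rendering; function names are invented for illustration):

```python
def forward_inputs(input_mapping, parent_inputs):
    # Truthiness guard: {} falls through to default forwarding (the bug).
    if input_mapping:
        return {k: "rendered:" + v for k, v in input_mapping.items()}
    return dict(parent_inputs)

def forward_inputs_fixed(input_mapping, parent_inputs):
    # 'is not None' distinguishes "not set" (None) from "explicitly empty" ({}).
    if input_mapping is not None:
        return {k: "rendered:" + v for k, v in input_mapping.items()}
    return dict(parent_inputs)

parent = {"token": "secret"}
print(forward_inputs({}, parent))        # {'token': 'secret'}  <- leaks everything
print(forward_inputs_fixed({}, parent))  # {}  <- passes no inputs, as intended
```

Both guards behave identically for `None` and for a non-empty mapping; only the explicit-empty case differs.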
with pytest.raises(ValidationError, match="input_mapping"):
    AgentDef(
        name="script",
        type="script",
Important: No runtime tests for the core feature

These 5 schema tests are good, but there are zero tests for the 34 new runtime lines in workflow.py. Missing coverage:
- Jinja2 rendering of `input_mapping` expressions against parent context
- JSON type coercion behavior (`"42"` → int, `"true"` → bool)
- Error when Jinja2 references an undefined variable
- Parent context injection and namespace collision with child agents
- End-to-end: parent output → `input_mapping` → child sub-workflow receives correct inputs
- Backward compat: no `input_mapping` → existing behavior preserved

Recommend adding these in tests/test_engine/test_subworkflow.py using the existing mock provider pattern.
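The first three bullets can be exercised without the engine at all. The sketch below drives plain Jinja2 directly rather than the project's `TemplateRenderer` (whose API I'm not assuming), so it tests the rendering semantics, not the engine wiring:

```python
import jinja2

def render_mapping(input_mapping: dict, context: dict) -> dict:
    # Mirrors the engine loop: each value is a Jinja2 expression rendered
    # against the parent context; results stay strings (no JSON sniffing).
    env = jinja2.Environment(undefined=jinja2.StrictUndefined)
    return {
        key: env.from_string(expr).render(**context)
        for key, expr in input_mapping.items()
    }

def test_renders_parent_outputs():
    context = {
        "workflow": {"input": {"repo": "demo"}},
        "task_manager": {"output": "3 tasks"},
    }
    mapping = {
        "repo": "{{ workflow.input.repo }}",
        "summary": "{{ task_manager.output }}",
    }
    assert render_mapping(mapping, context) == {"repo": "demo", "summary": "3 tasks"}

def test_undefined_variable_raises():
    try:
        render_mapping({"x": "{{ missing.var }}"}, {})
        assert False, "expected UndefinedError"
    except jinja2.exceptions.UndefinedError:
        pass  # StrictUndefined rejects references to absent variables
```

`StrictUndefined` is what makes the undefined-variable test meaningful; Jinja2's default `Undefined` would silently render an empty string instead of raising.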
- Use 'is not None' guard instead of truthiness check so empty input_mapping ({}) means 'pass no inputs' rather than falling through to default forwarding (schema validators + runtime)
- Remove implicit json.loads coercion on rendered values; always pass strings to match explicit-data-flow intent
- Add per-key error wrapping with key name + expression in error messages for easier debugging of template failures
- Remove unconditional parent context injection into child workflows to preserve sub-workflow isolation and avoid namespace collisions, context.store() bypass, and state leakage
- Add 7 runtime tests covering input_mapping rendering, string values, backward compat, empty mapping, error messages, and parent context isolation
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Summary

Adds an optional `input_mapping` field to `type: workflow` agents, enabling parameterized sub-workflow invocation. Each key maps a sub-workflow input name to a Jinja2 expression evaluated in the parent's context.

Closes #101
Example

Without `input_mapping`, the parent's `workflow.input.*` is forwarded (existing behavior preserved).
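The example itself did not survive the page extraction. A hypothetical parent-workflow snippet, with agent names, input names, and the sub-workflow reference field all invented for illustration, might look roughly like:

```yaml
# Hypothetical parent workflow; names and the sub-workflow path are invented.
agents:
  - name: child_runner
    type: workflow
    workflow: ./child.yaml        # field name/path shown for illustration only
    input_mapping:
      repo_name: "{{ workflow.input.repo_name }}"   # forward a parent input
      task_summary: "{{ task_manager.output }}"     # forward a parent agent's output
```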
Changes

- `input_mapping: dict[str, str] | None` on `AgentDef`, validated only for `type: workflow`
- Render `input_mapping` templates against parent context in `_execute_subworkflow`

Dependency
Foundation for #102 (for_each with workflows) and #103 (self-referential workflows).