Merged
36 changes: 36 additions & 0 deletions docs/workflow-syntax.md
@@ -196,6 +196,42 @@ prompt: |

**Restrictions** — workflow steps cannot have `prompt`, `model`, `provider`, `tools`, `system_prompt`, `command`, or `options`. Workflow steps also cannot be used inside `parallel` groups or `for_each` groups.

### Dialog Mode

Dialog mode allows agents to conditionally pause after execution and enter a free-form conversation with the user. An LLM evaluator examines the agent's output against user-defined criteria and decides whether to initiate a dialog.

```yaml
agents:
- name: researcher
prompt: "Research the given topic thoroughly"
dialog:
trigger_prompt: |
Enter dialog if the agent expresses uncertainty about
the user's intent, encounters ambiguous requirements,
or needs clarification before proceeding.
routes:
- to: writer
```

When triggered, the user is presented with a choice:
1. **Discuss** — engage in a multi-turn conversation with the agent
2. **Do your best and continue** — skip the dialog and let the agent proceed

After the conversation, the agent re-executes with the dialog transcript as additional context, producing a refined output.

**Configuration:**

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `dialog.trigger_prompt` | string | Yes | Criteria for the LLM evaluator to decide when dialog is needed |

**Behavior notes:**
- Dialog is supported only on the regular `agent` type (not `human_gate`, `script`, or `workflow`)
- In web dashboard mode, the dialog temporarily replaces the graph area with a chat interface
- When `--skip-gates` is set (e.g., CI/automation), dialogs are automatically skipped
- The evaluator prompt should describe *when* to trigger dialog, not *what* to ask — the evaluator generates the opening question from the agent's output context
- After dialog, the agent sees the full conversation transcript and produces updated output
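The flow described above can be sketched end-to-end. This is a hypothetical illustration, not Conductor's actual internals — `evaluate_trigger`, `converse`, and `rerun_agent` are assumed callables standing in for the evaluator LLM call, the chat interface, and agent re-execution:

```python
def maybe_enter_dialog(agent_output, trigger_prompt,
                       evaluate_trigger, converse, rerun_agent,
                       skip_gates=False):
    """Conditionally pause after an agent run and refine its output."""
    if skip_gates:
        # --skip-gates (CI/automation): dialogs are skipped automatically
        return agent_output
    decision = evaluate_trigger(trigger_prompt, agent_output)
    if not decision.get("trigger"):
        return agent_output
    # The evaluator derives the opening question from the agent's output;
    # converse() returns None if the user picks "Do your best and continue"
    transcript = converse(decision["opening_question"])
    if transcript is None:
        return agent_output
    # Re-execute with the dialog transcript as additional context
    return rerun_agent(agent_output, transcript)
```

Note that the trigger decision consumes only the agent's output and the `trigger_prompt` criteria — the opening question is generated by the evaluator, not authored in the workflow file.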

## Parallel Groups

Parallel groups execute multiple agents concurrently for improved performance.
77 changes: 77 additions & 0 deletions examples/dialog-mode.yaml
@@ -0,0 +1,77 @@
# Dialog Mode Example
#
# This example demonstrates the dialog mode feature, where an agent
# can conditionally pause and have a conversation with the user.
#
# The researcher agent has a dialog trigger that fires when the agent
# encounters ambiguity or needs clarification. When triggered, the
# user can choose to discuss or let the agent proceed on its own.
#
# Usage:
# conductor run examples/dialog-mode.yaml --input topic="quantum computing"
# conductor run examples/dialog-mode.yaml --web --input topic="quantum computing"

workflow:
name: dialog-mode
description: Research workflow with agent-initiated dialog
version: "1.0.0"
entry_point: researcher

runtime:
provider: copilot

input:
topic:
type: string
required: true
description: The topic to research

agents:
- name: researcher
description: Researches the topic and may ask for user clarification
prompt: |
Research the following topic and provide a comprehensive summary:

Topic: {{ workflow.input.topic }}

If the topic is broad, pick a specific angle and explain your choice.
If anything is ambiguous, note what assumptions you're making.
output:
summary:
type: string
description: Research summary
key_findings:
type: string
description: Bullet-pointed key findings
assumptions:
type: string
description: Any assumptions made during research
dialog:
trigger_prompt: |
Trigger dialog if the agent:
- Expresses significant uncertainty about the scope or angle
- Makes assumptions that could lead the research in the wrong direction
- Encounters multiple valid interpretations of the topic
Do NOT trigger for minor uncertainties that the agent can resolve on its own.
routes:
- to: writer

- name: writer
description: Writes a polished article from the research
prompt: |
Write a clear, engaging article based on this research:

Summary: {{ researcher.output.summary }}
Key Findings: {{ researcher.output.key_findings }}

Keep the tone informative but accessible.
output:
article:
type: string
description: The final article
routes:
- to: $end

output:
article: "{{ writer.output.article }}"
research_assumptions: "{{ researcher.output.assumptions }}"
60 changes: 60 additions & 0 deletions examples/dialog-test.yaml
@@ -0,0 +1,60 @@
# Dialog Mode Test — guaranteed to trigger dialog
#
# This workflow asks an agent to analyze a codebase question,
# but the agent will express uncertainty about scope/approach,
# triggering a dialog with the user.
#
# Usage:
# conductor run examples/dialog-test.yaml --input question="How should I refactor the error handling?"

workflow:
name: dialog-test
description: Quick test to exercise dialog mode
entry_point: analyst

runtime:
provider: copilot

input:
question:
type: string
required: true
description: A code analysis question — the agent will seek clarification

agents:
- name: analyst
description: An agent that analyzes code but needs clarification on scope
model: claude-haiku-4.5
prompt: |
The user asked: {{ workflow.input.question }}

You MUST respond as though this question is too broad to answer well.
Identify at least 3 different ways you could approach or scope this
analysis, explain why each leads to very different recommendations,
and state clearly that you need the user to narrow the scope before
you can give a useful answer. Express strong uncertainty about which
approach the user wants.
output:
approaches:
type: string
description: The possible approaches or scopes you identified
chosen_approach:
type: string
description: Which approach you went with (after dialog)
analysis:
type: string
description: Your analysis based on the chosen approach
dialog:
trigger_prompt: |
ALWAYS trigger dialog. The agent's output will contain
uncertainty and multiple possible approaches — this is by design.
Trigger if the output mentions any ambiguity, multiple
approaches, uncertainty, or inability to proceed.
In practice: always return trigger=true for this agent.
routes:
- to: $end

output:
approaches: "{{ analyst.output.approaches }}"
chosen: "{{ analyst.output.chosen_approach }}"
analysis: "{{ analyst.output.analysis }}"
2 changes: 2 additions & 0 deletions src/conductor/config/__init__.py
@@ -13,6 +13,7 @@
from conductor.config.schema import (
AgentDef,
ContextConfig,
DialogConfig,
GateOption,
HooksConfig,
InputDef,
@@ -34,6 +35,7 @@
# Schema models
"AgentDef",
"ContextConfig",
"DialogConfig",
"GateOption",
"HooksConfig",
"InputDef",
51 changes: 51 additions & 0 deletions src/conductor/config/schema.py
@@ -387,6 +387,33 @@ class RetryPolicy(BaseModel):
"""


class DialogConfig(BaseModel):
"""Configuration for agent dialog mode.

When present on an agent, enables the agent to conditionally pause
after execution and enter a free-form conversation with the user.

An evaluator LLM call examines the agent's output against the
user-defined trigger_prompt criteria and decides whether to pause
and start a conversation.

Example YAML::

dialog:
trigger_prompt: |
Enter dialog if the agent expresses uncertainty about
the user's intent or needs clarification on requirements.
"""

trigger_prompt: str
"""User-defined criteria for when to enter dialog mode.

This prompt is wrapped in a system message and evaluated against
the agent's output. The evaluator decides whether to pause and
start a conversation with the user.
"""


class AgentDef(BaseModel):
"""Definition for a single agent in the workflow."""

@@ -543,6 +570,24 @@ class AgentDef(BaseModel):
- timeout
"""

dialog: DialogConfig | None = None
"""Optional dialog mode configuration.

When set, enables this agent to conditionally pause after execution
and enter a free-form conversation with the user. A lightweight
evaluator LLM call uses the trigger_prompt to decide whether dialog
should be triggered based on the agent's output.

Only applies to provider-backed agents (type='agent' or None).

Example YAML::

dialog:
trigger_prompt: |
Enter dialog if the agent is uncertain about the user's
intent or needs clarification on ambiguous requirements.
"""

@field_validator("timeout")
@classmethod
def validate_timeout(cls, v: int | None) -> int | None:
@@ -561,6 +606,8 @@ def validate_agent_type(self) -> AgentDef:
raise ValueError("human_gate agents require 'prompt'")
if self.input_mapping is not None:
raise ValueError("human_gate agents cannot have 'input_mapping'")
if self.dialog is not None:
raise ValueError("human_gate agents cannot have 'dialog'")
if self.max_depth is not None:
raise ValueError("human_gate agents cannot have 'max_depth'")
elif self.type == "script":
@@ -591,6 +638,8 @@ def validate_agent_type(self) -> AgentDef:
raise ValueError("script agents cannot have 'retry'")
if self.input_mapping is not None:
raise ValueError("script agents cannot have 'input_mapping'")
if self.dialog is not None:
raise ValueError("script agents cannot have 'dialog'")
if self.max_depth is not None:
raise ValueError("script agents cannot have 'max_depth'")
elif self.type == "workflow":
@@ -616,6 +665,8 @@ def validate_agent_type(self) -> AgentDef:
raise ValueError("workflow agents cannot have 'max_agent_iterations'")
if self.retry is not None:
raise ValueError("workflow agents cannot have 'retry'")
if self.dialog is not None:
raise ValueError("workflow agents cannot have 'dialog'")
else:
# Regular agent or human_gate — input_mapping is not valid
if self.input_mapping is not None: