
feat(forms): conversational form filling with LLM assistant#72

Merged
danielnaab merged 45 commits into main from story-9/conversational-sections
Apr 19, 2026

Conversation

@danielnaab
Member

Summary

  • Adds conversational form filling mode using Claude via Bedrock
  • flex-assistant component renders in a sticky right panel (editor-layout pattern)
  • ScriptedFillingAgent for tests, BedrockFillingAgent for production
  • System prompt builder generates context-aware prompts with tool definitions
  • ConversationGateway persists message history (SQLite)
  • Toggle between traditional form view and chat view on conversational pages

Known Issues (follow-up PR)

  • Bedrock responses sometimes return empty text (fallback message shown)
  • Message ordering: assistant-first history causes API failures (fix in place but needs verification on EC2)
  • Need to verify EC2 instance has proper Bedrock credentials for this branch
  • Layout polish: assistant panel height tuning on various viewport sizes

Test plan

  • Integration tests pass (4 tests covering full flow, toggle, conditional fields)
  • All 769 tests pass
  • Manual E2E testing blocked by Bedrock credential issue on EC2

Add FillingAgent interface, FillingContext, FillingTurn, ConversationMessage, and ConversationGateway types.
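
For context, a minimal sketch of how these types might fit together. Only the type names come from this commit; every field shape and signature below is an assumption:

```ts
// Sketch only: type names are from the commit, shapes are assumptions.
type Field = { id: string; label: string; required?: boolean; condition?: unknown };
type FieldGroup = { id: string; title: string; condition?: unknown; fields: Field[] };

type ConversationMessage = {
  role: 'user' | 'assistant';
  content: string;
  toolCalls?: unknown[];
  createdAt: string;
};

type FillingContext = {
  groups: FieldGroup[]; // all groups on the page, unfiltered (see the bug fix below)
  fieldsCollected: Record<string, unknown>;
  history: ConversationMessage[];
};

type FillingTurn = {
  message: string; // assistant text to show the user
  fieldsCollected: Record<string, unknown>;
  finished: boolean; // true once all applicable required fields are collected
};

interface FillingAgent {
  // userMessage is null on the first turn (initial greeting, added later in this PR)
  advance(context: FillingContext, userMessage: string | null): Promise<FillingTurn>;
}

interface ConversationGateway {
  appendMessage(sessionId: string, message: ConversationMessage): Promise<void>;
  getMessages(sessionId: string): Promise<ConversationMessage[]>;
  clear(sessionId: string): Promise<void>;
}
```
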
Add deterministic filling agent for testing conversational forms:
- Walks through required fields sequentially
- Evaluates field and group conditions using evaluateCondition
- Skips fields/groups whose conditions aren't met
- Returns finished=true when all applicable required fields are collected

Test coverage includes:
- Initial field prompt with empty context
- Value collection and progression to next field
- Completion detection
- Conditional field/group skipping

ScriptedFillingAgent provides a baseline for testing conversational
form behavior before implementing the LLM-based BedrockFillingAgent.
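
A rough sketch of the deterministic walk described above, using the types sketched earlier and assuming an evaluateCondition(condition, values) signature (the commit names the function but not its shape):

```ts
declare function evaluateCondition(condition: unknown, values: Record<string, unknown>): boolean;

class ScriptedFillingAgent implements FillingAgent {
  async advance(ctx: FillingContext, userMessage: string | null): Promise<FillingTurn> {
    const collected = { ...ctx.fieldsCollected };

    // Record the user's answer against the field we asked about last turn.
    const pending = this.nextField(ctx.groups, collected);
    if (userMessage !== null && pending) {
      collected[pending.id] = userMessage;
    }

    const next = this.nextField(ctx.groups, collected);
    if (!next) {
      return { message: 'That covers everything on this page.', fieldsCollected: collected, finished: true };
    }
    return { message: `Please provide: ${next.label}`, fieldsCollected: collected, finished: false };
  }

  // First required field whose own condition and enclosing group's condition
  // hold and which has not been collected yet; skips the rest.
  private nextField(groups: FieldGroup[], collected: Record<string, unknown>): Field | undefined {
    for (const group of groups) {
      if (group.condition && !evaluateCondition(group.condition, collected)) continue;
      for (const field of group.fields) {
        if (!field.required) continue;
        if (field.condition && !evaluateCondition(field.condition, collected)) continue;
        if (!(field.id in collected)) return field;
      }
    }
    return undefined;
  }
}
```
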
Adds SqliteConversationGateway with:
- appendMessage() to store messages with role, content, toolCalls, createdAt
- getMessages() to retrieve messages in insertion order
- clear() to remove all messages for a session
- Proper session isolation

Part of story-9: conversational form filling agent infrastructure.
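
A sketch of the gateway against better-sqlite3; the driver choice and schema are assumptions (the PR only says SQLite). Session isolation falls out of keying every statement on session_id:

```ts
import Database from 'better-sqlite3'; // assumed driver

class SqliteConversationGateway implements ConversationGateway {
  private db: Database.Database;

  constructor(dbPath: string) {
    this.db = new Database(dbPath);
    this.db.exec(`CREATE TABLE IF NOT EXISTS conversation_messages (
      id INTEGER PRIMARY KEY AUTOINCREMENT,
      session_id TEXT NOT NULL,
      role TEXT NOT NULL,
      content TEXT NOT NULL,
      tool_calls TEXT,
      created_at TEXT NOT NULL
    )`);
  }

  async appendMessage(sessionId: string, m: ConversationMessage): Promise<void> {
    this.db.prepare(
      'INSERT INTO conversation_messages (session_id, role, content, tool_calls, created_at) VALUES (?, ?, ?, ?, ?)'
    ).run(sessionId, m.role, m.content, m.toolCalls ? JSON.stringify(m.toolCalls) : null, m.createdAt);
  }

  async getMessages(sessionId: string): Promise<ConversationMessage[]> {
    // Insertion order is preserved by the autoincrement id.
    return this.db.prepare(
      'SELECT role, content, tool_calls, created_at FROM conversation_messages WHERE session_id = ? ORDER BY id'
    ).all(sessionId).map((r: any) => ({
      role: r.role,
      content: r.content,
      toolCalls: r.tool_calls ? JSON.parse(r.tool_calls) : undefined,
      createdAt: r.created_at,
    }));
  }

  async clear(sessionId: string): Promise<void> {
    this.db.prepare('DELETE FROM conversation_messages WHERE session_id = ?').run(sessionId);
  }
}
```
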
Implemented buildSystemPrompt function that generates comprehensive
system prompts for LLM-based filling agents. The builder:
- Describes the form structure with all groups and requirements
- Documents three tools: collect_field, explain_field, skip_field
- Lists remaining fields to collect based on current state
- Respects conditional logic for both groups and fields
- Filters out already-collected fields
- Provides field metadata (type, choices, validation, help text)

Test coverage includes:
- Form structure description
- Tool descriptions
- Field filtering (collected vs uncollected)
- Conditional field evaluation
- Conditional group evaluation
- Field metadata inclusion
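
A condensed sketch of the builder's filtering logic; the real prompt also documents the three tools and full field metadata, and the wording here is an assumption:

```ts
function buildSystemPrompt(groups: FieldGroup[], collected: Record<string, unknown>): string {
  const remaining: string[] = [];
  for (const group of groups) {
    // Respect conditional logic for groups...
    if (group.condition && !evaluateCondition(group.condition, collected)) continue;
    for (const field of group.fields) {
      // ...and for fields, and filter out already-collected fields.
      if (field.id in collected) continue;
      if (field.condition && !evaluateCondition(field.condition, collected)) continue;
      remaining.push(`- ${field.id}: ${field.label}${field.required ? ' (required)' : ''}`);
    }
  }
  return [
    'You are helping a user complete a form through conversation.',
    'Tools: collect_field, explain_field, skip_field.',
    'Remaining fields to collect:',
    ...remaining,
  ].join('\n');
}
```
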
- Add BedrockFillingAgent class that implements FillingAgent interface
- Uses Vercel AI SDK generateText with tool-use pattern
- Defines three tools: collect_field, explain_field, skip_field
- Parses tool calls and updates fieldsCollected
- Determines form completion by checking remaining required fields
- Includes comprehensive test coverage with 8 test cases
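
A sketch of the agent's core call, assuming AI SDK v4-style generateText/tool with zod parameters and the @ai-sdk/amazon-bedrock provider; the model id and tool schemas are assumptions. The tools deliberately carry no execute handlers, since the PR applies tool calls to session state itself:

```ts
import { generateText, tool } from 'ai';
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { z } from 'zod';

class BedrockFillingAgent implements FillingAgent {
  async advance(ctx: FillingContext, userMessage: string | null): Promise<FillingTurn> {
    const result = await generateText({
      model: bedrock('anthropic.claude-3-5-sonnet-20240620-v1:0'), // assumed model id
      system: buildSystemPrompt(ctx.groups, ctx.fieldsCollected),
      messages: [
        ...ctx.history.map((m) => ({ role: m.role, content: m.content })),
        ...(userMessage !== null ? [{ role: 'user' as const, content: userMessage }] : []),
      ],
      // No execute handlers: tool calls are parsed below and applied to session state.
      tools: {
        collect_field: tool({
          description: 'Record a value the user provided for a field.',
          parameters: z.object({ fieldId: z.string(), value: z.string() }),
        }),
        explain_field: tool({
          description: 'Explain what a field means or why it is needed.',
          parameters: z.object({ fieldId: z.string() }),
        }),
        skip_field: tool({
          description: 'Skip a field that does not apply.',
          parameters: z.object({ fieldId: z.string() }),
        }),
      },
    });

    const collected = { ...ctx.fieldsCollected };
    for (const call of result.toolCalls) {
      if (call.toolName === 'collect_field') {
        const { fieldId, value } = call.args as { fieldId: string; value: string };
        collected[fieldId] = value;
      }
    }

    // Finished when no applicable required field remains uncollected.
    const unfinished = ctx.groups.some(
      (g) =>
        (!g.condition || evaluateCondition(g.condition, collected)) &&
        g.fields.some(
          (f) =>
            f.required &&
            (!f.condition || evaluateCondition(f.condition, collected)) &&
            !(f.id in collected),
        ),
    );

    return { message: result.text, fieldsCollected: collected, finished: !unfinished };
  }
}
```
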
Implements GET and POST handlers for conversational form page delivery:

- GET /forms/:specId/:sessionId/pages/:pageIndex/chat - renders ChatPanel with conversation history
- POST /forms/:specId/:sessionId/pages/:pageIndex/chat - processes user message, calls FillingAgent, returns JSON (X-Live-Chat) or redirects

Key behaviors:
- Checks deliveryMode='conversational' before rendering chat interface
- Integrates with ConversationGateway for message persistence
- Calls FillingAgent.advance() to process user input and collect fields
- Updates FormSession.fields with collected data
- Returns JSON response for live chat (X-Live-Chat header)
- Supports branch-qualified routes for preview deployments
- Loads chat.js client script for progressive enhancement

Part of Story 9: Carlos completes complex sections through conversation
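
A sketch of the POST handler's flow, assuming an Express-style router; router, spec, and sessions come from the enclosing module, userMsg/assistantMsg are hypothetical helpers that build ConversationMessage records, and this version already reflects the history-ordering fix made later in the PR:

```ts
router.post('/forms/:specId/:sessionId/pages/:pageIndex/chat', async (req, res) => {
  const { sessionId, pageIndex } = req.params;
  const session = await sessions.get(sessionId); // assumed session store
  const page = spec.pages[Number(pageIndex)];
  if (page.deliveryMode !== 'conversational' && page.deliveryMode !== 'hybrid') {
    return res.status(404).end();
  }

  // Build the context from history *before* persisting the new user message,
  // so the agent never sees the message twice.
  const history = await conversationGateway.getMessages(sessionId);
  const turn = await fillingAgent.advance(
    { groups: page.groups, fieldsCollected: session.fields, history },
    req.body.message,
  );

  await conversationGateway.appendMessage(sessionId, userMsg(req.body.message));
  await conversationGateway.appendMessage(sessionId, assistantMsg(turn.message));
  session.fields = turn.fieldsCollected; // update FormSession.fields
  await sessions.save(session);

  // Live chat clients send X-Live-Chat and get JSON; others get a redirect.
  if (req.get('X-Live-Chat')) {
    return res.json({ message: turn.message, finished: turn.finished });
  }
  return res.redirect(req.originalUrl);
});
```
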
- Add toggle button in static form view when deliveryMode is 'conversational' or 'hybrid'
- Add toggle button in chat view when deliveryMode is 'hybrid'
- Update chat view to accept both 'conversational' and 'hybrid' delivery modes
- Toggle only appears when conversation gateway and filling agent are available

- Import BedrockFillingAgent, ScriptedFillingAgent, SqliteConversationGateway
- Create conversationGateway instance with formsDbPath
- Create fillingAgent based on USE_SCRIPTED_AGENT env var
- Pass both to createFormRouter

This completes the integration of conversational form filling at the
server entrypoint level. Forms can now toggle between static and
conversational modes with full LLM support.
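
Roughly, the wiring might look like this (variable names beyond those in the commit are assumptions):

```ts
const conversationGateway = new SqliteConversationGateway(formsDbPath);
const fillingAgent = process.env.USE_SCRIPTED_AGENT
  ? new ScriptedFillingAgent()
  : new BedrockFillingAgent();

const formRouter = createFormRouter({
  conversationGateway,
  fillingAgent,
  // ...existing dependencies
});
```
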
Modified testFormSpec to set page-2 (Employment) deliveryMode to 'conversational'.
This page includes the employment and income groups with conditional fields
(employmentType conditional on employed=Yes, income group conditional on employed=Yes).

Enables testing of conversational page flow including:
- Agent-driven field collection
- Conditional field handling
- Chat interface rendering
- View toggle functionality

All 765 tests pass.

- Create InMemoryConversationGateway for testing
- Test full flow: static page → conversational page → review → submit
- Test conditional field handling (skipping fields when conditions not met)
- Test both form and chat view access for conversational pages
- Verify conversation messages are persisted
- Verify fields are collected correctly by agent

Bug fix: Pass all groups to filling agent instead of pre-filtered groups.
The agent needs to evaluate conditions dynamically as it collects fields,
not just work with currently visible groups.
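
In sketch form, where visibleGroups is a hypothetical name for the old pre-filtering step:

```ts
// Before (buggy): the agent only saw groups visible for the current values,
// so fields whose conditions become true mid-conversation were unreachable.
// const context = { groups: visibleGroups(page, session.fields), fieldsCollected: session.fields, history };

// After: pass every group and let the agent evaluate conditions as values arrive.
const context = { groups: page.groups, fieldsCollected: session.fields, history };
```
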
Bedrock expects tool results after assistant messages that contain tool
calls. Since we handle tool execution outside the AI SDK (we use the
tool calls to update session state directly), we filter these messages
from the conversation history to avoid Bedrock validation errors.

This allows the conversational flow to work end-to-end.

The agent expects userMessage to not be in the history yet. Previously
we were appending the user message before building the context, causing
the agent to see it twice (once in history, once as current response).
This led to concatenated responses like 'danhello?hello?'.

Now we build the context first, call the agent, then append both user
and assistant messages to the conversation history.
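
The fix in sketch form (helper names as before are assumptions):

```ts
// Before: the user message was persisted first, so the agent saw it both in
// history and as the current message - hence replies like 'danhello?hello?'.
// await conversationGateway.appendMessage(sessionId, userMsg(text));
// const history = await conversationGateway.getMessages(sessionId);

// After: build the context first, call the agent, then persist both messages.
const history = await conversationGateway.getMessages(sessionId);
const turn = await fillingAgent.advance({ groups, fieldsCollected, history }, text);
await conversationGateway.appendMessage(sessionId, userMsg(text));
await conversationGateway.appendMessage(sessionId, assistantMsg(turn.message));
```
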
Updated system prompt to:
- Extract all information from each user response
- Avoid re-asking for already provided information
- Only ask follow-up questions when information is missing
- Move forward efficiently

This fixes the issue where the agent would ask about fields that were
already answered (e.g., asking for firstName after user said 'dan jones').

When a user first navigates to a conversational page, the agent now
generates an initial greeting by calling advance() with null. This
provides a natural starting point for the conversation instead of
showing an empty chat panel with just the page title.

Updated tests to expect the initial greeting message.
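
A sketch of the greeting seeding in the GET handler (helper names assumed):

```ts
let history = await conversationGateway.getMessages(sessionId);
if (history.length === 0) {
  // advance(context, null) asks the agent for an opening message.
  const turn = await fillingAgent.advance(
    { groups: page.groups, fieldsCollected: session.fields, history: [] },
    null,
  );
  await conversationGateway.appendMessage(sessionId, assistantMsg(turn.message));
  history = await conversationGateway.getMessages(sessionId);
}
```
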
Previously we were filtering out assistant messages with tool calls,
which caused the agent to lose memory of what it had asked. Now we
include all messages in the history, formatted as simple text content.
This allows the agent to maintain conversational context and ask
appropriate follow-up questions.
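
A sketch of the history flattening; the exact text rendering of tool calls is an assumption:

```ts
const messages = history.map((m) => ({
  role: m.role,
  content:
    m.toolCalls && m.toolCalls.length > 0
      ? `${m.content}\n[recorded: ${m.toolCalls.map((t: any) => t.toolName).join(', ')}]`
      : m.content,
}));
```
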

Also noting: we should consider using the flex-assistant component from the design system for a better chat UI.

Updated system prompt to explicitly tell Claude to always include
conversational text alongside tool calls. When collecting a field,
immediately ask for the next one in the same message.

Also added fallback logic: if result.text is empty (shouldn't happen
now), generate a default message to avoid Bedrock 'empty content' error.

This fixes the issue where agent would collect a field but not ask
a follow-up question, leaving the user confused about what to do next.
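
The fallback in sketch form (the fallback wording is an assumption):

```ts
let message = result.text;
if (!message) {
  // Shouldn't happen now that the prompt demands text alongside tool calls,
  // but Bedrock rejects empty assistant content, so never persist an empty string.
  message = result.toolCalls.some((c) => c.toolName === 'collect_field')
    ? 'Got it - I have recorded that. Anything else you can tell me?'
    : 'Could you tell me a bit more?';
}
```
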
Major refactor to use existing flex-assistant component instead of
custom ChatPanel:

- Replace ChatPanel with flex-assistant component
- Create hybrid layout: form fields on left, assistant on right
- Add conversational-form.js client script to wire up events
- Load conversation history into assistant on page load
- Handle assistant:message-submitted events
- Add CSS for conversational-form-layout (two-column grid)

Benefits:
- Reuse proven assistant component from editor
- Better UX with proper chat semantics
- Show form and assistant side-by-side like the homework implementation
- Simpler, less custom code

Tests need updating to check for flex-assistant instead of chat-panel.

Updated tests to expect flex-assistant and conversational-form-layout
instead of chat-panel. All tests passing.

Applies editor-layout pattern: position sticky with max-block-size: 100dvh
and overflow: hidden. The flex-assistant component has height: 100% and
expects its parent to provide height constraint. This makes the assistant
panel stay within viewport bounds with sticky header and scrollable messages.

Changed from max-block-size + overflow: hidden to block-size: 100dvh
with display: flex. The flex-assistant component expects height: 100%
from parent, and needs the parent to be flex to properly distribute
space between header, messages, and input areas.

The flex-assistant web component wasn't being registered because
components.js wasn't loaded. Added script tag to load it before
conversational-form.js so the custom element is defined.

Changed from block-size: 100dvh to max-block-size with header offset
calculation. The assistant now sticks below the header with proper
height constraint, keeping input visible without page scroll.
1. Add "Back to Form View" link in chat mode for escape hatch
2. Always show navigation area with helpful prompt or Continue button
3. Update finished detection to check for completion message from assistant
4. Update system prompt to explicitly say "complete" when done
5. Make page header and controls more compact with flexbox

This makes the conversational flow more intuitive - users always know
what to do next and can easily switch back to traditional form if needed.

Three issues preventing the chat from working:

1. conversational-form.js wasn't in dist/ - added copy to build step
2. Script ran before custom element was defined - changed to type="module"
   and added customElements.whenDefined() await
3. Removed duplicate components.js script tag (Layout already loads it)
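
The resulting load order, sketched; the event payload shape and the panel's API are assumptions:

```ts
// conversational-form.js - runs as type="module", so top-level await is available.
await customElements.whenDefined('flex-assistant');

const assistant = document.querySelector('flex-assistant') as any;
assistant.addEventListener('assistant:message-submitted', async (event: any) => {
  const res = await fetch(window.location.pathname, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-Live-Chat': '1' },
    body: JSON.stringify({ message: event.detail.text }),
  });
  const { message } = await res.json();
  assistant.appendMessage?.({ role: 'assistant', text: message }); // hypothetical panel method
});
```
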
1. Fixed POST handler rejecting hybrid delivery mode pages
2. Replaced "Processing..." fallback with helpful contextual messages
3. Use contentWidth="full" to break out of l-page-content constraints
4. Assistant panel now uses inset-block-start: 0 + max-block-size: 100dvh
   (matches editor-layout pattern exactly)
5. Added border-inline-start to visually separate assistant from form
6. Form content gets proper padding within its column

Bedrock requires messages to start with a user role. The initial
greeting is stored as an assistant message, so subsequent calls had
[assistant, user] ordering which caused silent API failures. Now
prepends a synthetic user greeting when history starts with assistant.
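
Sketched:

```ts
// Bedrock requires the first message in a conversation to have the user role.
if (messages.length > 0 && messages[0].role === 'assistant') {
  // Synthetic user turn; the exact placeholder text is an assumption.
  messages.unshift({ role: 'user', content: 'Hi, please help me fill out this page.' });
}
```
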
When the filling agent throws, return the error message as a chat
response so it's visible in the browser instead of being swallowed.
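
Sketched (handler context as in the earlier POST example):

```ts
let turn: FillingTurn;
try {
  turn = await fillingAgent.advance(context, text);
} catch (err) {
  // Surface the failure in the chat panel instead of swallowing it.
  return res.json({ message: `Assistant error: ${(err as Error).message}`, finished: false });
}
```
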
Merge branch 'main' into story-9/conversational-sections

# Conflicts:
#	src/entrypoints/app/server.tsx
danielnaab temporarily deployed to story-9-conversational-sections on April 19, 2026 at 14:51
danielnaab merged commit 808bdf1 into main on Apr 19, 2026
4 checks passed
