
feat: queue input for running automation flows #268

Closed
mason5052 wants to merge 4 commits into vxcontrol:feature/next-release from mason5052:codex/issue-192-automation-queue

Conversation

@mason5052
Contributor

Summary

  • allow automation flows to accept follow-up input while status is Running
  • queue submitted input in the backend so it is delivered at the next task boundary instead of requiring an interrupt
  • keep Stop available in the UI and add queue-focused backend tests

Problem

Issue #192 asks for tighter assistant and automation integration. A smaller but immediately useful part of that request is use case 2: submit new instructions while automation is already running without interrupting the current task.

Today PentAGI blocks that path in two places.

  • the automation UI hides submit while the flow is Running and only offers Stop
  • the backend input path uses an unbuffered handoff, so follow-up instructions are not handled as a reliable queue

Solution

Implement the queueing path for running automation flows without expanding scope into assistant handoff or file injection yet.

  • flow input now uses a bounded FIFO queue with explicit queue-full errors and tests for enqueue, early error, cancel, and stop-drain behavior
  • provider switching now happens when queued input is actually processed, so queued messages follow the current flow execution path safely
  • the automation form keeps provider changes disabled, but now allows message submission while Running and shows Stop alongside Submit
  • queue-focused helper text explains that new instructions are applied at the next task boundary
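
The bounded-queue pattern described above can be sketched with a buffered channel and a select/default fallback; the type, method names, and capacity are illustrative, not the actual PentAGI implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrQueueFull is the explicit error returned instead of blocking the caller.
var ErrQueueFull = errors.New("input queue is full")

// InputQueue is a minimal bounded FIFO queue built on a buffered channel.
type InputQueue struct {
	ch chan string
}

func NewInputQueue(capacity int) *InputQueue {
	return &InputQueue{ch: make(chan string, capacity)}
}

// Enqueue accepts input without blocking; when the buffer is full it
// reports an explicit error rather than stalling the submitter.
func (q *InputQueue) Enqueue(msg string) error {
	select {
	case q.ch <- msg:
		return nil
	default:
		return ErrQueueFull
	}
}

// Drain discards everything still queued, e.g. when the flow is stopped,
// and returns how many items were dropped.
func (q *InputQueue) Drain() int {
	n := 0
	for {
		select {
		case <-q.ch:
			n++
		default:
			return n
		}
	}
}

func main() {
	q := NewInputQueue(2)
	fmt.Println(q.Enqueue("first"))  // <nil>
	fmt.Println(q.Enqueue("second")) // <nil>
	fmt.Println(q.Enqueue("third"))  // input queue is full
	fmt.Println(q.Drain())           // 2
}
```

In this sketch, deferring provider switching would simply mean resolving the provider when an item is dequeued by the flow worker rather than when `Enqueue` is called.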

This intentionally targets use case 2 from #192. Assistant-generated plans, structured handoff, and file or data-source injection can still follow in later work.

User Impact

Users can add follow-up instructions to a running automation flow without interrupting the active task. The current task continues, the new instruction is queued for the next handoff point, and Stop remains available when they do want to cancel execution.

Test Plan

  • go test ./pkg/controller/...
  • npx eslint src/features/flows/flow-form.tsx src/features/flows/messages/flow-automation-messages.tsx src/providers/flow-provider.tsx
  • npm run build
  • git diff --check

Notes

npm run lint still fails on this Windows host because of pre-existing unrelated frontend lint errors in files such as src/components/shared/markdown.tsx, src/components/ui/textarea.tsx, and several existing flow/search components. The files changed in this PR pass the targeted eslint command above.

Signed-off-by: mason5052 <ehehwnwjs5052@gmail.com>
Copilot AI review requested due to automatic review settings April 16, 2026 00:59

Copilot AI left a comment


Pull request overview

This PR enables submitting follow-up instructions to an automation flow while it is Running by queueing input server-side and adjusting the UI to allow submission without interrupting the current task.

Changes:

  • Backend: switch flow input from an unbuffered handoff to a bounded buffered queue and process provider switches at consumption time.
  • Frontend: allow message submission while Running, keep Stop visible, and add queue-focused helper/placeholder text.
  • Backend tests: add queue-focused unit tests covering enqueue, full-queue error, early error propagation, cancel, and stop-drain behavior.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

Summary per file:

  • frontend/src/providers/flow-provider.tsx: removes the provider override from the automation message submission payload
  • frontend/src/features/flows/messages/flow-automation-messages.tsx: enables input submission while Running, keeps Stop available, adds queue messaging
  • frontend/src/features/flows/flow-form.tsx: adds props to allow input while loading and to show both Submit and Stop buttons as needed
  • backend/pkg/controller/flow.go: implements bounded input queueing, stop-time discard/drain handling, and defers provider switching to processing time
  • backend/pkg/controller/flow_test.go: adds unit tests for queued input behavior and stop-drain semantics


Comment threads: backend/pkg/controller/flow.go (outdated), backend/pkg/controller/flow.go, backend/pkg/controller/flow_test.go
@asdek asdek changed the base branch from main to feature/next-release April 22, 2026 18:32
@asdek
Contributor

asdek commented Apr 22, 2026

@mason5052 thanks again - you've been a consistent and high-quality contributor and this PR is no exception in terms of engineering care.

after reviewing the approach carefully I'm going to decline merging it in its current shape, and I want to explain exactly why so the reasoning is clear.

the core concern: in-memory queue fragility

the buffered-channel queue solves the immediate "don't block the caller" problem, but it introduces a class of issues we'd be inheriting permanently:

  • no persistence. a restart, crash, or container restart silently drops everything in the queue. users who submitted follow-up instructions would never know they were lost, and we'd have no way to surface that.
  • no visibility. there's nothing in the UI, the database, or the GraphQL schema that lets a user see what's waiting in the queue. they submit and… hope. this is especially problematic for automation flows that can run for tens of minutes.
  • no management. there is no way to cancel, reorder, or inspect queued items. once something is in the channel it either gets processed or silently dropped on stop/drain. that's a regression in user control compared to the current hard-stop model.

the task-boundary model creates runaway flows

the deeper architectural issue is what happens at the "next task boundary". PentAGI's current model is flow -> tasks -> subtasks -> actions with deterministic lifetime. when queued input is consumed, the flow will spawn a new top-level task (or equivalent). without a hard bound on the queue depth and without the user being able to see the outstanding task graph, the flow becomes open-ended in a way that's very hard to reason about. users would have no reliable signal for "this flow is done."
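
The open-ended lifetime described above can be illustrated with a hypothetical worker-loop sketch (all names invented for illustration): every item consumed at a task boundary spawns another top-level task, so the flow only terminates once the queue happens to be empty at a boundary.

```go
package main

import "fmt"

// runTask stands in for executing one top-level task to completion.
func runTask(id int, instruction string) {
	fmt.Printf("task %d: %s\n", id, instruction)
}

// runFlow drains queued input at each task boundary. With nothing visible
// to the user and more input arriving while a task runs, the flow has no
// predictable end: it returns only when the queue is momentarily empty.
func runFlow(initial string, queued <-chan string) int {
	tasks := 1
	runTask(tasks, initial)
	for {
		select {
		case instruction := <-queued:
			tasks++
			runTask(tasks, instruction) // each queued item extends the flow
		default:
			return tasks // queue momentarily empty: flow ends here
		}
	}
}

func main() {
	q := make(chan string, 4)
	q <- "scan the second host"
	q <- "summarize findings"
	fmt.Println("total tasks:", runFlow("initial objective", q))
}
```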

thank you for tackling this - the problem itself is real and worth solving properly.

@asdek asdek closed this Apr 22, 2026
@asdek
Contributor

asdek commented Apr 23, 2026

@mason5052 I've tried to implement this algorithm here 5067e8f
what do you think about that?

@mason5052
Contributor Author

Thanks for the detailed review and for spelling out the concerns so clearly. I agree the in-memory queue version introduces too much hidden state around persistence, visibility, and flow lifetime, so closing this version makes sense.

I also looked through 5067e8f5a44456984f8b3401d47e8aff4722189b, and I think the flow-management-tool direction is much more aligned with PentAGI's task model. I'm happy to treat this PR as superseded and follow up later with tests or docs around that approach if that would be useful.

mason5052 added a commit to mason5052/pentagi that referenced this pull request May 8, 2026
- Reword PR vxcontrol#268 references to talk about review feedback rather
  than the PR being rejected.
- Clarify host.docker.internal availability: not universally provided
  by the core compose stack; Docker Desktop typically resolves it,
  while Linux/operator-managed compose stacks may need an explicit
  extra_hosts: host.docker.internal:host-gateway entry or another
  controlled endpoint.
- Make the safe initial Burp example unambiguously read-only: drop
  start_active_scan from the illustrative allowlist and call out
  that active capabilities belong to a later, explicitly gated
  milestone with scope and approval controls.
- Rephrase the awkward 'PentAGI must not be inferred...' sentence
  for clarity.
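
For reference, the extra_hosts entry mentioned above would look roughly like this in a compose file (the service name is illustrative):

```yaml
services:
  pentagi:
    extra_hosts:
      # Map host.docker.internal to the Docker host gateway on Linux /
      # operator-managed compose stacks where it is not provided by default.
      - "host.docker.internal:host-gateway"
```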

Signed-off-by: mason5052 <ehehwnwjs5052@gmail.com>
