feat: queue pending prompts when session is busy (#16102) #19156
beenotung wants to merge 4 commits into anomalyco:dev from
Conversation
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
The following comment was made by an LLM; it may be inaccurate: Based on my search results, I found several related PRs about queuing and message handling, but they appear to address different aspects or older features. The most relevant ones are:
However, none of these appear to be duplicates of PR #19156. Your PR (#19156) implements a new backend mechanism to queue pending prompts when the session is busy (instead of rejecting with BusyError), which is different from the earlier client-side queuing and UI control features. No duplicate PRs found for the specific feature being implemented in PR #19156.
- Fixed extra parenthesis in catch block (line 834)
- Added .opencode/memory/ to gitignore to keep agent notes local
- Add priority field to queue data structure (urgent/normal/background)
- Implement batched draining with a configurable limit per loop iteration
- Sort queued messages by priority, then by queue time (FIFO within the same priority)
- Add context tagging for queued messages with a <queued-message> wrapper
- Update PromptInput schema to accept a priority parameter
- Update both sync and async prompt endpoints to support priority

This enhancement allows users to send high-priority messages that are processed first, while lower-priority messages are batched efficiently.
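The commit above describes priority-ordered, batched draining. A minimal sketch of that behavior could look like the following; note that `QueuedPrompt`, `drainQueue`, and `tagQueued` are illustrative names, not the PR's actual identifiers:

```typescript
// Illustrative sketch of the priority queue described above.
type Priority = "urgent" | "normal" | "background";

interface QueuedPrompt {
  text: string;
  priority: Priority;
  queuedAt: number; // ms timestamp; preserves FIFO order within a priority
}

const PRIORITY_RANK: Record<Priority, number> = {
  urgent: 0,
  normal: 1,
  background: 2,
};

// Drain up to `limit` prompts per loop iteration: highest priority first,
// FIFO within the same priority. Mutates `queue`, leaving the remainder.
function drainQueue(queue: QueuedPrompt[], limit: number): QueuedPrompt[] {
  queue.sort(
    (a, b) =>
      PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority] ||
      a.queuedAt - b.queuedAt,
  );
  return queue.splice(0, limit);
}

// Wrap a drained prompt so the model can tell it was injected mid-task.
function tagQueued(p: QueuedPrompt): string {
  return `<queued-message priority="${p.priority}">${p.text}</queued-message>`;
}
```

Batching with a per-iteration limit keeps a burst of background messages from starving the main task, while the two-key sort guarantees urgent prompts always jump the line.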
- Update prompt_async error handling test to accept both .catch() and try/catch
- Add comprehensive tests for the queue pending prompts feature:
  - Queuing behavior when the session is busy
  - Priority levels (urgent/normal/background)
  - Smart draining with priority ordering
  - Batch limiting per iteration
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines. |
I'm adding additional changes to the PR; I will test them and re-open it when it's ready. Others are welcome to join as well.
Issue for this PR
Closes #16102
Type of change
What does this PR do?
When the session is busy (agent mid-task), incoming prompts are now queued instead of being rejected with a BusyError. The queued prompts are injected as user messages at the start of each loop iteration, before the message history is loaded, which allows mid-task steering without interrupting the current task.
This implements the core mechanism requested in #16102: draining queued messages at the start of each `loop()` iteration and injecting them as context before the next LLM call.

Note: This PR was primarily generated by an AI coding assistant (opencode). Additional review for edge cases and consistency with existing patterns would be valuable.
How did you verify your code works?
The changes handle both sync and async prompt endpoints:
Screenshots / recordings
N/A (backend change)
Checklist