

@agenticbuddy agenticbuddy commented Sep 10, 2025

Supersedes #3358 (locked after accidental close).
Closes #3013.

Why

Reduce clutter and improve privacy: saved prompts can be very large (hundreds of lines). By default, their bodies are redacted from the transcript, and only the user-typed command is shown (e.g., /name). Users can override via CLI/config.
Enable flexibility: add per-submission custom instructions. A single saved prompt can be reused with different runtime directives without hard-coding variants.
Ensure resume shows exactly what the user saw: transcript text (verbatim/pretty) is persisted separately from the expanded model context.
Privacy: rollout remains unredacted to preserve resume correctness; logs/traces avoid leaking prompt bodies.

Before

Transcript always showed the full saved prompt body as the user message.
No support for appending custom instructions to saved prompts.
Resume reconstructed initial messages from expanded rollout content, so users could see long bodies instead of the short form.
Logs could include template text.

What We Did (Overview)

  • Transcript redaction: the transcript renders exactly what the user typed (/name or /name <instruction>). The full saved prompt body is still sent to the agent.
  • User control: CLI flags (--show-saved-prompt, alias --no-redact-saved-prompt) and config key redact_saved_prompt_body (default true).
  • Custom instructions: /name <instruction> (multiline supported). The agent receives a structured Directive where priority is explicit (CustomInstruction > SavedPrompt). Both parts are wrapped in CDATA for robustness.
  • Queue vs execute UX: while a task is running, the queue shows only what the user typed; on execute, redaction policy applies — ON shows the typed command, OFF shows a human-readable layout (“Custom instruction:” → “Saved prompt:”) via pretty_unredacted.
  • Rollout persistence: alongside expanded ResponseItem, core now also persists transcript-only EventMsg::UserMessage(message=shown) so resume displays exactly what the user saw live.
  • Privacy: rollout stays authoritative and unredacted; logs/traces avoid verbose prompt bodies.
  • Docs: prompts.md and config.md updated.
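The Directive wrapping described above can be sketched as follows. Type, field, and tag names here are illustrative, not the PR's actual serialization; the `]]>`-splitting trick shows one common way to keep a CDATA terminator inside user text from closing the section early:

```rust
// Illustrative sketch of a structured Directive; names and tags are
// hypothetical, not the PR's actual types.
struct Directive {
    custom_instruction: Option<String>,
    saved_prompt: String,
}

/// Wrap text in a CDATA section. An embedded "]]>" terminator is split
/// across two sections so user text cannot close the section early.
fn cdata(text: &str) -> String {
    format!("<![CDATA[{}]]>", text.replace("]]>", "]]]]><![CDATA[>"))
}

impl Directive {
    /// Render with explicit priority: CustomInstruction before SavedPrompt.
    fn render(&self) -> String {
        let mut out = String::new();
        if let Some(ci) = &self.custom_instruction {
            out.push_str("<CustomInstruction>");
            out.push_str(&cdata(ci));
            out.push_str("</CustomInstruction>\n");
        }
        out.push_str("<SavedPrompt>");
        out.push_str(&cdata(&self.saved_prompt));
        out.push_str("</SavedPrompt>");
        out
    }
}
```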

After

By default, the transcript shows only what the user typed; bodies stay private unless redaction is disabled.
Saved prompts become significantly more flexible due to per-submission instructions.
Queues stay concise; execution history can render expanded context when redaction is OFF.
Resume faithfully replays transcript text (EventMsg), while model context is restored from expanded ResponseItem.
Existing saved prompts without arguments continue to work unchanged; default redaction is opt-out via CLI/config.
Privacy: transcript/logs are redacted; rollout remains intact for resume.
All tests green; workspace builds and lints clean.

Technical Details

  • TUI: introduce SubmittedWithDisplay { text, display, pretty_unredacted }. History uses display (redaction ON) or pretty_unredacted (redaction OFF). Agent always uses text.
  • Composer: extract multiline custom instruction; build Directive with explicit priority. Provide pretty_unredacted layout (“Custom instruction:” then “Saved prompt:”) when redaction OFF.
  • ChatWidget: queue always shows display_text; history selects display_text vs pretty_unredacted based on config; send Op::AddToHistory { text: shown }.
  • Core: handle Op::AddToHistory by appending to cross-session history and persisting transcript-only EventMsg::UserMessage. Expanded context still persisted as ResponseItem.
  • Config/CLI: add redact_saved_prompt_body key; wire CLI flags --show-saved-prompt / --no-redact-saved-prompt.
  • CLI tests: override logic is tested inline; a test-only helper verifies flag parsing. Removed an unused helper to satisfy clippy.
  • Cross-crate: exec and mcp-server initializers updated with redact_saved_prompt_body: None; core tests updated.
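The display-selection logic above can be sketched like this (method name and field details are hypothetical; only the struct and its three fields come from the PR description):

```rust
// Sketch of the TUI's transcript selection; field semantics follow the
// PR description, the method is a hypothetical illustration.
struct SubmittedWithDisplay {
    /// Full expanded text; always what the agent receives.
    text: String,
    /// What the user typed, e.g. "/review" (shown when redaction is ON).
    display: String,
    /// Human-readable expanded layout (shown when redaction is OFF).
    pretty_unredacted: String,
}

impl SubmittedWithDisplay {
    /// Pick what the history cell renders based on the config key
    /// redact_saved_prompt_body (default true).
    fn history_text(&self, redact_saved_prompt_body: bool) -> &str {
        if redact_saved_prompt_body {
            &self.display
        } else {
            &self.pretty_unredacted
        }
    }
}
```

The queue always renders display; the agent always receives text regardless of the redaction setting.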

Tests Added / Updated

  • selecting_custom_prompt_submits_file_contents (updated)
  • selecting_custom_prompt_with_instruction_wraps_and_displays_typed (added)
  • custom_prompt_shows_command_in_history (added)
  • custom_prompt_shows_body_when_redaction_disabled (added)
  • custom_instruction_with_cdata_terminator_does_not_panic_and_is_included (added)
  • CLI override tests (added):
    • flag_show_saved_prompt_sets_override_false
    • alias_no_redact_saved_prompt_sets_override_false
    • default_no_flag_sets_no_override

Docs

  • prompts.md: sections “Transcript redaction of saved prompts” and “Custom instructions for saved prompts”.
  • config.md: new key redact_saved_prompt_body.
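For illustration only (see the updated config.md for the authoritative form), the new key could be set in config.toml like this:

```toml
# Defaults to true; set to false to show full saved prompt bodies
# in the transcript (equivalent to --show-saved-prompt).
redact_saved_prompt_body = false
```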

Relation to #3164

This PR builds on the same motivation as #3164 (supporting arguments for saved prompts and keeping transcripts clean), but takes a slightly different approach:

  • #3164 (“Ability to include arguments in custom prompts”) passes user input through $PROMPT_ARGUMENT and relies on template logic to handle it.
  • This PR instead keeps saved prompts unchanged and provides the user’s inline instruction as high-priority context before the saved prompt (CustomInstruction > SavedPrompt).

Both approaches aim to make saved prompts more flexible. The difference here is mainly in UX: this PR avoids requiring template authors to add $ARGUMENTS handling, letting the model interpret both the instruction and the saved prompt directly. For users, this should feel simpler and more natural, while $PROMPT_ARGUMENT remains available for cases where deterministic parsing is preferred.

@agenticbuddy

I have read the CLA Document and I hereby sign the CLA

@agenticbuddy

Some screenshots of the new functionality.
With redaction:
(screenshot)
No redaction, one command is waiting:
(screenshot)
No redaction, both commands are executed:
(screenshot)

…-prompt flag and redact_saved_prompt_body config

Signed-off-by: Roman Aleynikov <agenticbuddy@gmail.com>
…name …”, multiline), send structured Directive (CustomInstruction > SavedPrompt); update docs; add tests incl. redaction OFF, CLI override, and CDATA robustness

Signed-off-by: Roman Aleynikov <agenticbuddy@gmail.com>
… ON shows “/name …”, redaction OFF shows “Custom instruction:” then “Saved prompt:”; agent still gets Directive (CustomInstruction > SavedPrompt)

Signed-off-by: Roman Aleynikov <agenticbuddy@gmail.com>
…; keep rollout unredacted; avoid leaking prompt text in logs; show verbatim /prompt in transcript

Signed-off-by: Roman Aleynikov <agenticbuddy@gmail.com>
…ne markers in session files; keep debug logs non-sensitive

Signed-off-by: Roman Aleynikov <agenticbuddy@gmail.com>
@agenticbuddy force-pushed the feat/prompt-redaction-clean branch from 8ff2baf to dc06dd4 on September 15, 2025 at 07:13
@vultuk

vultuk commented Sep 15, 2025

The use of $ARGUMENTS actively replicates the process used in Claude code. Using this method ensures that people switching do not need to modify current prompts.

@agenticbuddy

> The use of $ARGUMENTS actively replicates the process used in Claude code. Using this method ensures that people switching do not need to modify current prompts.

Thanks, I had the same thought early on, since I also use Claude Code a lot and love how $ARGUMENTS works there.

In practice though I’ve found myself augmenting saved prompts more often with arbitrary text, and the LLM has always handled both the instruction and the prompt body just fine, even with 1000+ line prompts. That’s the behavior I aimed to capture here: no special $ARGUMENTS wiring needed, while still being fully complementary to your approach.

dedrisian-oai pushed a commit that referenced this pull request Sep 25, 2025
…r + frontmatter hints (#3565)

Key features
- Custom prompts accept arguments: $1..$9, $ARGUMENTS, and $$ (literal)
- @ file picker in composer: type @ to fuzzy‑search and insert quoted
paths
- Frontmatter hints: optional description + argument-hint shown in
palette (body stripped before send)

Why
- Make saved prompts reusable with runtime parameters.
- Improve discoverability with concise, helpful hints in the slash
popup.
- Preserve privacy and approvals; no auto‑execution added.

Details
- Protocol: extend CustomPrompt with description, argument_hint
(optional).
- Core: parse minimal YAML‑style frontmatter at file top; strip it from
the submitted body.
- TUI: expand arguments; insert @ paths; render
description/argument-hint or fallback excerpt.
- Docs: prompts.md updated with frontmatter and argument examples.

Tests
- Frontmatter parsing (description/argument-hint extracted; body
stripped).
- Popup rows show description + argument-hint; excerpt fallback; builtin
name collision.
- Argument expansion for $1..$9, $ARGUMENTS, $$; quoted args and @ path
insertion.

Safety / Approvals
- No changes to approvals or sandboxing; prompts do not auto‑run tools.

Related
- Closes #2890
- Related #3265
- Complements #3403
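The $1..$9 / $ARGUMENTS / $$ expansion described in this commit could look roughly like the following. This is an illustrative sketch under the stated expansion rules, not the actual implementation:

```rust
// Illustrative sketch of prompt-argument expansion; the commit's real
// implementation may differ.
// $1..$9 -> positional arguments, $ARGUMENTS -> all arguments joined
// with spaces, $$ -> a literal "$".
fn expand_args(body: &str, args: &[&str]) -> String {
    let mut out = String::new();
    let mut chars = body.chars().peekable();
    while let Some(c) = chars.next() {
        if c != '$' {
            out.push(c);
            continue;
        }
        match chars.peek().copied() {
            Some('$') => {
                chars.next(); // consume second '$', emit one literal '$'
                out.push('$');
            }
            Some(d @ '1'..='9') => {
                chars.next(); // consume the digit
                let idx = d.to_digit(10).unwrap() as usize - 1;
                if let Some(a) = args.get(idx) {
                    out.push_str(a); // missing positions expand to nothing
                }
            }
            _ => {
                // Look ahead for the literal keyword "ARGUMENTS".
                let ahead: String = chars.clone().take("ARGUMENTS".len()).collect();
                if ahead == "ARGUMENTS" {
                    for _ in 0.."ARGUMENTS".len() {
                        chars.next();
                    }
                    out.push_str(&args.join(" "));
                } else {
                    out.push('$'); // unrecognized: keep the '$' as-is
                }
            }
        }
    }
    out
}
```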
dedrisian-oai pushed a commit that referenced this pull request Sep 29, 2025