feat(tools): markdown output for internal tool results (#1172)

senamakel merged 3 commits into tinyhumansai:main
Conversation
…timize LLM token usage

Wires the markdown output path added in the previous commit through concrete internal tools, so the agent loop sees compact markdown instead of pretty-printed JSON when `context.prefer_markdown_tool_output` is on.

Tools converted (override `execute_with_options` + `supports_markdown`):

- cron_list, cron_runs, cron_add, cron_run, cron_update
- web_search_tool, current_time
- git_operations (status / log / branch sub-ops)

Each tool keeps the JSON content block for callers that want raw structure, and populates `ToolResult.markdown_formatted` with a hand-rolled markdown rendering that the harness prefers for LLM input.

Also reformats turn.rs / skills/types.rs lines touched by the framework PR via `cargo fmt`.
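The opt-in surface the commit describes can be sketched as follows — a minimal synchronous stand-in for the real async trait. Only `execute_with_options`, `supports_markdown`, and `ToolCallOptions { prefer_markdown }` come from the commit text; everything else here is illustrative:

```rust
// Minimal synchronous stand-in for the async trait described in the commit.
// Names other than execute_with_options / supports_markdown / prefer_markdown
// are illustrative, not the actual project code.
#[derive(Default)]
pub struct ToolCallOptions {
    pub prefer_markdown: bool,
}

pub trait Tool {
    fn execute(&self, args: &str) -> String;

    // Default implementation forwards to execute(), so tools that were not
    // converted keep compiling and behaving exactly as before.
    fn execute_with_options(&self, args: &str, _options: &ToolCallOptions) -> String {
        self.execute(args)
    }

    fn supports_markdown(&self) -> bool {
        false
    }
}

// A legacy tool that overrides nothing still works through the new entry point.
pub struct EchoTool;

impl Tool for EchoTool {
    fn execute(&self, args: &str) -> String {
        format!("echo: {args}")
    }
}
```

Converted tools override `execute_with_options`, inspect the flag, and attach the markdown rendering; unconverted tools fall through the default method untouched.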
📝 Walkthrough

This PR adds optional markdown-formatted tool outputs and per-call execution options.

Changes: Markdown Output Support Infrastructure
Sequence Diagram

```mermaid
sequenceDiagram
    actor Agent as Agent Harness
    participant Ctx as ContextManager
    participant Tool as Tool Implementation
    participant Result as ToolResult
    participant LLM as LLM
    Agent->>Ctx: prefer_markdown_tool_output()?
    Ctx-->>Agent: true/false
    Agent->>Agent: Build ToolCallOptions { prefer_markdown }
    Agent->>Tool: execute_with_options(args, options)
    Tool->>Tool: Generate JSON payload
    alt options.prefer_markdown & tool supports_markdown
        Tool->>Tool: Render markdown_formatted
    end
    Tool->>Result: return ToolResult { content/json, markdown_formatted? }
    Tool-->>Agent: ToolResult
    Agent->>Result: output_for_llm(prefer_markdown)
    alt prefer_markdown && markdown present
        Result-->>Agent: markdown_formatted
    else
        Result-->>Agent: content/json
    end
    Agent->>LLM: Send preferred format
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
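The selection step at the end of the sequence diagram can be sketched as a plain struct. Field and method names follow the diagram and PR text; the `String`-typed content block and sync builders are simplifications, not the actual `skills/types.rs` code:

```rust
// Sketch of the ToolResult selection shown in the diagram above. The
// String-typed `content` block is a simplification of the real type.
pub struct ToolResult {
    pub content: String,                    // canonical JSON content block
    pub markdown_formatted: Option<String>, // serialised as `markdownFormatted`
}

impl ToolResult {
    pub fn success(content: impl Into<String>) -> Self {
        Self { content: content.into(), markdown_formatted: None }
    }

    pub fn with_markdown(mut self, md: impl Into<String>) -> Self {
        self.markdown_formatted = Some(md.into());
        self
    }

    // Harness-side selector: markdown when preferred and present, else JSON.
    pub fn output_for_llm(&self, prefer_markdown: bool) -> &str {
        match (prefer_markdown, self.markdown_formatted.as_deref()) {
            (true, Some(md)) => md,
            _ => &self.content,
        }
    }
}
```

Because the fallback arm covers both "markdown not preferred" and "markdown absent", tools that never populate the field behave exactly as before.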
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/openhuman/tools/impl/filesystem/git_operations.rs (1)
452-462: ⚡ Quick win — Honor `prefer_markdown` in `execute_with_options` instead of ignoring it.

Line 455 receives the options but the method always forwards unchanged, while markdown is generated unconditionally downstream. This makes the option ineffective and does unnecessary work when markdown is disabled.

♻️ Minimal fix to enforce option semantics:

```diff
-    async fn execute_with_options(
-        &self,
-        args: serde_json::Value,
-        _options: ToolCallOptions,
-    ) -> anyhow::Result<ToolResult> {
+    async fn execute_with_options(
+        &self,
+        args: serde_json::Value,
+        options: ToolCallOptions,
+    ) -> anyhow::Result<ToolResult> {
@@
-        self.execute(args).await
+        let mut result = self.execute(args).await?;
+        if !options.prefer_markdown {
+            result.markdown_formatted = None;
+        }
+        Ok(result)
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/openhuman/tools/impl/filesystem/git_operations.rs` around lines 452-462: `execute_with_options` currently ignores the provided `ToolCallOptions`; read the `prefer_markdown` flag from the `ToolCallOptions` parameter and propagate it to the downstream execution path instead of always relying on internal markdown generation. Concretely, inspect `options.prefer_markdown` in `execute_with_options` and either (a) inject a boolean field `"prefer_markdown"` into the `serde_json::Value` args before calling `self.execute(args)`, or (b) if `self.execute` has an overload that accepts options, call that overload with the options; update the call to `self.execute(...)` accordingly so the markdown generation honors the `prefer_markdown` flag.

src/openhuman/tools/impl/system/current_time.rs (1)
123-159: ⚡ Quick win — Prefer JSON content blocks for the payload, then attach markdown.

Line 123 serializes JSON into a text block; this weakens structured-consumer consistency now that markdown is first-class. Returning a JSON content block and attaching markdown keeps both paths explicit.
♻️ Suggested refactor:

```diff
-    let mut result = ToolResult::success(serde_json::to_string_pretty(&payload)?);
+    let mut result = ToolResult::json(payload.clone());
     if options.prefer_markdown {
         let mut md = String::new();
@@
-        result.markdown_formatted = Some(md);
+        result = result.with_markdown(md);
     }
     Ok(result)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/openhuman/tools/impl/system/current_time.rs` around lines 123-159: the code currently serializes the payload into the primary `ToolResult` text block when building `result`, which loses the structured JSON representation. Change the logic so the serialized payload (from `serde_json::to_string_pretty(&payload)`) is stored as the JSON/structured content of the `ToolResult` (leave it as the canonical JSON content) and then, when `options.prefer_markdown` is true, generate the `md` string exactly as before and set `result.markdown_formatted = Some(md)` without replacing or moving the JSON content. Keep variable names `result`, `payload`, `options.prefer_markdown`, and fields like `markdown_formatted` intact so consumers receive both a JSON content block and an attached markdown representation.
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `src/openhuman/tools/impl/cron/list.rs`, around lines 41-43: the preview creation slices bytes with `&trimmed[..200]`, which can panic on multi-byte UTF-8. Change to character-based truncation: check `trimmed.chars().count() > 200` and build the preview with `trimmed.chars().take(200).collect::<String>()` (then append the ellipsis) instead of byte-slicing; update the conditional that sets `preview` accordingly (referencing the `preview` and `trimmed` variables).
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: CHILL
- Plan: Pro
- Run ID: cc6e2e49-558e-42a6-b0e8-6c731b645da2

⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (14)

- src/openhuman/agent/harness/session/turn.rs
- src/openhuman/config/schema/context.rs
- src/openhuman/context/manager.rs
- src/openhuman/skills/types.rs
- src/openhuman/tools/impl/cron/add.rs
- src/openhuman/tools/impl/cron/list.rs
- src/openhuman/tools/impl/cron/run.rs
- src/openhuman/tools/impl/cron/runs.rs
- src/openhuman/tools/impl/cron/update.rs
- src/openhuman/tools/impl/filesystem/git_operations.rs
- src/openhuman/tools/impl/network/web_search.rs
- src/openhuman/tools/impl/system/current_time.rs
- src/openhuman/tools/mod.rs
- src/openhuman/tools/traits.rs
`&trimmed[..200]` slices by bytes and panics at runtime on multibyte UTF-8 input (emoji, CJK). Switch to char-based truncation so the markdown rendering is safe for any prompt content. Reported by CodeRabbit on PR tinyhumansai#1172.
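A char-based replacement along the lines the comment suggests — `preview_of` and the 200-char cap are illustrative names, not the actual list.rs code:

```rust
// Char-based truncation sketch: `preview_of` is a hypothetical helper, not the
// actual project code.
fn preview_of(trimmed: &str) -> String {
    if trimmed.chars().count() > 200 {
        // take() walks char boundaries, so multi-byte input can never panic.
        let mut preview: String = trimmed.chars().take(200).collect();
        preview.push('…');
        preview
    } else {
        trimmed.to_string()
    }
}
```

By contrast, `&trimmed[..200]` indexes by byte offset and panics whenever byte 200 falls inside a multi-byte codepoint.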
🧹 Nitpick comments (2)
src/openhuman/tools/impl/cron/list.rs (2)
87-113: ⚡ Quick win — Add a direct test for the markdown-preferred execution path.

Current tests validate `execute()` only. A focused test for `execute_with_options(..., ToolCallOptions { prefer_markdown: true })` would lock in the new behavior and prevent silent regressions.

Test example:

```diff
 #[tokio::test]
 async fn returns_empty_list_when_no_jobs() {
@@
 }
+
+#[tokio::test]
+async fn returns_markdown_when_preferred() {
+    let tmp = TempDir::new().unwrap();
+    let cfg = test_config(&tmp).await;
+    let tool = CronListTool::new(cfg);
+
+    let result = tool
+        .execute_with_options(json!({}), ToolCallOptions { prefer_markdown: true })
+        .await
+        .unwrap();
+
+    assert!(!result.is_error);
+    assert_eq!(result.output().trim(), "[]");
+    assert_eq!(
+        result.markdown_formatted.as_deref(),
+        Some("_No scheduled cron jobs._")
+    );
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/openhuman/tools/impl/cron/list.rs` around lines 87-113: add a unit test that calls the tool's `execute_with_options` method directly with `ToolCallOptions { prefer_markdown: true }` to ensure the markdown branch is exercised. Stub or mock `cron::list_jobs` to return a known jobs value, call `execute_with_options` on the relevant struct, assert the returned `ToolResult` is success and that `result.markdown_formatted` is `Some` and equals the expected `render_jobs_markdown(&jobs)` output; also include a complementary assertion that when `prefer_markdown` is false, `markdown_formatted` remains `None`.
16-47: ⚡ Quick win — Escape markdown-sensitive values before inline interpolation.

`label`, `agent`, `command`, and `preview` are rendered raw; backticks/newlines can break markdown structure and reduce output reliability for the LLM path.

Proposed patch:

```diff
+fn escape_markdown_inline(value: &str) -> String {
+    value
+        .replace('\\', "\\\\")
+        .replace('`', "\\`")
+        .replace('\n', " ")
+        .replace('\r', " ")
+}
+
 fn render_jobs_markdown(jobs: &[CronJob]) -> String {
@@
-        let label = job.name.as_deref().unwrap_or(&job.id);
+        let label = escape_markdown_inline(job.name.as_deref().unwrap_or(&job.id));
         let _ = writeln!(out, "\n## {label}");
-        let _ = writeln!(out, "- **id**: `{}`", job.id);
-        let _ = writeln!(out, "- **schedule**: `{}`", job.expression);
+        let _ = writeln!(out, "- **id**: `{}`", escape_markdown_inline(&job.id));
+        let _ = writeln!(
+            out,
+            "- **schedule**: `{}`",
+            escape_markdown_inline(&job.expression)
+        );
@@
         if let Some(agent) = &job.agent_id {
-            let _ = writeln!(out, "- **agent**: `{agent}`");
+            let _ = writeln!(out, "- **agent**: `{}`", escape_markdown_inline(agent));
         }
-        let _ = writeln!(out, "- **command**: `{}`", job.command);
+        let _ = writeln!(
+            out,
+            "- **command**: `{}`",
+            escape_markdown_inline(&job.command)
+        );
@@
-            let _ = writeln!(out, "- **prompt**: {preview}");
+            let _ = writeln!(out, "- **prompt**: {}", escape_markdown_inline(&preview));
         }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/openhuman/tools/impl/cron/list.rs` around lines 16-47: the fields written with `writeln!` (`label`, `agent`, `command`, `preview`) are interpolated raw and can contain backticks or newlines that break markdown. Add a small helper (e.g., `escape_markdown_inline`) that normalizes/removes newlines and escapes or replaces backticks and other markdown-sensitive characters, then call it on `job.name`/`label`, `job.agent_id`/`agent`, `job.command`/`command`, and the computed `preview` before using them in the `writeln!` calls so all inline interpolations are safe.
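The escaping helper from the proposed patch, extracted standalone so the replacement order can be checked quickly — backslashes are doubled before backticks gain their escape, so already-escaped output is never escaped twice:

```rust
// Standalone copy of the helper proposed in the patch above, for verification.
fn escape_markdown_inline(value: &str) -> String {
    value
        .replace('\\', "\\\\") // escape backslashes first
        .replace('`', "\\`")   // then backticks, so "\\`" is not re-escaped
        .replace('\n', " ")
        .replace('\r', " ")
}
```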
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: CHILL
- Plan: Pro
- Run ID: 6cfff6b8-c6f6-4a8b-b883-66258f507107

📒 Files selected for processing (1)
- src/openhuman/tools/impl/cron/list.rs
* feat(remotion): Ghosty character library with transparent MOV variants (tinyhumansai#1059) Co-authored-by: WOZCODE <contact@withwoz.com>
* feat(composio/gmail): sync into memory tree (Slack-parity) (tinyhumansai#1056) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(scheduler-gate): throttle background AI on battery / busy CPU (tinyhumansai#1062)
* fix(core,cef): run core in-process and stop orphaning CEF helpers on Cmd+Q (tinyhumansai#1061)
* ci: add dedicated staging release workflow (tinyhumansai#1066)
* fix(sentry): Rust source context + per-release deploy marker (tinyhumansai#405) (tinyhumansai#1067)
* fix(welcome): re-enable OAuth buttons with focus/timeout recovery (tinyhumansai#1049) (tinyhumansai#1069) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore(dependencies): update pnpm-lock.yaml and Cargo.lock for package… (tinyhumansai#1082)
* fix(onboarding): personalize welcome agent greeting with user identity (tinyhumansai#1078)
* fix(chat): make agent message bubbles fit content width (tinyhumansai#1083)
* Feat/dmg checks (tinyhumansai#1084)
* fix(linux): Add X11 platform flags to .deb package launcher (tinyhumansai#1087) Co-authored-by: unn-Known1 <unn-known1@users.noreply.github.com>
* fix(sentry): auto-send React events; collapse core→tauri for desktop (tinyhumansai#1086) Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>
* fix(cef): run blank reload guard on the CEF UI thread (tinyhumansai#1092)
* fix(app): reload webview instead of restart_app in dev mode (tinyhumansai#1068) (tinyhumansai#1071)
* fix(linux): deliver X11 ozone flags via custom .desktop template (tinyhumansai#1091)
* fix(webview-accounts): retry data-dir purge so CEF handle race doesn't leak cookies (tinyhumansai#1076) (tinyhumansai#1081) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com> Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>
* fix(webview/slack): media perms + deep-link isolation (tinyhumansai#1074) (tinyhumansai#1080) Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>
* ci(release): split staging vs production workflows; promote staging tags (tinyhumansai#1094)
* Update release-staging.yml (tinyhumansai#1097)
* chore(staging): v0.53.5
* chore(staging): v0.53.6
* ci(staging): cut staging from main; add act local-debug helper (tinyhumansai#1099)
* chore(staging): v0.53.7
* fix(ci): correct sentry-cli download URL and trap scope (tinyhumansai#1100)
* chore(staging): v0.53.8
* feat(chat): forward thread_id to backend for KV cache locality (tinyhumansai#1095)
* fix(ci): bump pinned sentry-cli to 3.4.1 (2.34.2 was never published) (tinyhumansai#1102)
* chore(staging): v0.53.9
* fix(ci): drop bash trap in upload_sentry_symbols.sh; inline cleanup (tinyhumansai#1103)
* chore(staging): v0.53.10
* refactor(session): flatten session_raw/, switch md to YYYY_MM_DD (tinyhumansai#1098)
* Add full Composio managed-auth toolkit catalog (tinyhumansai#1093)
* ci: add diff-aware 80% coverage gate (Vitest + cargo-llvm-cov) (tinyhumansai#1104)
* feat(scripts): pnpm work + pnpm debug for agent-driven workflows (tinyhumansai#1105)
* ci: pull pnpm into CI image, drop redundant setup steps (tinyhumansai#1107)
* docs: add Cursor Cloud specific instructions to AGENTS.md (tinyhumansai#1106) Co-authored-by: Cursor Agent <cursoragent@cursor.com>
* chore(staging): v0.53.11
* docs: surface 80% coverage gate and scripts/debug runners (tinyhumansai#1108)
* feat(app): show Composio integrations as sorted icon grid on Skills (tinyhumansai#1109) Co-authored-by: Cursor Agent <cursoragent@cursor.com>
* feat(composio): client-side trigger enable/disable toggles (tinyhumansai#1110)
* feat(skills): channels grid + integrations card polish; tolerant Composio trigger decode (tinyhumansai#1112)
* chore(staging): v0.53.12
* feat(home): early-bird banner + assistant→agent terminology (tinyhumansai#1113)
* feat(updater): in-app auto-update with auto-download + restart prompt (tinyhumansai#677) (tinyhumansai#1114)
* chore(claude): add ship-and-babysit slash command (tinyhumansai#1115)
* feat(home): EarlyBirdyBanner + agent terminology + LinkedIn enrichment model pin (tinyhumansai#1118)
* fix(chat): single onboarding thread in sidebar after wizard (tinyhumansai#1116) Co-authored-by: Cursor Agent <cursoragent@cursor.com> Co-authored-by: Steven Enamakel <senamakel@users.noreply.github.com>
* fix: filter out global namespace from citation chips (tinyhumansai#1124) Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com> Co-authored-by: senamakel-droid <281415773+senamakel-droid@users.noreply.github.com>
* feat(nav): enable Memory tab in BottomTabBar (tinyhumansai#1125)
* feat(memory): singleton ingestion + status RPC + UI pill (tinyhumansai#1126)
* feat(human): mascot tab with viseme-driven lipsync (staging only) (tinyhumansai#1127)
* Fix CEF zombie processes on full app close and restart (tinyhumansai#1128) Co-authored-by: senamakel-droid <281415773+senamakel-droid@users.noreply.github.com> Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>
* Update issue templates for GitHub issue types (tinyhumansai#1146)
* feat(human): expand mascot expressions and tighten reply-speech state machine (tinyhumansai#1147)
* feat(memory): ingestion pipeline + tree-architecture docs + ops/schemas split (tinyhumansai#1142)
* feat(threads): surface live subagent work in parent thread (tinyhumansai#1122) (tinyhumansai#1159)
* fix(human): keep mascot mouth animating when TTS ships no viseme data (tinyhumansai#1160)
* feat(composio): consume backend markdownFormatted for LLM output (tinyhumansai#1165)
* fix(subagent): lazy-register toolkit actions filtered out of fuzzy top-K (tinyhumansai#1162)
* feat(memory): user-facing long-term memory window preset (tinyhumansai#1137) (tinyhumansai#1161)
* fix(tauri-shell): proactively kill stale openhuman RPC on startup (tinyhumansai#1166)
* chore(staging): v0.53.13
* fix(composio): per-action tool consumes backend markdownFormatted (tinyhumansai#1167)
* fix(threads): persist selectedThreadId across reloads (tinyhumansai#1168)
* feat(memory_tree): switch embed model to bge-m3 (1024-dim, 8K context) (tinyhumansai#1174) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(agent): drop redundant [Memory context] recall injection (tinyhumansai#1173)
* chore(memory_tree): drop body-read timeouts on Ollama HTTP calls (tinyhumansai#1171) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(transcript): emit thread_id + fix orchestrator missing cost (tinyhumansai#1169)
* fix(composio/gmail): phase out html2md, prefer text/plain MIME part (tinyhumansai#1170) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(tools): markdown output for internal tool results (tinyhumansai#1172)
* feat(security): enforce prompt-injection guard before model and tool execution (tinyhumansai#1175)
* fix(cef): popup paint dies after first frame — skip blank-page guard for popups (tinyhumansai#1079) (tinyhumansai#1182) Co-authored-by: Steven Enamakel <31011319+senamakel@users.noreply.github.com>
* chore(sentry): rename OPENHUMAN_SENTRY_DSN → OPENHUMAN_CORE_SENTRY_DSN (tinyhumansai#1186)
* feat(remotion): add yellow mascot character with all animation variants (tinyhumansai#1193) Co-authored-by: Neel Mistry <neelmistry@Neels-MacBook-Pro.local> Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* refactor(composio): hide raw connection ID, derive friendly label (tinyhumansai#1153) (tinyhumansai#1185) Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
* fix(windows): align install.ps1 MSI with per-machine scope (tinyhumansai#913) (tinyhumansai#1187) Co-authored-by: Cursor <cursoragent@cursor.com>
* fix(tauri): deterministic CEF teardown on full app close (tinyhumansai#1120) (tinyhumansai#1189) Co-authored-by: Cursor <cursoragent@cursor.com>
* fix(composio): cap Gmail HTML body before strip (crash mitigation) (tinyhumansai#1191) Co-authored-by: Cursor <cursoragent@cursor.com>
* fix(auth): stop stale chat threads after signup (tinyhumansai#1192) Co-authored-by: Cursor <cursoragent@cursor.com>
* feat(sentry): staging-only "Trigger Sentry Test" button (tinyhumansai#1072) (tinyhumansai#1183)
* chore(staging): v0.53.14
* chore(staging): v0.53.15
* feat(composio): format trigger slugs into human-readable labels (tinyhumansai#1129) (tinyhumansai#1179) Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
* fix(ui): hide unsupported permission UI on non-macOS for Screen Intelligence (tinyhumansai#1194) Co-authored-by: Cursor <cursoragent@cursor.com>
* chore(tauri-shell): retire embedded Gmail webview-account flow (tinyhumansai#1181)
* feat(onboarding): replace welcome-agent bot with react-joyride walkthrough (tinyhumansai#1180)
* chore(release): v0.53.16
* fix(threads): preserve selectedThreadId on cold-boot identity hydration (tinyhumansai#1196)
* feat(core): version/shutdown/update RPCs + mid-thread integration refresh (tinyhumansai#1195)
* fix(mascot): swap to yellow mascot via @remotion/player (tinyhumansai#1200)
* feat(memory_tree): cloud-default LLM, queue priority, entity filter, Memory tab UI (tinyhumansai#1198) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Persist turn state + restore conversation history on cold-boot (tinyhumansai#1202)
* feat(mascot): floating desktop mascot via native NSPanel + WKWebView (macOS) (tinyhumansai#1203)
* fix(memory/tree): emit summary children as Obsidian wikilinks (tinyhumansai#1210) Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(tools): coding-harness baseline primitives (tinyhumansai#1205) (tinyhumansai#1208)
* docs: add Codex PR checklist for remote agents

---------

Co-authored-by: Steven Enamakel <31011319+senamakel@users.noreply.github.com>
Co-authored-by: WOZCODE <contact@withwoz.com>
Co-authored-by: sanil-23 <sanil@vezures.xyz>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Cyrus Gray <144336577+graycyrus@users.noreply.github.com>
Co-authored-by: CodeGhost21 <164498022+CodeGhost21@users.noreply.github.com>
Co-authored-by: oxoxDev <164490987+oxoxDev@users.noreply.github.com>
Co-authored-by: Mega Mind <146339422+M3gA-Mind@users.noreply.github.com>
Co-authored-by: Gaurang Patel <ptelgm.yt@gmail.com>
Co-authored-by: unn-Known1 <unn-known1@users.noreply.github.com>
Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Steven Enamakel <senamakel@users.noreply.github.com>
Co-authored-by: Steven Enamakel's Droid <enamakel.agent@tinyhumans.ai>
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: senamakel-droid <281415773+senamakel-droid@users.noreply.github.com>
Co-authored-by: YellowSnnowmann <167776381+YellowSnnowmann@users.noreply.github.com>
Co-authored-by: Neil <neil@maha.xyz>
Co-authored-by: Neel Mistry <neelmistry@Neels-MacBook-Pro.local>
Co-authored-by: obchain <167975049+obchain@users.noreply.github.com>
Co-authored-by: Jwalin Shah <jshah1331@gmail.com>
Summary

Applies the same `markdownFormatted` pattern as "feat(composio): consume backend markdownFormatted for LLM output" (#1165), but for in-process tools — opt-in per tool via a new `Tool::execute_with_options` default method. The converted tools populate `markdown_formatted` so the wins land immediately.

Problem
Internal tools currently dump pretty-printed JSON into the agent's tool-result stream. JSON is materially more expensive than markdown in the model context window — keys, quotes, braces, indentation all burn tokens that add up fast in tool-heavy loops. Composio already migrated to markdown server-side in #1165; the same lever is available for our own tools but nothing was wired through.
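A toy illustration of the cost claim: the same invented record rendered both ways. Byte counts are only a rough stand-in for tokens (real tokenizer behavior varies), but the punctuation overhead is visible either way:

```rust
// Invented example record, shown as pretty-printed JSON vs. markdown.
// Byte length is used as a crude proxy for token count.
const JSON_FORM: &str =
    "{\n  \"id\": \"job-1\",\n  \"schedule\": \"0 * * * *\",\n  \"enabled\": true,\n  \"command\": \"sync\"\n}";
const MARKDOWN_FORM: &str =
    "## job-1\n- schedule: `0 * * * *`\n- enabled: true\n- command: sync";
```

The markdown form drops the braces, quotes, and key punctuation, and the gap compounds across every tool call in a tool-heavy loop.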
Solution
Two small surfaces, both designed so existing tools keep working without changes:
- `ToolResult.markdown_formatted: Option<String>` (skills/types.rs) — optional field with `success_with_markdown()` / `with_markdown()` builders and an `output_for_llm(prefer_markdown)` selector. Serialised as `markdownFormatted` to match the composio shape; skipped when `None`.
- `Tool::execute_with_options(args, ToolCallOptions)` (tools/traits.rs) — new trait method with a default implementation that forwards to `execute(args)`. Tools that can render markdown override it, inspect `options.prefer_markdown`, and populate the field. Paired with `Tool::supports_markdown() -> bool` for telemetry/diagnostics.

Plumbing:
A new config flag, `context.prefer_markdown_tool_output` (default `true`), controls the behavior. `ContextManager` exposes the flag; `execute_tool_call` in agent/harness/session/turn.rs builds `ToolCallOptions` per call, dispatches via `execute_with_options`, and uses `output_for_llm(prefer_markdown)` so the harness picks markdown when present and falls back to the JSON content block otherwise.

Tools converted:
- `cron_list`, `cron_runs`, `cron_add`, `cron_run`, `cron_update`
- `web_search_tool`, `current_time`
- `git_operations` (status / log / branch sub-ops)

Tools intentionally not touched: those that already emit compact text (`memory_recall`, `gitbooks_*`, `schedule.handle_list`) or non-trivial domain renderers left for follow-ups (`memory_tree_*`, `proxy_config`, browser).

Submission Checklist
- docs/TESTING-STRATEGY.md
- skills/types.rs. Diff coverage on the markdown branches is best evaluated by the CI gate. (docs/TESTING-STRATEGY.md)

Impact
Low risk: if markdown output misbehaves, set `context.prefer_markdown_tool_output = false` — tools then fall back to their existing JSON output.

Related
- Follow-up candidates for markdown rendering (`memory_tree_*`, `proxy_config`, `schedule.handle_get`, browser tool, computer-control tools)
- `[agent_loop] tool=… returned markdown payload bytes=…` debug logs in real sessions

Summary by CodeRabbit