
feat(tools): markdown output for internal tool results #1172

Merged
senamakel merged 3 commits into tinyhumansai:main from senamakel:feat/tools-md
May 4, 2026

Conversation


@senamakel senamakel commented May 4, 2026

Summary

  • Add a markdown output path for internal Rust tools so the agent loop can hand the LLM compact markdown instead of pretty-printed JSON when configured.
  • Mirrors the composio backend markdownFormatted pattern (feat(composio): consume backend markdownFormatted for LLM output #1165) but for in-process tools — opt-in per tool via a new Tool::execute_with_options default method.
  • Convert 9 high-traffic tools (cron_*, web_search, current_time, git_operations) to populate markdown_formatted so the wins land immediately.

Problem

Internal tools currently dump pretty-printed JSON into the agent's tool-result stream. JSON is materially more expensive than markdown in the model context window — keys, quotes, braces, indentation all burn tokens that add up fast in tool-heavy loops. Composio already migrated to markdown server-side in #1165; the same lever is available for our own tools but nothing was wired through.

Solution

Two small surfaces, both designed so existing tools keep working without changes:

  1. ToolResult.markdown_formatted: Option<String> (skills/types.rs) — optional field with success_with_markdown() / with_markdown() builders and an output_for_llm(prefer_markdown) selector. Serialised as markdownFormatted to match the composio shape; skipped when None.
  2. Tool::execute_with_options(args, ToolCallOptions) (tools/traits.rs) — new trait method with a default implementation that forwards to execute(args). Tools that can render markdown override it, inspect options.prefer_markdown, and populate the field. Paired with Tool::supports_markdown() -> bool for telemetry/diagnostics.
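A minimal sketch of the `ToolResult` surface described above (synchronous and without serde; field and method names follow the PR description, but the real type lives in `skills/types.rs`):

```rust
#[derive(Debug, Clone)]
pub struct ToolResult {
    pub content: String,                    // canonical JSON/text content block
    pub is_error: bool,
    pub markdown_formatted: Option<String>, // serialised as `markdownFormatted`
}

impl ToolResult {
    pub fn success(content: impl Into<String>) -> Self {
        Self { content: content.into(), is_error: false, markdown_formatted: None }
    }

    // Builder used by converted tools to attach a markdown rendering.
    pub fn with_markdown(mut self, md: impl Into<String>) -> Self {
        self.markdown_formatted = Some(md.into());
        self
    }

    // Selector: markdown when preferred and present, else the content block.
    pub fn output_for_llm(&self, prefer_markdown: bool) -> &str {
        match (&self.markdown_formatted, prefer_markdown) {
            (Some(md), true) => md,
            _ => &self.content,
        }
    }
}

fn main() {
    let r = ToolResult::success(r#"{"jobs":[]}"#).with_markdown("_No scheduled cron jobs._");
    assert_eq!(r.output_for_llm(true), "_No scheduled cron jobs._");
    assert_eq!(r.output_for_llm(false), r#"{"jobs":[]}"#);
}
```

The key property is that `output_for_llm(false)` and tools that never set the field behave exactly as before, which is what makes the change opt-in.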

Plumbing:

  • New config knob context.prefer_markdown_tool_output (default true).
  • ContextManager exposes the flag; execute_tool_call in agent/harness/session/turn.rs builds ToolCallOptions per call, dispatches via execute_with_options, and uses output_for_llm(prefer_markdown) so the harness picks markdown when present and falls back to the JSON content block otherwise.
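The trait default and harness-side selection can be sketched like this (a synchronous stand-in: the real `execute_with_options` is async and returns a full `ToolResult`; the `CurrentTime` stub, its `(content, markdown)` tuple, and the payload strings are invented for illustration):

```rust
#[derive(Clone, Copy, Default)]
pub struct ToolCallOptions {
    pub prefer_markdown: bool,
}

pub trait Tool {
    fn execute(&self, args: &str) -> Result<(String, Option<String>), String>;

    // Default forwards to execute(), so existing tools keep working unchanged.
    fn execute_with_options(
        &self,
        args: &str,
        _options: ToolCallOptions,
    ) -> Result<(String, Option<String>), String> {
        self.execute(args)
    }

    fn supports_markdown(&self) -> bool {
        false
    }
}

struct CurrentTime;

impl Tool for CurrentTime {
    fn execute(&self, _args: &str) -> Result<(String, Option<String>), String> {
        Ok((r#"{"iso":"2026-05-04T06:35:00Z"}"#.to_string(), None))
    }

    // Converted tools override this, inspect the flag, and attach markdown.
    fn execute_with_options(
        &self,
        args: &str,
        options: ToolCallOptions,
    ) -> Result<(String, Option<String>), String> {
        let (json, _) = self.execute(args)?;
        let md = options
            .prefer_markdown
            .then(|| "**Current time:** 2026-05-04T06:35:00Z".to_string());
        Ok((json, md))
    }

    fn supports_markdown(&self) -> bool {
        true
    }
}

fn main() {
    let opts = ToolCallOptions { prefer_markdown: true };
    let (json, md) = CurrentTime.execute_with_options("{}", opts).unwrap();
    // Harness-side selection: markdown when preferred and present, else JSON.
    let for_llm = md.as_deref().unwrap_or(&json);
    assert!(for_llm.starts_with("**Current time:**"));
}
```

Because the override only populates markdown when `prefer_markdown` is set, disabling the config knob falls straight back to the JSON path with no per-tool changes.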

Tools converted:

  • cron_list, cron_runs, cron_add, cron_run, cron_update
  • web_search_tool, current_time
  • git_operations (status / log / branch sub-ops)

Tools intentionally not touched: those that already emit compact text (memory_recall, gitbooks_*, schedule.handle_list) or non-trivial domain renderers left for follow-ups (memory_tree_*, proxy_config, browser).
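A per-tool renderer in the converted set might look roughly like this (the `CronJob` shape, headings, and the empty-list sentinel are illustrative assumptions; the real renderers live next to each tool):

```rust
use std::fmt::Write;

// Hypothetical job shape for illustration only.
struct CronJob {
    id: String,
    name: Option<String>,
    expression: String,
    command: String,
}

fn render_jobs_markdown(jobs: &[CronJob]) -> String {
    if jobs.is_empty() {
        return "_No scheduled cron jobs._".to_string();
    }
    let mut out = String::from("# Cron jobs\n");
    for job in jobs {
        let label = job.name.as_deref().unwrap_or(&job.id);
        let _ = writeln!(out, "\n## {label}");
        let _ = writeln!(out, "- **id**: `{}`", job.id);
        let _ = writeln!(out, "- **schedule**: `{}`", job.expression);
        let _ = writeln!(out, "- **command**: `{}`", job.command);
    }
    out
}

fn main() {
    assert_eq!(render_jobs_markdown(&[]), "_No scheduled cron jobs._");
    let jobs = vec![CronJob {
        id: "job-1".into(),
        name: Some("Daily digest".into()),
        expression: "0 9 * * *".into(),
        command: "agent run digest".into(),
    }];
    let md = render_jobs_markdown(&jobs);
    assert!(md.contains("## Daily digest"));
    assert!(md.contains("- **schedule**: `0 9 * * *`"));
}
```

This is the "string formatting on already-collected data" cost mentioned under Impact: the tool has the structured values in hand and only pays for a second rendering pass.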

Submission Checklist

  • Tests added or updated (happy path + at least one failure / edge case) per docs/TESTING-STRATEGY.md
  • N/A: framework + per-tool conversions are exercised by existing tool unit tests; new selector/serde tests added in skills/types.rs. Diff coverage on the markdown branches is best evaluated by the CI gate.
  • N/A: behaviour-only change — no feature rows added/removed/renamed in the coverage matrix.
  • N/A: no matrix feature IDs touched.
  • No new external network dependencies introduced (mock backend used per docs/TESTING-STRATEGY.md)
  • N/A: no release-cut surface touched.
  • N/A: no linked issue — proactive token-cost optimization.

Impact

  • Desktop/CLI: agent loops on tool-heavy turns send markdown to the LLM by default, cutting tokens spent on JSON syntax. JSON content blocks are still produced and serialised, so any UI or RPC consumer that reads structured tool output is unchanged.
  • Performance: net token-cost reduction in tool-heavy agent turns; the markdown rendering itself is cheap (string formatting on already-collected data).
  • Compatibility: opt-in per tool via the new trait default; no breaking signature changes. Disable globally by setting context.prefer_markdown_tool_output = false — tools then fall back to their existing JSON output.
  • Security: no auth/permission changes.

Related

  • Closes:
  • Follow-up PR(s)/TODOs:
    • convert remaining JSON-emitting tools (memory_tree_*, proxy_config, schedule.handle_get, browser tool, computer-control tools)
    • measure observed token savings via [agent_loop] tool=… returned markdown payload bytes=… debug logs in real sessions

Summary by CodeRabbit

  • New Features
    • Tools can optionally return human-friendly Markdown alongside JSON for clearer results.
    • New configuration setting (enabled by default) lets the agent prefer Markdown-formatted tool output.
    • Cron management, git operations, web search, time, and other tools now provide Markdown renderings when available.

senamakel added 2 commits May 3, 2026 23:23
Wires the markdown output path added in the previous commit through
concrete internal tools so the agent loop sees compact markdown
instead of pretty-printed JSON when context.prefer_markdown_tool_output
is on.

Tools converted (override execute_with_options + supports_markdown):
- cron_list, cron_runs, cron_add, cron_run, cron_update
- web_search_tool, current_time
- git_operations (status / log / branch sub-ops)

Each tool keeps the JSON content block for callers that want raw
structure, and populates ToolResult.markdown_formatted with a hand-
rolled markdown rendering that the harness prefers for LLM input.

Also reformats turn.rs / skills/types.rs lines touched by the
framework PR via cargo fmt.
@senamakel senamakel requested a review from a team May 4, 2026 06:35

coderabbitai Bot commented May 4, 2026

📝 Walkthrough


This PR adds optional markdown-formatted tool outputs and per-call execution options. It introduces ToolCallOptions { prefer_markdown }, an execute_with_options path and supports_markdown trait hook, a markdown_formatted field and output_for_llm on ToolResult, context wiring to propagate the preference, and updates multiple tools to optionally render markdown.

Changes

Markdown Output Support Infrastructure

  • Trait & Option Definitions (src/openhuman/tools/traits.rs): Adds pub struct ToolCallOptions { pub prefer_markdown: bool }; adds async fn execute_with_options(...) with a default that forwards to execute, and fn supports_markdown(&self) -> bool (default false) on the Tool trait.
  • ToolResult Type Enhancement (src/openhuman/skills/types.rs): ToolResult gains pub markdown_formatted: Option<String> (serde markdownFormatted); adds success_with_markdown, with_markdown, and output_for_llm(prefer_markdown); existing constructors set markdown_formatted: None; tests updated.
  • Configuration & Context (src/openhuman/config/schema/context.rs, src/openhuman/context/manager.rs): ContextConfig adds prefer_markdown_tool_output: bool (serde default true); ContextManager stores the flag and exposes prefer_markdown_tool_output(&self) -> bool.
  • Agent Harness Integration (src/openhuman/agent/harness/session/turn.rs): Agent::execute_tool_call reads self.context.prefer_markdown_tool_output(), builds ToolCallOptions { prefer_markdown }, calls tool.execute_with_options(...), and uses r.output_for_llm(prefer_markdown) for LLM-facing output (errors use the same selection); emits a debug log when markdown is chosen and present.
  • Tool Implementations (src/openhuman/tools/impl/...): Multiple tools (cron: add/list/run/runs/update, filesystem/git_operations, network/web_search, system/current_time) now implement supports_markdown() -> true, add execute_with_options(args, options) or route execute through it, and conditionally populate ToolResult.markdown_formatted when options.prefer_markdown is true; new renderer helpers format the markdown outputs.
  • Module Re-exports (src/openhuman/tools/mod.rs): Re-exports ToolCallOptions from traits to the public API.

Sequence Diagram

sequenceDiagram
    actor Agent as Agent Harness
    participant Ctx as ContextManager
    participant Tool as Tool Implementation
    participant Result as ToolResult
    participant LLM as LLM

    Agent->>Ctx: prefer_markdown_tool_output()?
    Ctx-->>Agent: true/false
    Agent->>Agent: Build ToolCallOptions { prefer_markdown }
    Agent->>Tool: execute_with_options(args, options)
    Tool->>Tool: Generate JSON payload
    alt options.prefer_markdown & tool supports_markdown
        Tool->>Tool: Render markdown_formatted
    end
    Tool->>Result: return ToolResult { content/json, markdown_formatted? }
    Tool-->>Agent: ToolResult
    Agent->>Result: output_for_llm(prefer_markdown)
    alt prefer_markdown && markdown present
        Result-->>Agent: markdown_formatted
    else
        Result-->>Agent: content/json
    end
    Agent->>LLM: Send preferred format

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Poem

🐰 I nibbled code and found a key,

Tools can whisper markdown to me.
JSON or pretty, whichever you choose,
The agent fetches both — no need to lose.
Hop, hum, render — happy outputs, whee!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • ❌ Docstring Coverage (Warning): Docstring coverage is 45.10%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • ✅ Description Check: Skipped; CodeRabbit's high-level summary is enabled.
  • ✅ Title check: The title 'feat(tools): markdown output for internal tool results' accurately summarizes the main change: adding markdown output support to tools, which is the primary purpose of this PR.
  • ✅ Linked Issues check: Skipped; no linked issues were found for this pull request.
  • ✅ Out of Scope Changes check: Skipped; no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
src/openhuman/tools/impl/filesystem/git_operations.rs (1)

452-462: ⚡ Quick win

Honor prefer_markdown in execute_with_options instead of ignoring it.

Line 455 receives options but the method always forwards unchanged, while markdown is generated unconditionally downstream. This makes the option ineffective and does unnecessary work when markdown is disabled.

♻️ Minimal fix to enforce option semantics
-    async fn execute_with_options(
-        &self,
-        args: serde_json::Value,
-        _options: ToolCallOptions,
-    ) -> anyhow::Result<ToolResult> {
+    async fn execute_with_options(
+        &self,
+        args: serde_json::Value,
+        options: ToolCallOptions,
+    ) -> anyhow::Result<ToolResult> {
@@
-        self.execute(args).await
+        let mut result = self.execute(args).await?;
+        if !options.prefer_markdown {
+            result.markdown_formatted = None;
+        }
+        Ok(result)
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/tools/impl/filesystem/git_operations.rs` around lines 452 -
462, execute_with_options currently ignores the provided ToolCallOptions; read
the prefer_markdown flag from the ToolCallOptions parameter and propagate it to
the downstream execution path instead of always relying on internal markdown
generation. Concretely, inspect options.prefer_markdown in execute_with_options
and either (a) inject a boolean field "prefer_markdown" into the
serde_json::Value args before calling self.execute(args) or (b) if self.execute
has an overload that accepts options, call that overload with the options;
update the call to self.execute(...) accordingly so the markdown generation
honors the prefer_markdown flag.
src/openhuman/tools/impl/system/current_time.rs (1)

123-159: ⚡ Quick win

Prefer JSON content blocks for the payload, then attach markdown.

Line 123 serializes JSON into a text block; this weakens structured-consumer consistency now that markdown is first-class. Returning a JSON content block and attaching markdown keeps both paths explicit.

♻️ Suggested refactor
-        let mut result = ToolResult::success(serde_json::to_string_pretty(&payload)?);
+        let mut result = ToolResult::json(payload.clone());
         if options.prefer_markdown {
             let mut md = String::new();
@@
-            result.markdown_formatted = Some(md);
+            result = result.with_markdown(md);
         }
         Ok(result)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/tools/impl/system/current_time.rs` around lines 123 - 159, The
code currently serializes the payload into the primary ToolResult text block
when building `result`, which loses the structured JSON representation; change
the logic in the function that builds `result` so the serialized payload (from
`serde_json::to_string_pretty(&payload)`) is stored as the JSON/structured
content of the `ToolResult` (leave it as the canonical JSON content) and then,
when `options.prefer_markdown` is true, generate the `md` string exactly as
before and set it to `result.markdown_formatted = Some(md)` without replacing or
moving the JSON content; keep variable names `result`, `payload`,
`options.prefer_markdown`, and fields like `markdown_formatted` intact so
consumers receive both a JSON content block and an attached Markdown
representation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/openhuman/tools/impl/cron/list.rs`:
- Around line 41-43: The preview creation is slicing bytes with &trimmed[..200]
which can panic on multi-byte UTF-8; change to use character-based truncation:
check trimmed.chars().count() > 200 and build the preview with
trimmed.chars().take(200).collect::<String>() (then append the ellipsis) instead
of byte-slicing; update the conditional that sets preview accordingly
(referencing the preview and trimmed variables).

---

Nitpick comments:
In `@src/openhuman/tools/impl/filesystem/git_operations.rs`:
- Around line 452-462: execute_with_options currently ignores the provided
ToolCallOptions; read the prefer_markdown flag from the ToolCallOptions
parameter and propagate it to the downstream execution path instead of always
relying on internal markdown generation. Concretely, inspect
options.prefer_markdown in execute_with_options and either (a) inject a boolean
field "prefer_markdown" into the serde_json::Value args before calling
self.execute(args) or (b) if self.execute has an overload that accepts options,
call that overload with the options; update the call to self.execute(...)
accordingly so the markdown generation honors the prefer_markdown flag.

In `@src/openhuman/tools/impl/system/current_time.rs`:
- Around line 123-159: The code currently serializes the payload into the
primary ToolResult text block when building `result`, which loses the structured
JSON representation; change the logic in the function that builds `result` so
the serialized payload (from `serde_json::to_string_pretty(&payload)`) is stored
as the JSON/structured content of the `ToolResult` (leave it as the canonical
JSON content) and then, when `options.prefer_markdown` is true, generate the
`md` string exactly as before and set it to `result.markdown_formatted =
Some(md)` without replacing or moving the JSON content; keep variable names
`result`, `payload`, `options.prefer_markdown`, and fields like
`markdown_formatted` intact so consumers receive both a JSON content block and
an attached Markdown representation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: cc6e2e49-558e-42a6-b0e8-6c731b645da2

📥 Commits

Reviewing files that changed from the base of the PR and between 644c5c8 and c1d463d.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
  • src/openhuman/agent/harness/session/turn.rs
  • src/openhuman/config/schema/context.rs
  • src/openhuman/context/manager.rs
  • src/openhuman/skills/types.rs
  • src/openhuman/tools/impl/cron/add.rs
  • src/openhuman/tools/impl/cron/list.rs
  • src/openhuman/tools/impl/cron/run.rs
  • src/openhuman/tools/impl/cron/runs.rs
  • src/openhuman/tools/impl/cron/update.rs
  • src/openhuman/tools/impl/filesystem/git_operations.rs
  • src/openhuman/tools/impl/network/web_search.rs
  • src/openhuman/tools/impl/system/current_time.rs
  • src/openhuman/tools/mod.rs
  • src/openhuman/tools/traits.rs

Comment thread src/openhuman/tools/impl/cron/list.rs Outdated
`&trimmed[..200]` slices by bytes and panics at runtime on multibyte
UTF-8 input (emoji, CJK). Switch to char-based truncation so the
markdown rendering is safe for any prompt content.

Reported by CodeRabbit on PR tinyhumansai#1172.
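The panic mode this commit fixes can be avoided with character-based truncation; `preview` is a hypothetical helper name for the sketch:

```rust
// Char-based preview truncation; `&s[..200]` would panic whenever byte 200
// falls inside a multi-byte UTF-8 sequence (emoji, CJK, accented letters).
fn preview(trimmed: &str, max_chars: usize) -> String {
    if trimmed.chars().count() > max_chars {
        let mut p: String = trimmed.chars().take(max_chars).collect();
        p.push('…');
        p
    } else {
        trimmed.to_string()
    }
}

fn main() {
    assert_eq!(preview("héllo", 3), "hél…");
    assert_eq!(preview("hi", 3), "hi");
}
```

`char`-wise iteration always cuts on a codepoint boundary, so the rendering is safe for any prompt content at the cost of an O(n) scan over the preview window.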

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
src/openhuman/tools/impl/cron/list.rs (2)

87-113: ⚡ Quick win

Add a direct test for the markdown-preferred execution path.

Current tests validate execute() only. A focused test for execute_with_options(..., ToolCallOptions { prefer_markdown: true }) would lock in the new behavior and prevent silent regressions.

Test example
 #[tokio::test]
 async fn returns_empty_list_when_no_jobs() {
@@
 }
+
+#[tokio::test]
+async fn returns_markdown_when_preferred() {
+    let tmp = TempDir::new().unwrap();
+    let cfg = test_config(&tmp).await;
+    let tool = CronListTool::new(cfg);
+
+    let result = tool
+        .execute_with_options(json!({}), ToolCallOptions { prefer_markdown: true })
+        .await
+        .unwrap();
+
+    assert!(!result.is_error);
+    assert_eq!(result.output().trim(), "[]");
+    assert_eq!(
+        result.markdown_formatted.as_deref(),
+        Some("_No scheduled cron jobs._")
+    );
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/tools/impl/cron/list.rs` around lines 87 - 113, Add a unit test
that calls the tool's execute_with_options method directly with ToolCallOptions
{ prefer_markdown: true } to ensure the markdown branch is exercised: stub or
mock cron::list_jobs to return a known jobs value, call execute_with_options on
the relevant struct (the implementation that defines execute_with_options),
assert the returned ToolResult is success and that result.markdown_formatted is
Some and equals the expected render_jobs_markdown(&jobs) output; also include a
complementary assertion that when prefer_markdown is false the
markdown_formatted remains None.

16-47: ⚡ Quick win

Escape markdown-sensitive values before inline interpolation.

label, agent, command, and preview are rendered raw; backticks/newlines can break markdown structure and reduce output reliability for the LLM path.

Proposed patch
+fn escape_markdown_inline(value: &str) -> String {
+    value
+        .replace('\\', "\\\\")
+        .replace('`', "\\`")
+        .replace('\n', " ")
+        .replace('\r', " ")
+}
+
 fn render_jobs_markdown(jobs: &[CronJob]) -> String {
@@
-        let label = job.name.as_deref().unwrap_or(&job.id);
+        let label = escape_markdown_inline(job.name.as_deref().unwrap_or(&job.id));
         let _ = writeln!(out, "\n## {label}");
-        let _ = writeln!(out, "- **id**: `{}`", job.id);
-        let _ = writeln!(out, "- **schedule**: `{}`", job.expression);
+        let _ = writeln!(out, "- **id**: `{}`", escape_markdown_inline(&job.id));
+        let _ = writeln!(
+            out,
+            "- **schedule**: `{}`",
+            escape_markdown_inline(&job.expression)
+        );
@@
         if let Some(agent) = &job.agent_id {
-            let _ = writeln!(out, "- **agent**: `{agent}`");
+            let _ = writeln!(out, "- **agent**: `{}`", escape_markdown_inline(agent));
         }
-        let _ = writeln!(out, "- **command**: `{}`", job.command);
+        let _ = writeln!(
+            out,
+            "- **command**: `{}`",
+            escape_markdown_inline(&job.command)
+        );
@@
-                let _ = writeln!(out, "- **prompt**: {preview}");
+                let _ = writeln!(out, "- **prompt**: {}", escape_markdown_inline(&preview));
             }
         }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/tools/impl/cron/list.rs` around lines 16 - 47, The fields
written with writeln! (label, agent, command, preview) are interpolated raw and
can contain backticks or newlines that break Markdown; add a small helper (e.g.,
escape_markdown_inline) that normalizes/removes newlines and escapes or replaces
backticks and other markdown-sensitive characters, then call it on
job.name/label, job.agent_id/agent, job.command/command and the computed preview
before using them in the writeln! calls so all inline interpolations are safe.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@src/openhuman/tools/impl/cron/list.rs`:
- Around line 87-113: Add a unit test that calls the tool's execute_with_options
method directly with ToolCallOptions { prefer_markdown: true } to ensure the
markdown branch is exercised: stub or mock cron::list_jobs to return a known
jobs value, call execute_with_options on the relevant struct (the implementation
that defines execute_with_options), assert the returned ToolResult is success
and that result.markdown_formatted is Some and equals the expected
render_jobs_markdown(&jobs) output; also include a complementary assertion that
when prefer_markdown is false the markdown_formatted remains None.
- Around line 16-47: The fields written with writeln! (label, agent, command,
preview) are interpolated raw and can contain backticks or newlines that break
Markdown; add a small helper (e.g., escape_markdown_inline) that
normalizes/removes newlines and escapes or replaces backticks and other
markdown-sensitive characters, then call it on job.name/label,
job.agent_id/agent, job.command/command and the computed preview before using
them in the writeln! calls so all inline interpolations are safe.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6cfff6b8-c6f6-4a8b-b883-66258f507107

📥 Commits

Reviewing files that changed from the base of the PR and between c1d463d and e521fe5.

📒 Files selected for processing (1)
  • src/openhuman/tools/impl/cron/list.rs

@senamakel senamakel merged commit 05ce526 into tinyhumansai:main May 4, 2026
19 checks passed
jwalin-shah added a commit to jwalin-shah/openhuman that referenced this pull request May 5, 2026
* feat(remotion): Ghosty character library with transparent MOV variants (tinyhumansai#1059)

Co-authored-by: WOZCODE <contact@withwoz.com>

* feat(composio/gmail): sync into memory tree (Slack-parity) (tinyhumansai#1056)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(scheduler-gate): throttle background AI on battery / busy CPU (tinyhumansai#1062)

* fix(core,cef): run core in-process and stop orphaning CEF helpers on Cmd+Q (tinyhumansai#1061)

* ci: add dedicated staging release workflow (tinyhumansai#1066)

* fix(sentry): Rust source context + per-release deploy marker (tinyhumansai#405) (tinyhumansai#1067)

* fix(welcome): re-enable OAuth buttons with focus/timeout recovery (tinyhumansai#1049) (tinyhumansai#1069)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(dependencies): update pnpm-lock.yaml and Cargo.lock for package… (tinyhumansai#1082)

* fix(onboarding): personalize welcome agent greeting with user identity (tinyhumansai#1078)

* fix(chat): make agent message bubbles fit content width (tinyhumansai#1083)

* Feat/dmg checks (tinyhumansai#1084)

* fix(linux): Add X11 platform flags to .deb package launcher (tinyhumansai#1087)

Co-authored-by: unn-Known1 <unn-known1@users.noreply.github.com>

* fix(sentry): auto-send React events; collapse core→tauri for desktop (tinyhumansai#1086)

Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>

* fix(cef): run blank reload guard on the CEF UI thread (tinyhumansai#1092)

* fix(app): reload webview instead of restart_app in dev mode (tinyhumansai#1068) (tinyhumansai#1071)

* fix(linux): deliver X11 ozone flags via custom .desktop template (tinyhumansai#1091)

* fix(webview-accounts): retry data-dir purge so CEF handle race doesn't leak cookies (tinyhumansai#1076) (tinyhumansai#1081)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>

* fix(webview/slack): media perms + deep-link isolation (tinyhumansai#1074) (tinyhumansai#1080)

Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>

* ci(release): split staging vs production workflows; promote staging tags (tinyhumansai#1094)

* Update release-staging.yml (tinyhumansai#1097)

* chore(staging): v0.53.5

* chore(staging): v0.53.6

* ci(staging): cut staging from main; add act local-debug helper (tinyhumansai#1099)

* chore(staging): v0.53.7

* fix(ci): correct sentry-cli download URL and trap scope (tinyhumansai#1100)

* chore(staging): v0.53.8

* feat(chat): forward thread_id to backend for KV cache locality (tinyhumansai#1095)

* fix(ci): bump pinned sentry-cli to 3.4.1 (2.34.2 was never published) (tinyhumansai#1102)

* chore(staging): v0.53.9

* fix(ci): drop bash trap in upload_sentry_symbols.sh; inline cleanup (tinyhumansai#1103)

* chore(staging): v0.53.10

* refactor(session): flatten session_raw/, switch md to YYYY_MM_DD (tinyhumansai#1098)

* Add full Composio managed-auth toolkit catalog (tinyhumansai#1093)

* ci: add diff-aware 80% coverage gate (Vitest + cargo-llvm-cov) (tinyhumansai#1104)

* feat(scripts): pnpm work + pnpm debug for agent-driven workflows (tinyhumansai#1105)

* ci: pull pnpm into CI image, drop redundant setup steps (tinyhumansai#1107)

* docs: add Cursor Cloud specific instructions to AGENTS.md (tinyhumansai#1106)

Co-authored-by: Cursor Agent <cursoragent@cursor.com>

* chore(staging): v0.53.11

* docs: surface 80% coverage gate and scripts/debug runners (tinyhumansai#1108)

* feat(app): show Composio integrations as sorted icon grid on Skills (tinyhumansai#1109)

Co-authored-by: Cursor Agent <cursoragent@cursor.com>

* feat(composio): client-side trigger enable/disable toggles (tinyhumansai#1110)

* feat(skills): channels grid + integrations card polish; tolerant Composio trigger decode (tinyhumansai#1112)

* chore(staging): v0.53.12

* feat(home): early-bird banner + assistant→agent terminology (tinyhumansai#1113)

* feat(updater): in-app auto-update with auto-download + restart prompt (tinyhumansai#677) (tinyhumansai#1114)

* chore(claude): add ship-and-babysit slash command (tinyhumansai#1115)

* feat(home): EarlyBirdyBanner + agent terminology + LinkedIn enrichment model pin (tinyhumansai#1118)

* fix(chat): single onboarding thread in sidebar after wizard (tinyhumansai#1116)

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Steven Enamakel <senamakel@users.noreply.github.com>

* fix: filter out global namespace from citation chips (tinyhumansai#1124)

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: senamakel-droid <281415773+senamakel-droid@users.noreply.github.com>

* feat(nav): enable Memory tab in BottomTabBar (tinyhumansai#1125)

* feat(memory): singleton ingestion + status RPC + UI pill (tinyhumansai#1126)

* feat(human): mascot tab with viseme-driven lipsync (staging only) (tinyhumansai#1127)

* Fix CEF zombie processes on full app close and restart (tinyhumansai#1128)

Co-authored-by: senamakel-droid <281415773+senamakel-droid@users.noreply.github.com>
Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>

* Update issue templates for GitHub issue types (tinyhumansai#1146)

* feat(human): expand mascot expressions and tighten reply-speech state machine (tinyhumansai#1147)

* feat(memory): ingestion pipeline + tree-architecture docs + ops/schemas split (tinyhumansai#1142)

* feat(threads): surface live subagent work in parent thread (tinyhumansai#1122) (tinyhumansai#1159)

* fix(human): keep mascot mouth animating when TTS ships no viseme data (tinyhumansai#1160)

* feat(composio): consume backend markdownFormatted for LLM output (tinyhumansai#1165)

* fix(subagent): lazy-register toolkit actions filtered out of fuzzy top-K (tinyhumansai#1162)

* feat(memory): user-facing long-term memory window preset (tinyhumansai#1137) (tinyhumansai#1161)

* fix(tauri-shell): proactively kill stale openhuman RPC on startup (tinyhumansai#1166)

* chore(staging): v0.53.13

* fix(composio): per-action tool consumes backend markdownFormatted (tinyhumansai#1167)

* fix(threads): persist selectedThreadId across reloads (tinyhumansai#1168)

* feat(memory_tree): switch embed model to bge-m3 (1024-dim, 8K context) (tinyhumansai#1174)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(agent): drop redundant [Memory context] recall injection (tinyhumansai#1173)

* chore(memory_tree): drop body-read timeouts on Ollama HTTP calls (tinyhumansai#1171)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(transcript): emit thread_id + fix orchestrator missing cost (tinyhumansai#1169)

* fix(composio/gmail): phase out html2md, prefer text/plain MIME part (tinyhumansai#1170)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(tools): markdown output for internal tool results (tinyhumansai#1172)

* feat(security): enforce prompt-injection guard before model and tool execution (tinyhumansai#1175)

* fix(cef): popup paint dies after first frame — skip blank-page guard for popups (tinyhumansai#1079) (tinyhumansai#1182)

Co-authored-by: Steven Enamakel <31011319+senamakel@users.noreply.github.com>

* chore(sentry): rename OPENHUMAN_SENTRY_DSN → OPENHUMAN_CORE_SENTRY_DSN (tinyhumansai#1186)

* feat(remotion): add yellow mascot character with all animation variants (tinyhumansai#1193)

Co-authored-by: Neel Mistry <neelmistry@Neels-MacBook-Pro.local>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor(composio): hide raw connection ID, derive friendly label (tinyhumansai#1153) (tinyhumansai#1185)

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>

* fix(windows): align install.ps1 MSI with per-machine scope (tinyhumansai#913) (tinyhumansai#1187)

Co-authored-by: Cursor <cursoragent@cursor.com>

* fix(tauri): deterministic CEF teardown on full app close (tinyhumansai#1120) (tinyhumansai#1189)

Co-authored-by: Cursor <cursoragent@cursor.com>

* fix(composio): cap Gmail HTML body before strip (crash mitigation) (tinyhumansai#1191)

Co-authored-by: Cursor <cursoragent@cursor.com>

* fix(auth): stop stale chat threads after signup (tinyhumansai#1192)

Co-authored-by: Cursor <cursoragent@cursor.com>

* feat(sentry): staging-only "Trigger Sentry Test" button (tinyhumansai#1072) (tinyhumansai#1183)

* chore(staging): v0.53.14

* chore(staging): v0.53.15

* feat(composio): format trigger slugs into human-readable labels (tinyhumansai#1129) (tinyhumansai#1179)

Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>

* fix(ui): hide unsupported permission UI on non-macOS for Screen Intelligence (tinyhumansai#1194)

Co-authored-by: Cursor <cursoragent@cursor.com>

* chore(tauri-shell): retire embedded Gmail webview-account flow (tinyhumansai#1181)

* feat(onboarding): replace welcome-agent bot with react-joyride walkthrough (tinyhumansai#1180)

* chore(release): v0.53.16

* fix(threads): preserve selectedThreadId on cold-boot identity hydration (tinyhumansai#1196)

* feat(core): version/shutdown/update RPCs + mid-thread integration refresh (tinyhumansai#1195)

* fix(mascot): swap to yellow mascot via @remotion/player (tinyhumansai#1200)

* feat(memory_tree): cloud-default LLM, queue priority, entity filter, Memory tab UI (tinyhumansai#1198)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Persist turn state + restore conversation history on cold-boot (tinyhumansai#1202)

* feat(mascot): floating desktop mascot via native NSPanel + WKWebView (macOS) (tinyhumansai#1203)

* fix(memory/tree): emit summary children as Obsidian wikilinks (tinyhumansai#1210)

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(tools): coding-harness baseline primitives (tinyhumansai#1205) (tinyhumansai#1208)

* docs: add Codex PR checklist for remote agents

---------

Co-authored-by: Steven Enamakel <31011319+senamakel@users.noreply.github.com>
Co-authored-by: WOZCODE <contact@withwoz.com>
Co-authored-by: sanil-23 <sanil@vezures.xyz>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Cyrus Gray <144336577+graycyrus@users.noreply.github.com>
Co-authored-by: CodeGhost21 <164498022+CodeGhost21@users.noreply.github.com>
Co-authored-by: oxoxDev <164490987+oxoxDev@users.noreply.github.com>
Co-authored-by: Mega Mind <146339422+M3gA-Mind@users.noreply.github.com>
Co-authored-by: Gaurang Patel <ptelgm.yt@gmail.com>
Co-authored-by: unn-Known1 <unn-known1@users.noreply.github.com>
Co-authored-by: Steven Enamakel <enamakel@tinyhumans.ai>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Steven Enamakel <senamakel@users.noreply.github.com>
Co-authored-by: Steven Enamakel's Droid <enamakel.agent@tinyhumans.ai>
Co-authored-by: google-labs-jules[bot] <161369871+google-labs-jules[bot]@users.noreply.github.com>
Co-authored-by: senamakel-droid <281415773+senamakel-droid@users.noreply.github.com>
Co-authored-by: YellowSnnowmann <167776381+YellowSnnowmann@users.noreply.github.com>
Co-authored-by: Neil <neil@maha.xyz>
Co-authored-by: Neel Mistry <neelmistry@Neels-MacBook-Pro.local>
Co-authored-by: obchain <167975049+obchain@users.noreply.github.com>
Co-authored-by: Jwalin Shah <jshah1331@gmail.com>