
feat(welcome): personality-driven onboarding agent with Gmail OAuth + instant greeting#578

Merged
graycyrus merged 8 commits into tinyhumansai:feat/agentic-onboarding from graycyrus:main
Apr 16, 2026

Conversation

@graycyrus
Contributor

@graycyrus graycyrus commented Apr 15, 2026

Summary

  • Rewrite welcome agent prompt: charismatic personality, shameless name-asking, Gmail OAuth flow via composio_authorize, no internal system names (Composio, SQLite, etc.), 80-150 word limit
  • Add composio_list_connections + composio_authorize to welcome agent tools, bump max_iterations 6→10
  • Enrich check_status JSON snapshot with user_profile (from /auth/me) and onboarding_tasks (from app-state.json)
  • Fix proactive delivery: emit chat_done instead of proactive_message, use default-thread for visibility
  • Pre-build snapshot in welcome_proactive.rs to skip one LLM round-trip
  • Instant draft greeting ("Hey {name}!") published immediately while LLM generates full welcome
  • 4s socket delay for frontend mount timing, 180s timeout guard
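The instant-draft bullet above boils down to a tiny template helper; a minimal sketch (the template text comes from the PR, the function name and the no-name fallback are assumptions):

```rust
// Sketch of the instant draft greeting published while the LLM generates
// the full welcome. Template text is from the PR description; the
// no-name fallback branch is an assumption for illustration.
fn instant_draft(first_name: Option<&str>) -> String {
    match first_name {
        Some(name) => format!("Hey {name}! Welcome to OpenHuman — give me a sec..."),
        None => "Hey! Welcome to OpenHuman — give me a sec...".to_string(),
    }
}
```

The point of the two-phase design is that this string costs nothing to produce, so the user sees chat activity in seconds while the 30-50s LLM call runs in the background.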

Test plan

  • Reset chat_onboarding_completed = false in config.toml
  • Restart app, complete onboarding overlay
  • Verify instant draft message appears in chat within ~4-5 seconds
  • Verify full LLM welcome message appears ~30-50s later
  • Verify no mention of Composio, SQLite, or internal system names
  • Verify tone is human/conversational, not corporate/technical
  • Verify Gmail OAuth link offered if Gmail not connected
  • Edge case: no user profile → agent asks for name
  • Edge case: composio disabled → Gmail step skipped silently
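The first test step above is a one-line config edit; roughly (key name from the PR, surrounding keys omitted):

```toml
# config.toml: re-trigger onboarding for a manual test run
chat_onboarding_completed = false
```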

Summary by CodeRabbit

  • New Features

    • Welcome agent now supports Gmail connection authorization through Composio integration.
    • Proactive welcome messages now include contextual information gathering before sending.
  • Improvements

    • Enhanced welcome agent conversational flow with expanded iterations and better message framing.
    • Improved onboarding experience with refined greeting logic and account setup guidance.

… proactive delivery fixes

- Rewrite welcome prompt.md: charismatic personality, shameless name-asking,
  Gmail connection flow via composio_authorize, trimmed subscription upsell
- Add composio_list_connections + composio_authorize to welcome agent tools,
  bump max_iterations 6→10
- Enrich check_status JSON snapshot with user_profile (from /auth/me) and
  onboarding_tasks (from app-state.json), both best-effort with timeout
- Make app_state::ops pub(crate) so complete_onboarding can import helpers
- Fix proactive delivery: emit chat_done instead of proactive_message so
  existing frontend handlers render it; use default-thread for visibility
- welcome_proactive.rs: run full agent workflow (not shortcut), 3s socket
  delay, 120s timeout guard on run_single
- Update welcome agent test assertions (4 tools, max_iterations=10)
- Fix model resolution in from_config_for_agent: respect agent
  definition's ModelSpec hint instead of always using config.default_model.
  Welcome agent now correctly uses agentic-v1 (fast) instead of
  reasoning-v1 (slow).
- Pre-build snapshot in welcome_proactive.rs: gather config + user
  profile + onboarding tasks + composio connections in Rust, skip
  iteration 1 tool calls. One LLM call instead of two.
- Reduce socket delay 3s→1s, timeout 120s→60s.
- Prompt tone overhaul: ban technical jargon (SQLite, memory backend,
  tool names), enforce 80-150 word limit, no capability lists, no
  Settings navigation instructions. Talk like a friend, not a product page.

agentic-v1 model takes 50-60s on first call due to large context
(system prompt + resumed transcript history). Increase timeout from
60s to 180s to prevent false timeouts while we investigate context
size reduction.

agentic-v1 endpoint is returning 504 gateway timeouts. Revert the
model resolution change so all agents use config.default_model
(reasoning-v1) which is slower but reliable. The per-agent model
hint feature can be re-enabled once the backend gateway is stable.

Composio is an internal integration name — users shouldn't see it.
Strengthened the prompt rules to ban all internal names (Composio,
SQLite, memory backend, model routes, etc.) and reworded the
integration reference section header.

Publish a short template greeting ("Hey {name}! Welcome to OpenHuman
— give me a sec...") immediately after fetching user profile, before
the LLM agent runs. User sees something in chat within 1-2 seconds
instead of waiting 30-50s for the full personalized message.

The 1s delay wasn't enough — frontend needs ~3-4s to mount the
Conversations page and subscribe to socket events after the onboarding
overlay closes. Draft message was arriving before the listener was ready.
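The mount-timing fix described in that last commit is just a delay before publishing; a sketch under stated assumptions (the helper name is invented, the real code would use `tokio::time::sleep` in an async context, and the PR uses 4s):

```rust
use std::thread;
use std::time::Duration;

// Illustrative sketch: hold the draft message until the frontend has had
// time to mount the Conversations page and subscribe to socket events.
// The 4s value is from the PR; this helper and its sync sleep are assumed.
fn publish_after_mount_delay<F: FnOnce()>(delay: Duration, publish: F) {
    thread::sleep(delay); // real code: tokio::time::sleep(delay).await
    publish();
}
```

A fixed sleep is a pragmatic stopgap; a frontend-sent "listener ready" ack would remove the race entirely, at the cost of a new socket event.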
@coderabbitai
Contributor

coderabbitai Bot commented Apr 15, 2026

📝 Walkthrough

The welcome agent is enhanced to support a richer, multi-step conversational flow with expanded tool access. Configuration increases max_iterations from 6 to 10 and adds composio_list_connections and composio_authorize to the named tools allowlist. The welcome_proactive function now enriches context by fetching user profile and onboarding task data before publishing a draft message and running the agent with a 180-second timeout. Infrastructure functions are made crate-accessible for use by the new enrichment logic.

Changes

Cohort / File(s) Summary
Welcome Agent Configuration
src/openhuman/agent/agents/mod.rs, src/openhuman/agent/agents/welcome/agent.toml
Updated test to expect four named tools and ten max_iterations. Configuration increases max_iterations from 6 to 10 and expands named tools from complete_onboarding, memory_recall to include composio_list_connections, composio_authorize.
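The agent.toml change described above might look roughly like this (the two values are from the PR; the exact key names in the real file are assumptions):

```toml
# src/openhuman/agent/agents/welcome/agent.toml (sketch; exact keys may differ)
max_iterations = 10  # was 6
named_tools = [
  "complete_onboarding",
  "memory_recall",
  "composio_list_connections",
  "composio_authorize",
]
```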
Welcome Agent Prompt
src/openhuman/agent/agents/welcome/prompt.md
Comprehensive rework shifting from fixed 2-step to multi-step conversational flow. Added parallel Composio connection listing and user enrichment handling. Introduced dedicated Gmail OAuth step via composio_authorize when missing. Updated message framing, constraints around tool-failure messaging, and reduced output verbosity.
Proactive Welcome Logic
src/openhuman/agent/welcome_proactive.rs
Added 4-second initial delay, context enrichment (user profile fetch via HTTP, onboarding tasks load, Composio connections list), and instant draft message publication before agent invocation. Wrapped agent execution in 180-second timeout. Agent prompt now receives pre-built context to skip iteration 1.
Complete Onboarding Tool
src/openhuman/tools/impl/agent/complete_onboarding.rs
Enhanced snapshot with best-effort user_profile (via session-token-protected HTTP, 5s timeout) and onboarding_tasks (from local state). Enrichment failures gracefully omit fields without failing the tool call.
Internal Visibility Changes
src/openhuman/app_state/mod.rs, src/openhuman/app_state/ops.rs
Made ops module pub(crate) and exposed load_stored_app_state and fetch_current_user functions as pub(crate) to support welcome_proactive enrichment logic.
Proactive Delivery Channel
src/openhuman/channels/proactive.rs
Changed thread_id generation from per-job_name to constant "default-thread". Updated Socket.IO event type from "proactive_message" to "chat_done".

Sequence Diagram

sequenceDiagram
    participant ProactiveWelcome as run_proactive_welcome
    participant StateOps as State Ops
    participant HTTPClient as HTTP Client
    participant MessagePub as Message Publisher
    participant WelcomeAgent as Welcome Agent
    
    ProactiveWelcome->>ProactiveWelcome: Wait 4 seconds
    ProactiveWelcome->>StateOps: Load onboarding tasks (best-effort)
    ProactiveWelcome->>HTTPClient: Fetch user profile (token, 5s timeout)
    ProactiveWelcome->>HTTPClient: List Composio connections (5s timeout)
    ProactiveWelcome->>ProactiveWelcome: Build enriched context snapshot
    ProactiveWelcome->>MessagePub: Publish instant draft message (with firstName)
    ProactiveWelcome->>WelcomeAgent: Run agent with enriched context (180s timeout)
    WelcomeAgent->>WelcomeAgent: Skip iteration 1, write welcome
    ProactiveWelcome->>MessagePub: Publish final proactive message

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • senamakel

Poem

🐰 Once just a two-step hop down onboarding's lane,
Now four tools and contexts flow through the agent's brain,
Composio connections and user profiles blend,
Ten iterations to craft the perfect greeting friend,
A richer, warmer welcome awaits everyone! 💌✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title directly reflects the main changes: a personality-driven welcome agent with Gmail OAuth integration and instant greeting delivery.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required 80.00% threshold.



@coderabbitai coderabbitai Bot left a comment
Actionable comments posted: 3

🧹 Nitpick comments (1)
src/openhuman/agent/welcome_proactive.rs (1)

90-147: Extract the snapshot enrichment into one shared helper.

This is now the second copy of the onboarding_tasks/user_profile enrichment logic; the first lives in src/openhuman/tools/impl/agent/complete_onboarding.rs:222-282. Keeping both copies in sync on field names, timeout policy, and omission behavior will be fragile. Please move the enrichment into one shared function and have both the reactive and proactive welcome paths call it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/agent/welcome_proactive.rs` around lines 90 - 147, Extract the
duplicated enrichment logic into a single async helper (e.g.,
enrich_snapshot_with_user_and_tasks) that accepts a mutable JSON object/map and
&config, then perform the same steps currently duplicated: sync
load_stored_app_state to insert "onboarding_tasks" (using
serde_json::to_value(...).unwrap_or_default() and tracing::warn on load error),
get_session_token and tokio::time::timeout 5s to call fetch_current_user and
insert "user_profile" only on Ok(Ok(Some(user))) otherwise omit with
tracing::debug, and build_composio_client + timeout 5s to call
client.list_connections and insert "composio_connections" with
serde_json::to_value(...).unwrap_or_default() or omit with tracing::debug; keep
the same keys and omission behavior. Replace the duplicated blocks in
welcome_proactive.rs (around build_status_snapshot) and
src/openhuman/tools/impl/agent/complete_onboarding.rs (lines ~222-282) to call
this new helper, preserving async/await usage and logging semantics.
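The shared helper the nitpick asks for reduces to "merge best-effort fields, omit on failure"; a hedged sketch (keys mirror the review comment, values are plain strings here where the real code would use serde_json::Value and async fetches):

```rust
use std::collections::BTreeMap;

// Sketch of the suggested shared enrichment helper. Best-effort semantics:
// a failed fetch simply omits the field instead of erroring, matching the
// behavior described in the review. Signature and types are assumptions.
fn enrich_snapshot(
    snapshot: &mut BTreeMap<String, String>,
    user_profile: Option<String>,
    onboarding_tasks: Option<String>,
) {
    if let Some(profile) = user_profile {
        snapshot.insert("user_profile".to_string(), profile);
    }
    if let Some(tasks) = onboarding_tasks {
        snapshot.insert("onboarding_tasks".to_string(), tasks);
    }
}
```

Centralizing this keeps the reactive (complete_onboarding) and proactive (welcome_proactive) paths agreed on field names, timeouts, and omission behavior, which is exactly the drift the reviewer is worried about.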

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f6e98a0e-bfb6-4849-883b-6c44fa21f588

📥 Commits

Reviewing files that changed from the base of the PR and between 70a2a6f and 9304fa1.

📒 Files selected for processing (8)
  • src/openhuman/agent/agents/mod.rs
  • src/openhuman/agent/agents/welcome/agent.toml
  • src/openhuman/agent/agents/welcome/prompt.md
  • src/openhuman/agent/welcome_proactive.rs
  • src/openhuman/app_state/mod.rs
  • src/openhuman/app_state/ops.rs
  • src/openhuman/channels/proactive.rs
  • src/openhuman/tools/impl/agent/complete_onboarding.rs

Comment on lines 13 to 16
```
complete_onboarding({"action": "check_status"})
composio_list_connections({})
```
⚠️ Potential issue | 🟡 Minor

Add language identifiers to the fenced examples.

Both code blocks are unlabeled, which will keep markdownlint MD040 failing. Add a language like text to each fence.

Also applies to: 49-51

🧰 Tools
🪛 markdownlint-cli2 (0.22.0)

[warning] 13-13: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/agent/agents/welcome/prompt.md` around lines 13 - 16, The
markdown examples showing the calls complete_onboarding({"action":
"check_status"}) and composio_list_connections({}) use unlabeled fenced code
blocks; update each fence to include a language identifier (e.g., add ```text)
so markdownlint MD040 passes, and do the same for the other unlabeled fenced
examples of these calls elsewhere in the file (the second occurrence of the same
examples).

Comment on lines +178 to +180
tracing::info!(
"[welcome::proactive] instant draft published for user '{}'",
first_name
⚠️ Potential issue | 🟠 Major

Remove the first-name log.

first_name comes from /auth/me, so this info! writes user PII to logs on every onboarding run. Log presence/source instead, or redact the value. As per coding guidelines Never log secrets, API keys, JWTs, credentials, or full PII in Rust logs; redact or omit sensitive fields.
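The fix the reviewer suggests is to log presence, not the value; a minimal sketch (message text mirrors the flagged `tracing::info!` call, the helper itself is an assumption):

```rust
// Sketch of the suggested PII fix: record that a name was available
// without ever writing the name itself to the log stream.
fn draft_log_line(first_name: Option<&str>) -> String {
    let presence = if first_name.is_some() { "name present" } else { "name absent" };
    format!("[welcome::proactive] instant draft published ({presence})")
}
```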

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/agent/welcome_proactive.rs` around lines 178 - 180, The log
call using tracing::info! in the welcome::proactive path currently interpolates
first_name (PII); remove the first_name interpolation and either log a non-PII
indicator such as "user present" or the authentication source, or log a redacted
placeholder (e.g. "<redacted>") instead; update the tracing::info! invocation
that contains the message "instant draft published for user '{}'" and the
first_name argument so it no longer emits the raw first_name value.

Comment on lines 123 to 128
// 1. Always deliver to the web channel via Socket.IO.
// Emit as `chat_done` so the existing frontend chat handlers
// pick it up — no dedicated `proactive_message` listener needed.
publish_web_channel_event(WebChannelEvent {
-    event: "proactive_message".to_string(),
+    event: "chat_done".to_string(),
client_id: "system".to_string(),
⚠️ Potential issue | 🟠 Major

Don't repurpose chat_done for proactive broadcasts.

This still goes out through the shared system room, and src/core/socketio.rs:368-387 emits that event name verbatim to every socket in that room. app/src/services/chatService.ts:197-203 already reserves chat_done for normal inference completion, so proactive welcomes will now hit the regular chat-finished handlers and can interfere with streaming state or message routing. Keep a distinct proactive event name, or stop broadcasting it on the shared proactive channel.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/channels/proactive.rs` around lines 123 - 128, The code is
reusing the "chat_done" event and "system" client room for proactive broadcasts
which will trigger normal inference-completion handlers; change the
WebChannelEvent emitted by publish_web_channel_event in proactive.rs so it uses
a distinct event name (e.g., "proactive_message" or "proactive_welcome") and a
non-shared client_id instead of "system" (so the frontend won't treat it like a
regular chat_done/inference completion). Update the WebChannelEvent construction
(the event and client_id fields) and any consumers that expect proactive events
to the new name so proactive broadcasts do not hit the regular chat_done
handlers.

@graycyrus graycyrus marked this pull request as draft April 15, 2026 18:50
@graycyrus graycyrus marked this pull request as ready for review April 16, 2026 09:54
@graycyrus graycyrus changed the base branch from main to feat/agentic-onboarding April 16, 2026 09:55
@graycyrus graycyrus merged commit 1705699 into tinyhumansai:feat/agentic-onboarding Apr 16, 2026
7 of 8 checks passed
