feat(welcome): personality-driven onboarding agent with Gmail OAuth + instant greeting (#578)
… proactive delivery fixes
- Rewrite welcome prompt.md: charismatic personality, shameless name-asking, Gmail connection flow via composio_authorize, trimmed subscription upsell
- Add composio_list_connections + composio_authorize to welcome agent tools, bump max_iterations 6→10
- Enrich check_status JSON snapshot with user_profile (from /auth/me) and onboarding_tasks (from app-state.json), both best-effort with timeout
- Make app_state::ops pub(crate) so complete_onboarding can import helpers
- Fix proactive delivery: emit chat_done instead of proactive_message so existing frontend handlers render it; use default-thread for visibility
- welcome_proactive.rs: run full agent workflow (not a shortcut), 3s socket delay, 120s timeout guard on run_single
- Update welcome agent test assertions (4 tools, max_iterations=10)
- Fix model resolution in from_config_for_agent: respect the agent definition's ModelSpec hint instead of always using config.default_model. The welcome agent now correctly uses agentic-v1 (fast) instead of reasoning-v1 (slow).
- Pre-build the snapshot in welcome_proactive.rs: gather config + user profile + onboarding tasks + composio connections in Rust, skipping iteration-1 tool calls. One LLM call instead of two.
- Reduce socket delay 3s→1s, timeout 120s→60s.
- Prompt tone overhaul: ban technical jargon (SQLite, memory backend, tool names), enforce an 80-150 word limit, no capability lists, no Settings navigation instructions. Talk like a friend, not a product page.
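The model-resolution fix boils down to a pure fallback: prefer the agent's own hint, otherwise use the config default. A minimal sketch; `resolve_model` and its arguments are illustrative names, not the actual `from_config_for_agent` signature:

```rust
// Hypothetical sketch of per-agent model resolution. Model names
// (agentic-v1, reasoning-v1) come from the PR description.
fn resolve_model(agent_hint: Option<&str>, default_model: &str) -> String {
    // Prefer the agent definition's ModelSpec hint; fall back to the
    // config-wide default when the agent does not specify one.
    agent_hint.unwrap_or(default_model).to_string()
}
```

With this shape, the welcome agent's `Some("agentic-v1")` hint wins over a `"reasoning-v1"` default, while hint-less agents keep the old behavior.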
The agentic-v1 model takes 50-60s on its first call due to the large context (system prompt + resumed transcript history). Increase the timeout from 60s to 180s to prevent false timeouts while we investigate reducing context size.
The agentic-v1 endpoint is returning 504 gateway timeouts. Revert the model resolution change so all agents use config.default_model (reasoning-v1), which is slower but reliable. The per-agent model hint feature can be re-enabled once the backend gateway is stable.
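The timeout guard pattern described above can be sketched with a std-only stand-in (the real guard around run_single presumably uses tokio::time::timeout in async code; `run_with_timeout` here is a hypothetical illustration):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `work` on a worker thread and give up after `timeout`,
// returning None on expiry instead of hanging the caller.
fn run_with_timeout<T: Send + 'static>(
    timeout: Duration,
    work: impl FnOnce() -> T + Send + 'static,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Receiver may be gone if we already timed out; ignore the error.
        let _ = tx.send(work());
    });
    rx.recv_timeout(timeout).ok()
}
```

Raising the bound (60s → 180s) only changes the `timeout` argument; the slow first call still completes, it just no longer trips the guard.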
Composio is an internal integration name — users shouldn't see it. Strengthened the prompt rules to ban all internal names (Composio, SQLite, memory backend, model routes, etc.) and reworded the integration reference section header.
Publish a short template greeting ("Hey {name}! Welcome to OpenHuman — give me a sec...") immediately after fetching the user profile, before the LLM agent runs. The user sees something in chat within 1-2 seconds instead of waiting 30-50s for the full personalized message.
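The instant greeting is plain string templating over the fetched profile. A hedged sketch; `instant_draft` is a hypothetical helper name, and the template text is the one quoted in the commit message:

```rust
// Build the instant draft greeting shown before the LLM agent runs.
// Falls back to a name-free variant when the profile has no first name.
fn instant_draft(first_name: Option<&str>) -> String {
    match first_name {
        Some(name) => format!("Hey {name}! Welcome to OpenHuman — give me a sec..."),
        None => "Hey! Welcome to OpenHuman — give me a sec...".to_string(),
    }
}
```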
The 1s delay wasn't enough — frontend needs ~3-4s to mount the Conversations page and subscribe to socket events after the onboarding overlay closes. Draft message was arriving before the listener was ready.
📝 Walkthrough
The welcome agent is enhanced to support a richer, multi-step conversational flow with expanded tool access.
Sequence Diagram

```mermaid
sequenceDiagram
    participant ProactiveWelcome as run_proactive_welcome
    participant StateOps as State Ops
    participant HTTPClient as HTTP Client
    participant MessagePub as Message Publisher
    participant WelcomeAgent as Welcome Agent
    ProactiveWelcome->>ProactiveWelcome: Wait 4 seconds
    ProactiveWelcome->>StateOps: Load onboarding tasks (best-effort)
    ProactiveWelcome->>HTTPClient: Fetch user profile (token, 5s timeout)
    ProactiveWelcome->>HTTPClient: List Composio connections (5s timeout)
    ProactiveWelcome->>ProactiveWelcome: Build enriched context snapshot
    ProactiveWelcome->>MessagePub: Publish instant draft message (with firstName)
    ProactiveWelcome->>WelcomeAgent: Run agent with enriched context (180s timeout)
    WelcomeAgent->>WelcomeAgent: Skip iteration 1, write welcome
    ProactiveWelcome->>MessagePub: Publish final proactive message
```
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/openhuman/agent/welcome_proactive.rs (1)
90-147: Extract the snapshot enrichment into one shared helper.
This is now the second copy of the `onboarding_tasks`/`user_profile` enrichment logic; the first lives in `src/openhuman/tools/impl/agent/complete_onboarding.rs:222-282`. Keeping both copies in sync on field names, timeout policy, and omission behavior will be fragile. Please move the enrichment into one shared function and have both the reactive and proactive welcome paths call it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/welcome_proactive.rs` around lines 90 - 147, Extract the duplicated enrichment logic into a single async helper (e.g., enrich_snapshot_with_user_and_tasks) that accepts a mutable JSON object/map and &config, then perform the same steps currently duplicated: sync load_stored_app_state to insert "onboarding_tasks" (using serde_json::to_value(...).unwrap_or_default() and tracing::warn on load error), get_session_token and tokio::time::timeout 5s to call fetch_current_user and insert "user_profile" only on Ok(Ok(Some(user))) otherwise omit with tracing::debug, and build_composio_client + timeout 5s to call client.list_connections and insert "composio_connections" with serde_json::to_value(...).unwrap_or_default() or omit with tracing::debug; keep the same keys and omission behavior. Replace the duplicated blocks in welcome_proactive.rs (around build_status_snapshot) and src/openhuman/tools/impl/agent/complete_onboarding.rs (lines ~222-282) to call this new helper, preserving async/await usage and logging semantics.
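A simplified, synchronous sketch of the shared helper the review suggests. `enrich_snapshot` and the String-valued map are stand-ins for the real async function, which would operate on a serde_json map, take `&config`, and wrap the profile and connection fetches in 5s timeouts:

```rust
use std::collections::BTreeMap;

// Stand-in for the suggested enrich_snapshot_with_user_and_tasks helper.
// Mirrors the omission behavior both call sites share: onboarding_tasks
// is always inserted (defaulting on error), while user_profile and
// composio_connections are omitted entirely when their fetch fails.
fn enrich_snapshot(
    snapshot: &mut BTreeMap<String, String>,
    onboarding_tasks: Option<String>,
    user_profile: Option<String>,
    composio_connections: Option<String>,
) {
    snapshot.insert(
        "onboarding_tasks".to_string(),
        onboarding_tasks.unwrap_or_default(),
    );
    if let Some(user) = user_profile {
        snapshot.insert("user_profile".to_string(), user);
    }
    if let Some(conns) = composio_connections {
        snapshot.insert("composio_connections".to_string(), conns);
    }
}
```

With one helper, the reactive path (complete_onboarding) and the proactive path (welcome_proactive) cannot drift on key names or timeout policy.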
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/openhuman/agent/agents/welcome/prompt.md`:
- Around line 13-16: The markdown examples showing the calls
complete_onboarding({"action": "check_status"}) and
composio_list_connections({}) use unlabeled fenced code blocks; update each
fence to include a language identifier (e.g., add ```text) so markdownlint MD040
passes, and do the same for the other unlabeled fenced examples of these calls
elsewhere in the file (the second occurrence of the same examples).
In `@src/openhuman/agent/welcome_proactive.rs`:
- Around line 178-180: The log call using tracing::info! in the
welcome::proactive path currently interpolates first_name (PII); remove the
first_name interpolation and either log a non-PII indicator such as "user
present" or the authentication source, or log a redacted placeholder (e.g.
"<redacted>") instead; update the tracing::info! invocation that contains the
message "instant draft published for user '{}'" and the first_name argument so
it no longer emits the raw first_name value.
In `@src/openhuman/channels/proactive.rs`:
- Around line 123-128: The code is reusing the "chat_done" event and "system"
client room for proactive broadcasts which will trigger normal
inference-completion handlers; change the WebChannelEvent emitted by
publish_web_channel_event in proactive.rs so it uses a distinct event name
(e.g., "proactive_message" or "proactive_welcome") and a non-shared client_id
instead of "system" (so the frontend won't treat it like a regular
chat_done/inference completion). Update the WebChannelEvent construction (the
event and client_id fields) and any consumers that expect proactive events to
the new name so proactive broadcasts do not hit the regular chat_done handlers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f6e98a0e-bfb6-4849-883b-6c44fa21f588
📒 Files selected for processing (8)
- src/openhuman/agent/agents/mod.rs
- src/openhuman/agent/agents/welcome/agent.toml
- src/openhuman/agent/agents/welcome/prompt.md
- src/openhuman/agent/welcome_proactive.rs
- src/openhuman/app_state/mod.rs
- src/openhuman/app_state/ops.rs
- src/openhuman/channels/proactive.rs
- src/openhuman/tools/impl/agent/complete_onboarding.rs
```text
complete_onboarding({"action": "check_status"})
composio_list_connections({})
```
Add language identifiers to the fenced examples.
Both code blocks are unlabeled, which will keep markdownlint MD040 failing. Add a language like text to each fence.
Also applies to: 49-51
🧰 Tools
🪛 markdownlint-cli2 (0.22.0)
[warning] 13-13: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```rust
tracing::info!(
    "[welcome::proactive] instant draft published for user '{}'",
    first_name
```
Remove the first-name log.
first_name comes from /auth/me, so this info! writes user PII to logs on every onboarding run. Log presence/source instead, or redact the value. Per the coding guidelines: never log secrets, API keys, JWTs, credentials, or full PII in Rust logs; redact or omit sensitive fields.
```diff
 // 1. Always deliver to the web channel via Socket.IO.
 // Emit as `chat_done` so the existing frontend chat handlers
 // pick it up — no dedicated `proactive_message` listener needed.
 publish_web_channel_event(WebChannelEvent {
-    event: "proactive_message".to_string(),
+    event: "chat_done".to_string(),
     client_id: "system".to_string(),
```
Don't repurpose chat_done for proactive broadcasts.
This still goes out through the shared system room, and src/core/socketio.rs:368-387 emits that event name verbatim to every socket in that room. app/src/services/chatService.ts:197-203 already reserves chat_done for normal inference completion, so proactive welcomes will now hit the regular chat-finished handlers and can interfere with streaming state or message routing. Keep a distinct proactive event name, or stop broadcasting it on the shared proactive channel.
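The suggested fix reduces to constructing the event with a distinct name and a non-shared client id. The struct below mirrors only the two fields visible in the diff, and `proactive_event` is an illustrative helper:

```rust
// Minimal mirror of the two WebChannelEvent fields shown in the diff.
struct WebChannelEvent {
    event: String,
    client_id: String,
}

// Build a proactive broadcast with its own event name so the frontend's
// chat_done (inference-completion) handlers never fire for it, and with
// the caller's client id instead of the shared "system" room.
fn proactive_event(client_id: &str) -> WebChannelEvent {
    WebChannelEvent {
        event: "proactive_message".to_string(),
        client_id: client_id.to_string(),
    }
}
```

The frontend would then need a dedicated `proactive_message` listener, which is the trade-off the PR originally tried to avoid by reusing chat_done.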
1705699 merged into tinyhumansai:feat/agentic-onboarding
Summary
Test plan
- `chat_onboarding_completed = false` in config.toml
Summary by CodeRabbit
New Features
Improvements