Drop in one React Native component and your app gets AI support that answers questions, navigates users to the right screen, fills forms, and resolves issues end-to-end — with live human backup when needed. No custom API connectors required — the app UI is already the integration.
Two names, one package — pick whichever you prefer:
npm install @mobileai/react-native
# — or —
npm install react-native-agentic-ai
⭐ If this helped you, star this repo — it helps others find it!
Intercom, Zendesk, and every chat widget all do the same thing: send the user instructions in a chat bubble.
"To cancel your order, go to Orders, tap the order, then tap Cancel."
That's not support. That's documentation delivery with a chat UI.
This SDK takes a different approach. Instead of telling users where to go, it — with the user's permission — goes there for them.
Every other support tool needs you to build API connectors: endpoints, webhooks, action definitions in their dashboard. Months of backend work before the AI can do anything useful.
This SDK reads your app's live UI natively — every button, label, input, and screen — in real time. There's nothing to integrate. The UI is already the integration. The app already knows how to cancel orders, update addresses, apply promo codes — it has buttons for all of it. The AI just uses them.
No OCR. No image pipelines. No selectors. No annotations. No backend connectors.
The most important insight: UI control is only uncomfortable when it's unexpected. In a support conversation, the user has already asked for help — they're in a "please help me" mindset:
| Context | User reaction to AI controlling UI |
|---|---|
| Unprompted (out of nowhere) | 😨 "What is happening?" |
| In a support chat — user asked for help | 😊 "Yes please, do it for me" |
| User is frustrated and types "how do I..." | 😮💨 "Thank God, yes" |
The SDK handles every tier of support automatically — from a simple FAQ answer to live human chat:
┌──────────────────────────────────────────────────────┐
│ Level 1: Knowledge Answer │
│ Answers from knowledge base — instant, zero UI │
│ "What's your return policy?" → answered directly │
├──────────────────────────────────────────────────────┤
│ Level 2: Show & Guide │
│ AI navigates to exact screen, user acts last │
│ "Settings → Notifications. It's right here. ☘️" │
├──────────────────────────────────────────────────────┤
│ Level 3: Do & Confirm (Copilot — default) │
│ AI fills forms, user confirms the final action │
│ "I've typed your new address. Tap Save to confirm." │
├──────────────────────────────────────────────────────┤
│ Level 4: Full Resolution (Autopilot) │
│ AI completes entire flow with one permission gate │
│ "Done! Order #4521 cancelled. Refund in 3-5 days." │
├──────────────────────────────────────────────────────┤
│ Level 5: Human Escalation │
│ Live agent via WebSocket — seamless handoff │
│ Billing dispute, legal issues, edge cases │
└──────────────────────────────────────────────────────┘
Level 1 — Knowledge Answer
User: "What's your return policy?"
AI: "You can return items within 30 days. Full refund in 5-7 business days."
[from knowledge base — no UI needed]
Level 2 — Show & Guide
User: "How do I turn on price alerts?"
AI: "That's in Settings → Notifications. Want me to take you there?"
User: "Yes"
AI: [navigates to Settings → scrolls to Notifications section]
AI: "Here it is. The Price Alerts toggle is right here. ☘️"
[user taps the toggle themselves]
Level 3 — Do & Confirm (Copilot, default)
User: "I need to change my shipping address"
AI: "Sure! I'll navigate to your address settings."
[navigates to Profile → Address]
AI: "What's the new address?"
User: "15 Omar St, Nasr City, Cairo"
AI: [types the address into the fields]
AI: "I've filled in '15 Omar St, Nasr City, Cairo'. Tap Save to confirm."
[user taps Save themselves]
Level 4 — Full Resolution (Autopilot)
User: "Cancel my last order"
AI: "Order #4521 — Margherita pizza, $12.99. Want me to cancel it?"
User: "Yes"
AI: [navigates to Orders → finds #4521 → taps Cancel → taps Confirm]
AI: "Done! Order #4521 is cancelled. Refund arrives in 3-5 days."
Level 5 — Human Escalation
User: "I was charged twice for my order"
AI: "I can see your order history, but billing disputes need a human agent."
[triggers escalate → live agent chat via WebSocket]
| | Intercom Fin | Zendesk AI | This SDK |
|---|---|---|---|
| Answer questions | ✅ | ✅ | ✅ Knowledge base |
| Navigate user to right screen | ❌ | ❌ | ✅ App-aware navigation |
| Fill forms for the user | ❌ | ❌ | ✅ Types directly into fields |
| Execute in-app actions | Via API connectors (must build) | Via API connectors | ✅ Via UI — zero backend work |
| Voice support | ❌ | ❌ | ✅ Gemini Live |
| Human escalation | ✅ | ✅ | ✅ WebSocket live chat |
| Mobile-native | ❌ WebView overlay | ❌ WebView | ✅ React Native component |
| Setup time | Days–weeks (build connectors) | Days–weeks | Minutes (<AIAgent> wrapper) |
| Price per resolution | $0.99 + subscription | $1.50–2.00 | You decide |
No competitor can do Levels 2–4. Intercom and Zendesk answer questions (Level 1) and escalate to humans (Level 5). The middle — app-aware navigation, form assistance, and full in-app resolution — is uniquely possible because this SDK reads the React Native Fiber tree. That can't be added with a plugin or API connector.
The AI answers questions, guides users to the right screen, fills forms on their behalf, or completes full task flows — with voice support and human escalation built in. All in the existing app UI. Zero backend integration.
- Zero-config — wrap your app with `<AIAgent>`, done. No annotations, no selectors, no API connectors
- 5-level resolution — knowledge answer → guided navigation → copilot → full resolution → human escalation
- Copilot mode (default) — AI pauses once before irreversible actions (order, delete, submit). User always stays in control
- Human escalation — live chat via WebSocket, CSAT survey, ticket dashboard — all built in
- Knowledge base — policies, FAQs, product data queried on demand — no token waste
Full bidirectional voice AI powered by the Gemini Live API. Users speak their support request; the agent responds with voice AND navigates, fills forms, and resolves issues simultaneously.
- Sub-second latency — real-time audio via WebSockets, not turn-based
- Full resolution — same navigate, type, tap as text mode — all by voice
- Screen-aware — auto-detects screen changes and updates context instantly
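To turn it on, a minimal sketch assuming the `enableVoice` prop from the props reference below (plus the voice-mode install steps in Installation; `navRef` and `<App />` are your own, as in the quick-start examples):

```tsx
import { AIAgent } from '@mobileai/react-native';

// Voice mode needs react-native-audio-api + mic permissions (see Installation).
<AIAgent
  apiKey="YOUR_API_KEY" // prototyping only — use proxyUrl in production
  enableVoice={true}    // adds the voice mode tab next to text chat
  navRef={navRef}
>
  <App />
</AIAgent>
```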
> 💡 Speech-to-text in text mode: install `expo-speech-recognition` for a mic button in the chat bar — letting users dictate instead of typing. Separate from voice mode.
Every useAction you register automatically becomes a Siri shortcut and Spotlight action. One config plugin added at build time — no Swift required — and users can say:
"Hey Siri, track my order in MyApp" "Hey Siri, checkout in MyApp" "Hey Siri, cancel my last order in MyApp"
Setup — Expo Config Plugin
// app.json
{
"expo": {
"plugins": [
["@mobileai/react-native/withAppIntents", {
"scanDirectory": "src",
"appScheme": "myapp"
}]
]
}
}

After `npx expo prebuild`, every registered `useAction` is available in Siri and Spotlight automatically.
Or generate manually:
# Scan useAction calls → intent-manifest.json
npx @mobileai/react-native generate-intents src
# Generate Swift AppIntents code
npx @mobileai/react-native generate-swift intent-manifest.json myapp
⚠️ iOS 16+ only. Android equivalent (Google Assistant App Actions) is on the roadmap.
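For illustration, a hypothetical `useAction` registration that would surface as a Siri shortcut after prebuild — the action name, Siri phrase, and `fetchLatestOrder` helper are made up; the hook signature follows the useAction section below:

```tsx
import { useAction } from '@mobileai/react-native';

function OrdersScreen() {
  // Hypothetical: after `npx expo prebuild`, users could say
  // "Hey Siri, track my order in MyApp" to invoke this action.
  useAction(
    'track_order',
    'Track the status of the most recent order',
    {},
    async () => {
      const order = await fetchLatestOrder(); // your own data layer (hypothetical)
      return { success: true, message: `Order ${order.id} is ${order.status}` };
    },
    []
  );
  return null; // render your screen as usual
}
```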
Your app becomes MCP-compatible with one prop. Connect any AI — Antigravity, Claude Desktop, CI/CD pipelines — to remotely read and control the running app. Find bugs without writing a single test.
MCP-only mode — just want testing? No chat popup needed:
<AIAgent
showChatBar={false}
mcpServerUrl="ws://localhost:3101"
apiKey="YOUR_KEY"
navRef={navRef}
>
<App />
</AIAgent>

The most powerful use case: test your app without writing test code. Connect your AI (Antigravity, Claude Desktop, or any MCP client) to the emulator and describe what to check — in English. No selectors to maintain, no flaky tests, self-healing by design.
Skip the test framework. Just ask:
Ad-hoc — ask your AI anything about the running app:
"Is the Laptop Stand price consistent between the home screen and the product detail page?"
YAML Test Plans — commit reusable checks to your repo:
# tests/smoke.yaml
checks:
- id: price-sync
check: "Read the Laptop Stand price on home, tap it, compare with detail page"
- id: profile-email
check: "Go to Profile tab. Is the email displayed under the user's name?"Then tell your AI: "Read tests/smoke.yaml and run each check on the emulator"
Real Results — 5 bugs found autonomously:
| # | What was checked | Bug found | AI steps |
|---|---|---|---|
| 1 | Price consistency (list → detail) | Laptop Stand: $45.99 vs $49.99 | 2 |
| 2 | Profile completeness | Email missing — only name shown | 2 |
| 3 | Settings navigation | Help Center missing from Support section | 2 |
| 4 | Description vs specifications | "breathable mesh" vs "Leather Upper" | 3 |
| 5 | Cross-screen price sync | Yoga Mat: $39.99 vs $34.99 | 4 |
Two names, one package — pick whichever you prefer:
npm install @mobileai/react-native
# — or —
npm install react-native-agentic-ai

No native modules required by default. Works with the Expo managed workflow out of the box — no eject needed.
📸 Screenshots — for image/video content understanding
npx expo install react-native-view-shot

🎙️ Speech-to-Text in Text Mode — dictate messages instead of typing
npx expo install expo-speech-recognition

Automatically detected. No extra config needed — a mic icon appears in the text chat bar, letting users speak their message instead of typing. This is separate from voice mode.
🎤 Voice Mode — real-time bidirectional voice agent
npm install react-native-audio-api

Expo Managed — add to app.json:
{
"expo": {
"android": { "permissions": ["RECORD_AUDIO", "MODIFY_AUDIO_SETTINGS"] },
"ios": { "infoPlist": { "NSMicrophoneUsageDescription": "Required for voice chat with AI assistant" } }
}
}

Then rebuild: `npx expo prebuild && npx expo run:android` (or `run:ios`)
Expo Bare / React Native CLI — add RECORD_AUDIO + MODIFY_AUDIO_SETTINGS to AndroidManifest.xml and NSMicrophoneUsageDescription to Info.plist, then rebuild.
Hardware echo cancellation (AEC) is automatically enabled — no extra setup.
💬 Human Support & Ticket Persistence — persist tickets and discovery tooltip state across sessions
npx expo install @react-native-async-storage/async-storage

Optional but recommended when using:
- Human escalation support — tickets survive app restarts
- Discovery tooltip — remembers if the user has already seen it
Without it, both features gracefully degrade: tickets are only visible during the current session, and the tooltip shows every launch instead of once.
Add one line to your metro.config.js — the AI gets a map of every screen in your app, auto-generated on each dev start:
// metro.config.js
require('@mobileai/react-native/generate-map').autoGenerate(__dirname);

Or generate it manually anytime:
npx @mobileai/react-native generate-map

Without this, the AI can only see the currently mounted screen — it has no idea what other screens exist or how to reach them. Example: "Write a review for the Laptop Stand" — the AI sees the Home screen but doesn't know a `WriteReview` screen exists 3 levels deep. With a map, it sees every screen in your app and knows exactly how to get there: `Home → Products → Detail → Reviews → WriteReview`.
import { AIAgent } from '@mobileai/react-native'; // or 'react-native-agentic-ai'
import { NavigationContainer, useNavigationContainerRef } from '@react-navigation/native';
import screenMap from './ai-screen-map.json'; // auto-generated by step 1
export default function App() {
const navRef = useNavigationContainerRef();
return (
<AIAgent
// ⚠️ Prototyping ONLY — don't ship API keys in production
apiKey="YOUR_API_KEY"
// ✅ Production: route through your secure backend proxy
// proxyUrl="https://api.yourdomain.com/ai-proxy"
// proxyHeaders={{ Authorization: `Bearer ${userToken}` }}
navRef={navRef}
screenMap={screenMap} // optional but recommended
>
<NavigationContainer ref={navRef}>
{/* Your existing screens — zero changes needed */}
</NavigationContainer>
</AIAgent>
);
}

In your root layout (app/_layout.tsx):
import { AIAgent } from '@mobileai/react-native'; // or 'react-native-agentic-ai'
import { Slot, useNavigationContainerRef } from 'expo-router';
import screenMap from './ai-screen-map.json'; // auto-generated by step 1
export default function RootLayout() {
const navRef = useNavigationContainerRef();
return (
<AIAgent
apiKey={process.env.AI_API_KEY!}
navRef={navRef}
screenMap={screenMap}
>
<Slot />
</AIAgent>
);
}

The examples above use Gemini (default). To use OpenAI for text mode, add the `provider` prop. Voice mode is not supported with OpenAI.
<AIAgent
provider="openai"
apiKey="YOUR_OPENAI_API_KEY"
// model="gpt-4.1-mini" ← default, or use any OpenAI model
navRef={navRef}
>
{/* Same app, different brain */}
</AIAgent>

A floating chat bar appears automatically. Ask the AI to navigate, tap buttons, fill forms, answer questions.
Set enableUIControl={false} for a lightweight FAQ / support assistant. Single LLM call, ~70% fewer tokens:
<AIAgent enableUIControl={false} knowledgeBase={KNOWLEDGE} />

| | Full Agent (default) | Knowledge-Only |
|---|---|---|
| UI analysis | ✅ Full structure read | ❌ Skipped |
| Tokens per request | ~500-2000 | ~200 |
| Agent loop | Up to 25 steps | Single call |
| Tools available | 7 | 2 (done, query_knowledge) |
The agent operates in copilot mode by default. It navigates, scrolls, types, and fills forms silently — then pauses once before the final irreversible action (place order, delete account, submit payment) to ask the user for confirmation.
// Default — copilot mode, zero extra config:
<AIAgent apiKey="..." navRef={navRef}>
<App />
</AIAgent>

What the AI does silently:
- Navigating between screens and tabs
- Scrolling to find content
- Typing into form fields
- Selecting options and filters
- Adding items to cart
What the AI pauses on (asks the user first):
- Placing an order / completing a purchase
- Submitting a form that sends data to a server
- Deleting anything (account, item, message)
- Confirming a payment or transaction
- Saving account/profile changes
<AIAgent interactionMode="autopilot" />

Use autopilot for power users, accessibility tools, or repeat-task automation where confirmations are unwanted.
In copilot mode, the prompt handles ~95% of cases automatically. For extra safety on your most sensitive buttons, add aiConfirm={true} — this adds a code-level block that cannot be bypassed even if the LLM ignores the prompt:
// These elements will ALWAYS require confirmation before the AI touches them
<Pressable aiConfirm onPress={deleteAccount}>
<Text>Delete Account</Text>
</Pressable>
<Pressable aiConfirm onPress={placeOrder}>
<Text>Place Order</Text>
</Pressable>
<TextInput aiConfirm placeholder="Credit card number" />

`aiConfirm` works on any interactive element: `Pressable`, `TextInput`, `Slider`, `Picker`, `Switch`, `DatePicker`.
> 💡 Dev tip: In `__DEV__` mode, the SDK logs a reminder to add `aiConfirm` to critical elements after each copilot task.
| Layer | Mechanism | Developer effort |
|---|---|---|
| Prompt (primary) | AI uses `ask_user` before irreversible commits | Zero |
| `aiConfirm` prop (optional safety net) | Code blocks specific elements | Add prop to 2–3 critical buttons |
| Dev warning (preventive) | Logs tip in `__DEV__` mode | Zero |
Transform the AI agent into a production-grade support system. The AI resolves issues directly inside your app UI — no backend API integrations required. When it can't help, it escalates to a live human agent.
import { SupportGreeting, buildSupportPrompt, createEscalateTool } from '@mobileai/react-native';
<AIAgent
apiKey="..."
analyticsKey="mobileai_pub_xxx" // required for MobileAI escalation
instructions={{
system: buildSupportPrompt({
enabled: true,
greeting: {
message: "Hi! 👋 How can I help you today?",
agentName: "Support",
},
quickReplies: [
{ label: "Track my order", icon: "📦" },
{ label: "Cancel order", icon: "❌" },
{ label: "Talk to a human", icon: "👤" },
],
escalation: { provider: 'mobileai' },
csat: { enabled: true },
}),
}}
customTools={{ escalate: createEscalateTool({ provider: 'mobileai' }) }}
userContext={{
userId: user.id,
name: user.name,
email: user.email,
plan: 'pro',
}}
>
<App />
</AIAgent>

- AI creates a ticket in the MobileAI Dashboard inbox
- User receives a real-time live chat thread (WebSocket)
- Support agent replies — user sees messages instantly
- Ticket is closed when resolved — a CSAT survey appears
| Provider | What happens |
|---|---|
| `'mobileai'` | Ticket → MobileAI Dashboard inbox + WebSocket live chat |
| `'custom'` | Calls your `onEscalate` callback — wire to Intercom, Zendesk, etc. |
// Custom provider — bring your own live chat:
createEscalateTool({
provider: 'custom',
onEscalate: (context) => {
Intercom.presentNewConversation();
// context includes: userId, message, screenName, chatHistory
},
})

Pass user identity to the escalation ticket for agent visibility in the dashboard:
<AIAgent
userContext={{
userId: 'usr_123',
name: 'Ahmed Hassan',
email: 'ahmed@example.com',
plan: 'pro',
custom: { region: 'cairo', language: 'ar' },
}}
pushToken={expoPushToken} // for offline support reply notifications
pushTokenType="expo" // 'fcm' | 'expo' | 'apns'
/>

Render the support greeting independently if you have a custom chat UI:
import { SupportGreeting } from '@mobileai/react-native';
<SupportGreeting
message="Hi! 👋 How can I help?"
agentName="Support"
quickReplies={[
{ label: 'Track order', icon: '📦' },
{ label: 'Talk to human', icon: '👤' },
]}
onQuickReply={(text) => send(text)}
/>By default, the AI navigates by reading what's on screen and tapping visible elements. Screen mapping gives the AI a complete map of every screen and how they connect — via static analysis of your source code (AST). No API key needed, runs in ~2 seconds.
Add to your metro.config.js — the screen map auto-generates every time Metro starts:
// metro.config.js
require('@mobileai/react-native/generate-map').autoGenerate(__dirname);
// ... rest of your Metro config

Then pass the generated map to <AIAgent>:
import screenMap from './ai-screen-map.json';
<AIAgent screenMap={screenMap} navRef={navRef}>
<App />
</AIAgent>

That's it. Works with both Expo Router and React Navigation — auto-detected.
| Without Screen Map | With Screen Map |
|---|---|
| AI sees only the current screen | AI knows every screen in your app |
| Must explore to find features | Plans the full navigation path upfront |
| Deep screens may be unreachable | Knows each screen's navigatesTo links |
| No knowledge of dynamic routes | Understands item/[id], category/[id] patterns |
To disable the map temporarily without removing the prop:

<AIAgent screenMap={screenMap} useScreenMap={false} />

Advanced: Watch mode, CLI options, and npm scripts
Manual generation:
npx @mobileai/react-native generate-map

Watch mode — auto-regenerates on file changes:
npx @mobileai/react-native generate-map --watch

npm scripts — auto-run before start/build:
{
"scripts": {
"generate-map": "npx @mobileai/react-native generate-map",
"prestart": "npm run generate-map",
"prebuild": "npm run generate-map"
}
}

| Flag | Description |
|---|---|
| `--watch`, `-w` | Watch for file changes and auto-regenerate |
| `--dir=./path` | Custom project directory |
> 💡 The generated `ai-screen-map.json` is committed to your repo — no runtime cost.
Give the AI domain knowledge it can query on demand — policies, FAQs, product details. Uses a query_knowledge tool to fetch only relevant entries (no token waste).
import type { KnowledgeEntry } from '@mobileai/react-native'; // or 'react-native-agentic-ai'
const KNOWLEDGE: KnowledgeEntry[] = [
{
id: 'shipping',
title: 'Shipping Policy',
content: 'Free shipping on orders over $75. Standard: 5-7 days. Express: 2-3 days.',
tags: ['shipping', 'delivery'],
},
{
id: 'returns',
title: 'Return Policy',
content: '30-day returns on all items. Refunds in 5-7 business days.',
tags: ['return', 'refund'],
screens: ['product/[id]', 'order-history'], // only surface on these screens
},
];
<AIAgent knowledgeBase={KNOWLEDGE} />

Or fetch entries dynamically from your own backend:

<AIAgent
knowledgeBase={{
retrieve: async (query: string, screenName?: string) => {
const results = await fetch(`/api/knowledge?q=${query}&screen=${screenName}`);
return results.json();
},
}}
/>┌──────────────────┐ ┌──────────────────┐ WebSocket ┌──────────────────┐
│ Antigravity │ Streamable HTTP │ │ │ │
│ Claude Desktop │ ◄──────────────► │ @mobileai/ │ ◄─────────────► │ Your React │
│ or any MCP │ (port 3100) │ mcp-server │ (port 3101) │ Native App │
│ compatible AI │ + Legacy SSE │ │ │ │
└──────────────────┘ └──────────────────┘ └──────────────────┘
1. Start the MCP bridge — no install needed:
npx @mobileai/mcp-server

2. Connect your React Native app:
<AIAgent
apiKey="YOUR_API_KEY"
mcpServerUrl="ws://localhost:3101"
/>

3. Connect your AI:
Google Antigravity
Add to ~/.gemini/antigravity/mcp_config.json:
{
"mcpServers": {
"mobile-app": {
"command": "npx",
"args": ["@mobileai/mcp-server"]
}
}
}

Click Refresh in MCP Store. You'll see `mobile-app` with 2 tools: `execute_task` and `get_app_status`.
Claude Desktop
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"mobile-app": {
"url": "http://localhost:3100/mcp/sse"
}
}
}

Other MCP Clients
- Streamable HTTP: `http://localhost:3100/mcp`
- Legacy SSE: `http://localhost:3100/mcp/sse`
| Tool | Description |
|---|---|
| `execute_task(command)` | Send a natural language command to the app |
| `get_app_status()` | Check if the React Native app is connected |
| Variable | Default | Description |
|---|---|---|
| `MCP_PORT` | `3100` | HTTP port for MCP clients |
| `WS_PORT` | `3101` | WebSocket port for the React Native app |
| Prop | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | — | API key for your provider (prototyping only — use `proxyUrl` in production). |
| `provider` | `'gemini' \| 'openai'` | `'gemini'` | LLM provider for text mode. |
| `proxyUrl` | `string` | — | Backend proxy URL (production). Routes all LLM traffic through your server. |
| `proxyHeaders` | `Record<string, string>` | — | Auth headers for proxy (e.g. `Authorization: Bearer ${token}`). |
| `voiceProxyUrl` | `string` | — | Dedicated proxy for Voice Mode WebSockets. Falls back to `proxyUrl`. |
| `voiceProxyHeaders` | `Record<string, string>` | — | Auth headers for voice proxy. |
| `model` | `string` | Provider default | Model name (e.g. `gemini-2.5-flash`, `gpt-4.1-mini`). |
| `navRef` | `NavigationContainerRef` | — | Navigation ref for auto-navigation. |
| `children` | `ReactNode` | — | Your app — zero changes needed inside. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `interactionMode` | `'copilot' \| 'autopilot'` | `'copilot'` | Copilot (default): AI pauses before irreversible actions. Autopilot: full autonomy, no confirmation. |
| `showDiscoveryTooltip` | `boolean` | `true` | Show one-time animated tooltip on FAB explaining AI capabilities. Dismissed after 6s or first tap. |
| `maxSteps` | `number` | `25` | Max agent steps per task. |
| `maxTokenBudget` | `number` | — | Max total tokens before auto-stopping the agent loop. |
| `maxCostUSD` | `number` | — | Max estimated cost (USD) before auto-stopping. |
| `stepDelay` | `number` | — | Delay between agent steps in ms. |
| `enableUIControl` | `boolean` | `true` | When `false`, AI becomes knowledge-only (faster, fewer tokens). |
| `enableVoice` | `boolean` | `false` | Show voice mode tab. |
| `showChatBar` | `boolean` | `true` | Show the floating chat bar. |
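A sketch combining the budget guardrails above — the prop names come from this table, the thresholds are arbitrary:

```tsx
<AIAgent
  apiKey="YOUR_KEY"
  navRef={navRef}
  maxSteps={15}          // cap agent steps per task
  maxTokenBudget={20000} // auto-stop once total tokens cross this
  maxCostUSD={0.05}      // auto-stop once estimated spend crosses 5¢
  stepDelay={250}        // ms between steps — easier to watch in dev
>
  <App />
</AIAgent>
```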
| Prop | Type | Default | Description |
|---|---|---|---|
| `screenMap` | `ScreenMap` | — | Pre-generated screen map from the `generate-map` CLI. |
| `useScreenMap` | `boolean` | `true` | Set `false` to disable the screen map without removing the prop. |
| `router` | `{ push, replace, back }` | — | Expo Router instance (from `useRouter()`). |
| `pathname` | `string` | — | Current pathname (from `usePathname()` — Expo Router). |
| Prop | Type | Default | Description |
|---|---|---|---|
| `instructions` | `{ system?, getScreenInstructions? }` | — | Custom system prompt + per-screen instructions. |
| `customTools` | `Record<string, ToolDefinition \| null>` | — | Add custom tools or remove built-in ones (set to `null`). |
| `knowledgeBase` | `KnowledgeEntry[] \| { retrieve }` | — | Domain knowledge the AI can query via `query_knowledge`. |
| `knowledgeMaxTokens` | `number` | `2000` | Max tokens for knowledge results. |
| `transformScreenContent` | `(content: string) => string` | — | Transform/mask screen content before the LLM sees it. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `interactiveBlacklist` | `React.RefObject<any>[]` | — | Refs of elements the AI must NOT interact with. |
| `interactiveWhitelist` | `React.RefObject<any>[]` | — | If set, AI can ONLY interact with these elements. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `userContext` | `{ userId?, name?, email?, plan?, custom? }` | — | Logged-in user identity — attached to escalation tickets. |
| `pushToken` | `string` | — | Push token for offline support reply notifications. |
| `pushTokenType` | `'fcm' \| 'expo' \| 'apns'` | — | Type of the push token. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `proactiveHelp` | `ProactiveHelpConfig` | — | Detects user hesitation and shows a contextual help nudge. |
<AIAgent
proactiveHelp={{
enabled: true,
pulseAfterMinutes: 2, // subtle FAB pulse to catch attention
badgeAfterMinutes: 4, // badge: "Need help with this screen?"
badgeText: "Need help?",
dismissForSession: true, // once dismissed, won't show again this session
generateSuggestion: (screen) => {
if (screen === 'Checkout') return 'Having trouble with checkout?';
return undefined;
},
}}
/>

| Prop | Type | Default | Description |
|---|---|---|---|
| `analyticsKey` | `string` | — | Publishable key (`mobileai_pub_xxx`) — enables auto-analytics. |
| `analyticsProxyUrl` | `string` | — | Enterprise: route events through your backend. |
| `analyticsProxyHeaders` | `Record<string, string>` | — | Auth headers for analytics proxy. |
| Prop | Type | Default | Description |
|---|---|---|---|
| `mcpServerUrl` | `string` | — | WebSocket URL for the MCP bridge (e.g. `ws://localhost:3101`). |
| Prop | Type | Default | Description |
|---|---|---|---|
| `onResult` | `(result) => void` | — | Called when agent finishes a task. |
| `onBeforeTask` | `() => void` | — | Called before task execution starts. |
| `onAfterTask` | `(result) => void` | — | Called after task completes. |
| `onBeforeStep` | `(stepCount) => void` | — | Called before each agent step. |
| `onAfterStep` | `(history) => void` | — | Called after each step (with full step history). |
| `onTokenUsage` | `(usage) => void` | — | Token usage data per step. |
| `onAskUser` | `(question) => Promise<string>` | — | Custom handler for `ask_user` — agent blocks until resolved. |
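A sketch of a custom `onAskUser` handler backed by a native alert — the signature comes from the table above; the two-button prompt is illustrative:

```tsx
import { Alert } from 'react-native';

<AIAgent
  apiKey="YOUR_KEY"
  navRef={navRef}
  // The agent blocks until the returned Promise resolves with the user's answer.
  onAskUser={(question) =>
    new Promise<string>((resolve) => {
      Alert.alert('AI needs your input', question, [
        { text: 'No', onPress: () => resolve('No') },
        { text: 'Yes', onPress: () => resolve('Yes') },
      ]);
    })
  }
>
  <App />
</AIAgent>
```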
| Prop | Type | Default | Description |
|---|---|---|---|
| `accentColor` | `string` | — | Quick accent color for FAB, send button, active states. |
| `theme` | `ChatBarTheme` | — | Full chat bar theme override. |
| `debug` | `boolean` | `false` | Enable SDK debug logging. |
// Quick — one color:
<AIAgent accentColor="#6C5CE7" />
// Full theme:
<AIAgent
accentColor="#6C5CE7"
theme={{
backgroundColor: 'rgba(44, 30, 104, 0.95)',
inputBackgroundColor: 'rgba(255, 255, 255, 0.12)',
textColor: '#ffffff',
successColor: 'rgba(40, 167, 69, 0.3)',
errorColor: 'rgba(220, 53, 69, 0.3)',
}}
/>

Register isolated, headless logic for the AI to call (e.g., API requests, checkouts).
The handler is automatically kept fresh internally, so you never hit a stale closure. The optional deps array re-registers the action so the AI sees an updated description.
import { useAction } from '@mobileai/react-native'; // or 'react-native-agentic-ai'
function CartScreen() {
const { cart, clearCart, getTotal } = useCart();
// Passing [cart.length] ensures the AI receives the live item count in its context!
useAction(
'checkout',
`Place the order and checkout (${cart.length} items for $${getTotal()})`,
{},
async () => {
if (cart.length === 0) return { success: false, message: 'Cart is empty' };
// Human-in-the-loop: AI pauses until user taps Confirm
return new Promise((resolve) => {
Alert.alert('Confirm Order', `Place order for $${getTotal()}?`, [
{ text: 'Cancel', onPress: () => resolve({ success: false, message: 'User denied.' }) },
{ text: 'Confirm', onPress: () => { clearCart(); resolve({ success: true, message: `Order placed!` }); } },
]);
});
},
[cart.length, getTotal]
);
}

import { useAI } from '@mobileai/react-native'; // or 'react-native-agentic-ai'
function CustomChat() {
const { send, isLoading, status, messages } = useAI();
return (
<View style={{ flex: 1 }}>
<FlatList data={messages} renderItem={({ item }) => <Text>{item.content}</Text>} />
{isLoading && <Text>{status}</Text>}
<TextInput onSubmitEditing={(e) => send(e.nativeEvent.text)} placeholder="Ask the AI..." />
</View>
);
}

Chat history persists across navigation. Override settings per-screen:
const { send } = useAI({
enableUIControl: false,
onResult: (result) => router.push('/(tabs)/chat'),
});

Just add `analyticsKey` — every button tap, screen navigation, and session is tracked automatically. Zero code changes to your app components.
<AIAgent
apiKey="YOUR_KEY"
analyticsKey="mobileai_pub_abc123" // ← enables full auto-capture
navRef={navRef}
>
<App />
</AIAgent>

What's captured automatically:

| Event | Data | How |
|---|---|---|
| `user_interaction` | Button label, screen, coordinates, `actor: 'user'` | Root touch interceptor |
| `screen_view` | Screen name, previous screen | Navigation ref listener |
| `session_start` | Device, OS, SDK version | On mount |
| `session_end` | Duration, event count | On background |
| `agent_request` | User query | On AI task start |
| `agent_step` | Tool name, args, result | On each AI action |
| `agent_complete` | Success, steps, cost | On AI task end |
When the AI agent taps a button on behalf of the user, those taps are not counted as user_interaction events — they're already captured as agent_step events with full context.
This means your funnels and retention charts always show real human behaviour, while the AI's actions are separately attributed for ROI analysis. No other analytics SDK can offer this because they don't own the app root.
| Event | Who | Dashboard use |
|---|---|---|
| `user_interaction { actor: 'user' }` | Human only | Funnels, retention, journeys |
| `agent_step { tool: 'tap' }` | AI only | Agent ROI, resolution rate |
Custom business events — track what matters to you:
import { MobileAI } from '@mobileai/react-native';
MobileAI.track('purchase_complete', { order_id: 'ord_1', total: 29.99 });
MobileAI.identify('user_123', { plan: 'pro' });

> Enterprise: use `analyticsProxyUrl` to route events through your own backend — zero keys in the app bundle.
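A sketch of that enterprise wiring — prop names from the Analytics props table; the URLs and token are placeholders:

```tsx
<AIAgent
  proxyUrl="https://api.yourdomain.com/ai-proxy"        // LLM traffic through your backend
  analyticsProxyUrl="https://api.yourdomain.com/events" // analytics events too
  analyticsProxyHeaders={{ Authorization: `Bearer ${userToken}` }}
  navRef={navRef}
>
  <App />
</AIAgent>
```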
<AIAgent
proxyUrl="https://myapp.vercel.app/api/gemini"
proxyHeaders={{ Authorization: `Bearer ${userToken}` }}
voiceProxyUrl="https://voice-server.render.com" // only if text proxy is serverless
navRef={navRef}
>
  <App />
</AIAgent>

> `voiceProxyUrl` falls back to `proxyUrl` if not set. Only needed when your text API is on a serverless platform that can't hold WebSocket connections.
Next.js Text Proxy Example
import { NextResponse } from 'next/server';
export async function POST(req: Request) {
const body = await req.json();
const response = await fetch('https://generativelanguage.googleapis.com/...', {
method: 'POST',
headers: { 'Content-Type': 'application/json', 'x-goog-api-key': process.env.GEMINI_API_KEY! },
body: JSON.stringify(body),
});
return NextResponse.json(await response.json());
}

Express WebSocket Proxy (Voice Mode)
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');
const app = express();
const geminiProxy = createProxyMiddleware({
target: 'https://generativelanguage.googleapis.com',
changeOrigin: true,
ws: true,
pathRewrite: (path) => `${path}${path.includes('?') ? '&' : '?'}key=${process.env.GEMINI_API_KEY}`,
});
app.use('/v1beta/models', geminiProxy);
const server = app.listen(3000);
server.on('upgrade', geminiProxy.upgrade);

// AI will never see or interact with this element:
<Pressable aiIgnore={true}><Text>Admin Panel</Text></Pressable>
// In copilot mode, AI must confirm before touching this element:
<Pressable aiConfirm={true} onPress={deleteAccount}>
<Text>Delete Account</Text>
</Pressable>

Mask sensitive screen content before the LLM sees it:

<AIAgent transformScreenContent={(c) => c.replace(/\b\d{13,16}\b/g, '****-****-****-****')} />

Custom system prompt and per-screen instructions:

<AIAgent instructions={{
system: 'You are a food delivery assistant.',
getScreenInstructions: (screen) => screen === 'Cart' ? 'Confirm total before checkout.' : undefined,
}} />

| Hook | When |
|---|---|
| `onBeforeTask` | Before task execution starts |
| `onBeforeStep` | Before each agent step |
| `onAfterStep` | After each step (with full history) |
| `onAfterTask` | After task completes (success or failure) |
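A sketch wiring all four hooks for simple step logging — hook names from the table above; the log lines are illustrative:

```tsx
<AIAgent
  apiKey="YOUR_KEY"
  navRef={navRef}
  onBeforeTask={() => console.log('[ai] task started')}
  onBeforeStep={(stepCount) => console.log(`[ai] step ${stepCount}`)}
  onAfterStep={(history) => console.log(`[ai] ${history.length} steps so far`)}
  onAfterTask={(result) => console.log('[ai] done:', result)}
>
  <App />
</AIAgent>
```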
AIZone marks specific sections of your UI so the AI can operate within them with special capabilities: simplify cluttered areas, inject contextual cards, or highlight elements.
import { AIZone } from '@mobileai/react-native';
// Allow AI to simplify this zone if it's too cluttered
<AIZone id="product-details" allowSimplify>
<View>
<Text aiPriority="high">Price: $29.99</Text>
<Text aiPriority="low">SKU: ABC-123</Text>
<Text aiPriority="low">Weight: 500g</Text>
</View>
</AIZone>
// Allow AI to inject contextual cards (e.g. "Need help?" dialogs)
<AIZone id="checkout-summary" allowInjectCard allowHighlight>
<CheckoutSummary />
</AIZone>

Tag any element with `aiPriority` to control AI visibility:
| Value | Effect |
|---|---|
| `"high"` | Always rendered — surfaced first in AI context |
| `"low"` | Hidden when AI calls `simplify_zone()` on the enclosing AIZone |
| Prop | Type | Description |
|---|---|---|
| `id` | `string` | Unique zone identifier the AI uses to target operations |
| `allowSimplify` | `boolean` | AI can call `simplify_zone(id)` to hide `aiPriority="low"` elements |
| `allowHighlight` | `boolean` | AI can visually highlight elements inside this zone |
| `allowInjectHint` | `boolean` | AI can inject a contextual text hint into this zone |
| `allowInjectCard` | `boolean` | AI can inject a pre-built card template into this zone |
| Tool | What it does |
|---|---|
| `tap(index)` | Tap any interactive element — buttons, switches, checkboxes, custom components |
| `long_press(index)` | Long-press an element to trigger context menus |
| `type(index, text)` | Type into a text input |
| `scroll(direction, amount?)` | Scroll content — auto-detects edge, rejects PagerView |
| `slider(index, value)` | Drag a slider to a specific value |
| `picker(index, value)` | Select a value from a dropdown/picker |
| `date_picker(index, date)` | Set a date on a date picker |
| `navigate(screen)` | Navigate to any screen |
| `wait(seconds)` | Wait for loading states before acting |
| `capture_screenshot(reason)` | Capture the screen as an image (requires react-native-view-shot) |
| `done(text)` | Finish the task with a response |
| `ask_user(question)` | Ask the user for clarification |
| `query_knowledge(question)` | Search the knowledge base |
- React Native 0.72+
- Expo SDK 49+ (or bare React Native)
- Gemini API key — Get one free, or
- OpenAI API key — Get one
> Gemini is the default provider and powers all modes (text + voice). OpenAI is available as a text-mode alternative via `provider="openai"`. Voice mode uses `gemini-2.5-flash-native-audio-preview` (Gemini only).
MIT © Mohamed Salah
👋 Let's connect — LinkedIn
