What to build
A React Native (Expo) mobile app that connects to Deepgram's Voice Agent API for real-time conversational voice interactions, with a polished mobile UI including animated voice visualization and conversation history.
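The core of the app is the WebSocket session with the Voice Agent API: open the socket, send a Settings message describing audio formats and the listen/think/speak pipeline, then stream mic audio up and play agent audio back. A minimal sketch of building that Settings payload is below; the endpoint URL, field names, and model identifiers are assumptions based on Deepgram's published schema and should be checked against the current Voice Agent API reference.

```typescript
// Hypothetical sketch of the Voice Agent handshake payload.
// Endpoint and schema are assumptions — verify against Deepgram's docs.
const AGENT_URL = "wss://agent.deepgram.com/v1/agent/converse"; // assumed endpoint

interface AgentSettings {
  type: "Settings";
  audio: {
    input: { encoding: string; sample_rate: number };   // what the phone mic sends
    output: { encoding: string; sample_rate: number };  // what the agent sends back
  };
  agent: {
    listen: { provider: { type: string; model: string } }; // STT stage
    think: { provider: { type: string; model: string } };  // LLM stage
    speak: { provider: { type: string; model: string } };  // TTS stage
  };
}

function buildAgentSettings(inputSampleRate: number = 16000): AgentSettings {
  return {
    type: "Settings",
    audio: {
      input: { encoding: "linear16", sample_rate: inputSampleRate },
      output: { encoding: "linear16", sample_rate: 24000 },
    },
    agent: {
      // Model names below are illustrative placeholders.
      listen: { provider: { type: "deepgram", model: "nova-2" } },
      think: { provider: { type: "open_ai", model: "gpt-4o-mini" } },
      speak: { provider: { type: "deepgram", model: "aura-asteria-en" } },
    },
  };
}

// Usage sketch: send the settings as the first frame after the socket opens.
// const ws = new WebSocket(AGENT_URL, undefined, { headers: { Authorization: `Token ${key}` } });
// ws.onopen = () => ws.send(JSON.stringify(buildAgentSettings()));
```

Keeping the settings in a typed builder like this makes the handshake easy to unit-test and to vary per screen (for example, different think models per assistant persona).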
Why this matters
Mobile voice agents are among the fastest-growing segments in voice AI, yet Deepgram has no mobile examples that go beyond basic transcription. Developers building voice-enabled mobile apps — customer service bots, health assistants, language tutors — need a reference showing how to handle microphone permissions, audio streaming, and agent conversation state on mobile. The existing React Native example covers only basic STT; this covers the full voice agent experience.
Suggested scope
- Language: TypeScript
- Framework: Expo SDK 52+, React Native
- Deepgram APIs: Voice Agent API (WebSocket)
- Features: Microphone capture with expo-av, real-time audio streaming, agent response playback, conversation transcript display, animated voice indicator
- Complexity: Medium-high — requires mobile audio handling, WebSocket management, and UI polish
- Expo managed workflow (no native modules required)
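One recurring chore in the audio-streaming feature is sample-format conversion: if the recording layer surfaces Float32 samples, they must be converted to 16-bit little-endian PCM ("linear16") before being sent over the socket. A small sketch, assuming the capture layer hands you Float32Array chunks in the -1..1 range (how you obtain those chunks from expo-av is a separate concern):

```typescript
// Hypothetical helper: Float32 samples (-1..1) -> 16-bit little-endian PCM,
// the "linear16" encoding the Voice Agent API expects for audio input.
function floatTo16BitPCM(samples: Float32Array): Uint8Array {
  const out = new Uint8Array(samples.length * 2);
  const view = new DataView(out.buffer);
  for (let i = 0; i < samples.length; i++) {
    // Clamp to [-1, 1], then scale to the signed 16-bit range.
    const s = Math.max(-1, Math.min(1, samples[i]));
    view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true); // true = little-endian
  }
  return out;
}
```

Each converted chunk can then be sent as a binary WebSocket frame. Doing the conversion in a pure function keeps it trivially unit-testable, which matters on mobile where audio bugs are otherwise hard to reproduce.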
Acceptance criteria
- App runs in the Expo managed workflow (npx expo start)
Raised by the DX intelligence system.