A privacy-first, offline-capable AI assistant that runs entirely in your browser. No cloud dependencies, no data collection, complete user control.
- 100% On-Device Processing - All AI runs locally in your browser
- Zero Cloud Dependency - Works completely offline after initial model download
- Privacy Mode - Toggle to prevent any cloud logging
- Full Data Control - Export or delete your data anytime
- Zero Network Latency - No API calls, instant responses
- WebGPU Acceleration - Hardware-accelerated inference
- Optimized Models - Fast, efficient on-device models
- Sub-100ms Response - First token in under 100ms
- 💬 Chat - Conversational AI assistant
- 📷 Vision - Image understanding and description
- 🎙️ Voice - Speech-to-text and text-to-speech
- Local Storage - All data stored in your browser
- CSV Export - Download your data as CSV
- JSON Export - Export in JSON format
- Full Privacy - No cloud, no external servers
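The CSV export above can be sketched as a small pure function. This is an illustrative sketch only; the names (`ChatRecord`, `toCsv`) are hypothetical and the real `dataControl.ts` may differ:

```typescript
// Hypothetical shape of a stored chat message (illustrative, not the app's real type).
interface ChatRecord {
  timestamp: string;
  role: string;
  content: string;
}

// Quote a field per RFC 4180: wrap in quotes, double any embedded quotes.
function csvField(value: string): string {
  return `"${value.replace(/"/g, '""')}"`;
}

// Convert chat records to a CSV string, ready to hand to a Blob download.
export function toCsv(records: ChatRecord[]): string {
  const header = "timestamp,role,content";
  const rows = records.map((r) =>
    [r.timestamp, r.role, r.content].map(csvField).join(",")
  );
  return [header, ...rows].join("\n");
}
```

The resulting string can be wrapped in a `Blob` with type `text/csv` and offered via a temporary object URL, so no data ever leaves the browser.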
- Node.js 18+ and npm
- Modern browser with WebGPU support (Chrome 113+, Edge 113+)
# Clone the repository
git clone https://github.com/yourusername/runanywhere-ai.git
cd runanywhere-ai/web-starter-app
# Install dependencies
npm install
# Start development server
npm run dev
Visit http://localhost:5173 in your browser.
web-starter-app/
├── src/
│ ├── components/ # React components
│ │ ├── ChatTab.tsx
│ │ ├── VisionTab.tsx
│ │ ├── VoiceTab.tsx
│ │ └── SettingsModal.tsx
│ ├── hooks/ # Custom React hooks
│ ├── workers/ # Web Workers for AI processing
│ ├── styles/ # CSS styles
│ ├── App.tsx # Main app component
│ ├── runanywhere.ts # SDK initialization
│ ├── privacy.ts # Privacy controls
│ ├── dataControl.ts # Data management
│ └── supabase.ts # Optional cloud storage
├── public/
│ ├── sw.js # Service Worker for offline support
│ └── manifest.json # PWA manifest
All data is stored locally in your browser. No external configuration needed.
Privacy mode is enabled by default. All data stays on your device:
- No data sent to cloud
- All processing on-device
- Data stored locally only
Toggle via the 🔒 button in the header or Settings (⚙️).
- Powered by LiquidAI LFM2 350M model
- Streaming responses for real-time feedback
- Conversation history with stats
- Quick prompt suggestions
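Streaming responses are typically consumed token by token. A hedged sketch of that pattern, using a generic `AsyncIterable` (the actual RunAnywhere SDK API may expose streaming differently):

```typescript
// Consume a token stream: accumulate the full reply while notifying the UI
// of each token as it arrives. `onToken` would drive a React state update.
export async function collectStream(
  tokens: AsyncIterable<string>,
  onToken: (t: string) => void
): Promise<string> {
  let text = "";
  for await (const t of tokens) {
    text += t;   // build up the complete response
    onToken(t);  // incremental UI feedback per token
  }
  return text;
}
```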
- Camera integration for live analysis
- Single-shot and live mode
- Customizable prompts
- Fast inference with optimized models
- Voice Activity Detection (VAD)
- Speech-to-Text (Whisper Tiny)
- Text-to-Speech (Piper TTS)
- Complete voice pipeline
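The voice round-trip (speech in, speech out) chains the three stages above. This sketch injects the engines as plain async functions; the interface names are placeholders, not the SDK's real API:

```typescript
// Hypothetical engine interface: STT (e.g. Whisper Tiny), on-device LLM,
// and TTS (e.g. Piper). Real modules are wired up in the app's workers.
interface VoiceEngines {
  stt: (audio: Float32Array) => Promise<string>;
  chat: (prompt: string) => Promise<string>;
  tts: (text: string) => Promise<Float32Array>;
}

// One voice turn: transcribe, generate, synthesize.
export async function voiceTurn(audio: Float32Array, e: VoiceEngines) {
  const prompt = await e.stt(audio);   // speech-to-text
  const reply = await e.chat(prompt);  // LLM response
  const speech = await e.tts(reply);   // text-to-speech for playback
  return { prompt, reply, speech };
}
```

In the app, VAD decides when an utterance has ended and hands the captured audio to a function like this.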
Install as a standalone app:
- Visit the app in Chrome/Edge
- Click the install icon in the address bar
- Use like a native app, works offline!
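Installability comes from `public/manifest.json`. A minimal manifest might look like this (names, colors, and icon paths are placeholders, not the app's actual values):

```json
{
  "name": "RunAnywhere AI",
  "short_name": "RunAnywhere",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#000000",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Together with the Service Worker in `public/sw.js`, this is what lets Chrome/Edge offer the install prompt and run the app offline.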
# Development server
npm run dev
# Build for production
npm run build
# Preview production build
npm run preview
# Type checking
npm run type-check

| Browser | Version | WebGPU | Status |
|---|---|---|---|
| Chrome | 113+ | ✅ | ✅ Full Support |
| Edge | 113+ | ✅ | ✅ Full Support |
| Firefox | 121+ | 🚧 | 🚧 Experimental |
| Safari | 18+ | 🚧 | 🚧 Experimental |
MIT License - see LICENSE file for details.
Contributions welcome! Please read our contributing guidelines first.
- RunAnywhere SDK - On-device AI framework
- LiquidAI - LFM2 models
- Supabase - Optional cloud storage
- 📧 Email: support@example.com
- 💬 Discord: Join our community
- 🐛 Issues: GitHub Issues
Built with ❤️ for privacy and performance