EdgeVis is a mobile AI navigation assistant that runs entirely on-device, using Vision-Language Models (VLMs) integrated through the RunAnywhere SDK in a React Native application. Unlike cloud-based assistants, EdgeVis performs perception, reasoning, and navigation locally on the device, making it low-latency, privacy-preserving, and offline-capable.

EdgeVis combines real-time visual understanding with natural-language reasoning to help users navigate their environment without relying on cloud infrastructure, delivering consistent AI-assisted navigation even in offline or low-connectivity conditions.
- **Fully On-Device AI**
  - Vision and language inference run locally
  - No cloud calls or remote processing
- **Vision-Language Navigation**
  - Interprets camera input and user queries
  - Provides contextual navigation guidance
- **Privacy-First Architecture**
  - Images and sensor data never leave the device
  - Suitable for sensitive or regulated environments
- **Low Latency**
  - Real-time responses enabled by edge inference
  - No network round trips
- **Cross-Platform Mobile App**
  - Built with React Native
  - Supports iOS and Android
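The perception-to-guidance loop above can be sketched in TypeScript. This is an illustrative sketch only: the README does not show the RunAnywhere SDK's actual API, so the `VisionLanguageModel` interface and the method names below are assumptions standing in for whatever the SDK exposes.

```typescript
// Minimal interface a local VLM backend would need to satisfy.
// ASSUMPTION: this is not the real RunAnywhere SDK surface, just a stand-in.
interface VisionLanguageModel {
  // Runs inference entirely on-device: a camera frame plus a prompt in,
  // a text answer out. No network access is implied anywhere.
  generate(frame: Uint8Array, prompt: string): Promise<string>;
}

// Builds the navigation prompt sent to the local model.
function buildNavigationPrompt(userQuery: string): string {
  return [
    "You are an on-device navigation assistant.",
    "Describe obstacles and a safe path based on the camera frame.",
    `User query: ${userQuery}`,
  ].join("\n");
}

// Orchestrates one perception -> reasoning -> guidance step.
async function navigate(
  model: VisionLanguageModel,
  frame: Uint8Array,
  userQuery: string,
): Promise<string> {
  const prompt = buildNavigationPrompt(userQuery);
  return model.generate(frame, prompt);
}

// A stub model standing in for the RunAnywhere-backed VLM, so the flow
// can be exercised without the SDK or a camera.
const stubModel: VisionLanguageModel = {
  async generate(_frame, prompt) {
    return `Guidance for: ${prompt.split("\n").pop()}`;
  },
};
```

In the real app, `stubModel` would be replaced by a model handle obtained from the RunAnywhere SDK, and `frame` would come from the native camera integration; the orchestration step itself stays the same.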
- **Frontend**
  - React Native
  - TypeScript
- **AI / ML**
  - On-device Vision-Language Models (VLMs)
  - RunAnywhere SDK for local inference and model orchestration
- **Platform**
  - iOS and Android
  - Native camera and sensor integrations
**Use Cases**

- Indoor and outdoor navigation
- Assistive navigation and accessibility
- Privacy-sensitive environments
- Offline or low-connectivity scenarios
- Edge AI research and experimentation
EdgeVis is designed with privacy as a core principle:
- No cloud inference
- No image or sensor data storage
- No external APIs required for core functionality
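One way to enforce the "no image or sensor data storage" rule is to keep camera frames in memory only and scrub them as soon as inference finishes. The helper below is a minimal sketch of that pattern, not code from the EdgeVis codebase; the names are hypothetical.

```typescript
// A stand-in for any on-device inference call that consumes a raw frame.
type LocalInference = (frame: Uint8Array) => string;

// Runs local inference on a frame, then zeroes the buffer in place before
// returning, so the pixel data cannot be persisted or leaked afterwards.
function withEphemeralFrame(frame: Uint8Array, infer: LocalInference): string {
  try {
    return infer(frame);
  } finally {
    frame.fill(0); // scrub pixel data in place; nothing is written to disk
  }
}

// Example with a stand-in inference function: after the call returns,
// `frame` contains only zeros.
const frame = Uint8Array.from([10, 20, 30]);
const answer = withEphemeralFrame(frame, (f) => `saw ${f.length} bytes`);
```

The `try`/`finally` ensures the buffer is scrubbed even if inference throws, which is the property the privacy guarantee depends on.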
**Prerequisites**

- Node.js
- React Native development environment
- RunAnywhere SDK access

Install dependencies with `npm install`.