Abserny Mobile is an Android-first assistive application designed for visually impaired users. It is built as an Arabic-speaking, offline-capable perception pipeline that listens for Arabic trigger words, analyzes surroundings through the camera, and produces concise Arabic spoken feedback.
This repository currently provides:
- An Android application scaffold (`app/`) with a modular pipeline architecture.
- Core service wiring for wake-word, camera, perception, context analysis, Arabic NLG, and TTS modules.
- A production-oriented architecture blueprint in `docs/`.
The current implementation intentionally includes placeholder internals for wake-word and vision inference while preserving stable module boundaries for incremental hardening.
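The stable-boundary, placeholder-internals approach described above can be sketched roughly as follows. All names here (`WakeWordDetector`, `SceneDescriber`, the stub classes) are illustrative assumptions, not the app's actual interfaces:

```kotlin
// Hypothetical module boundaries for the perception pipeline.
// Interface and class names are illustrative, not the app's real API.
interface WakeWordDetector {
    // Returns true when an Arabic trigger word is heard in the audio frame.
    fun detect(audioFrame: ShortArray): Boolean
}

interface SceneDescriber {
    // Produces a concise Arabic description of the captured camera frame.
    fun describe(frameBytes: ByteArray): String
}

// Placeholder internals, mirroring the repository's current state:
// stable interfaces with trivial bodies that can be hardened incrementally.
class StubWakeWordDetector : WakeWordDetector {
    override fun detect(audioFrame: ShortArray): Boolean = false
}

class StubSceneDescriber : SceneDescriber {
    override fun describe(frameBytes: ByteArray): String =
        "لا يوجد وصف بعد" // "no description yet"
}
```

Because callers depend only on the interfaces, a real TensorFlow Lite-backed implementation can later replace each stub without touching the rest of the pipeline.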
- Project Overview
- Installation and Local Setup (Cross-Environment, Detailed)
- Android Architecture Blueprint
- `app/`: Android application module (Kotlin, API 26+).
- `docs/`: Product and technical documentation.
- `build.gradle.kts`, `settings.gradle.kts`, `gradle.properties`: Gradle build configuration.
- Android SDK (API 26+)
- Kotlin
- Coroutines and Flow
- CameraX
- TensorFlow Lite (integration-ready)
- Android TextToSpeech
- Hilt dependency injection
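The stack above typically maps to a Gradle dependency block along these lines. The exact coordinates and versions are assumptions and should be checked against the project's actual `app/build.gradle.kts` and version catalog:

```kotlin
// Illustrative dependencies block (app/build.gradle.kts); versions are assumptions.
dependencies {
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-android:1.7.3") // Coroutines and Flow
    implementation("androidx.camera:camera-camera2:1.3.4")                   // CameraX capture
    implementation("androidx.camera:camera-lifecycle:1.3.4")                 // CameraX lifecycle binding
    implementation("org.tensorflow:tensorflow-lite:2.14.0")                  // On-device inference
    implementation("com.google.dagger:hilt-android:2.48")                    // Hilt dependency injection
    // Android TextToSpeech ships with the platform SDK; no extra dependency needed.
}
```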
- Voice-first interaction
- Offline-by-default processing
- Safety-focused messaging
- Minimal cognitive load in audio feedback
- Module isolation and restartability
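The module-isolation and restartability principle can be sketched with a minimal supervisor wrapper. This is a hypothetical illustration under stated assumptions, not the app's actual supervision logic:

```kotlin
// Illustrative restart wrapper: retries a failing module start a bounded
// number of times, so one crashing module does not take down the pipeline.
class RestartableModule(val name: String, private val start: () -> Unit) {
    var restarts = 0
        private set

    fun run(maxRestarts: Int = 3) {
        while (restarts <= maxRestarts) {
            try {
                start()      // attempt to bring the module up
                return       // started cleanly; stop retrying
            } catch (e: Exception) {
                restarts++   // record the failure and retry
            }
        }
    }
}
```

In the real app, supervision would more likely be expressed with coroutine scopes and `SupervisorJob`, but the bounded-restart idea is the same.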
To set up the project locally, follow the Installation and Local Setup guide in `docs/`.
If you are troubleshooting repository conflicts after pulling updates, use the dedicated conflict-check section in the installation guide.
For architecture and system behavior details, see the Android Architecture Blueprint in `docs/`.
This project is licensed under the terms described in the LICENSE file.