AIMindMesh is a privacy-first, local-first agentic ecosystem designed to orchestrate a mesh of heterogeneous nodes—Android devices, high-performance PC clients, and specialized server orchestrators—into a single, unified cognitive fabric. It is the first assistant capable of autonomous self-evolution, repairing and improving its own source code through a multi-path agentic loop.
AIMindMesh breaks the traditional "Wrapper" paradigm by implementing a strict Native/Frontend Split:
- Computational Bedrock (Native C++/Kotlin/Rust): Executes heavy-lift operations—LLM inference, NPU delegation, high-fidelity audio processing, and vector similarity—directly on bare metal.
- State Orchestration (React/TypeScript): Manages complex business logic, UI state, and real-time synchronization via a custom Robust Proxy Pattern.
- Distributed Mesh Topology: Nodes (Mobile, PC, VPS) communicate via a secured WireHole (WireGuard) tunnel, sharing inference power, knowledge deltas, and task execution.
The mobile node is not just a client; it is a high-performance inference powerhouse optimized for modern Snapdragon silicon that can also run standalone.
- Adreno OpenCL & Vulkan: Optimized GGUF inference using the Qualcomm-contributed OpenCL backend for Adreno GPUs, significantly outperforming generic Vulkan implementations.
- Hexagon NPU Delegation: Direct routing of LiteRT (`.litertlm`) models to the Qualcomm Hexagon HTP via QNN delegates, achieving studio-level tokens-per-second with minimal thermal impact.
- Speculative Decoding (MTP): Implements Multi-Token Prediction (LiteRT 0.11.0) to predict and verify multiple tokens per forward pass.
- Persistent KV Cache: Disk-based serialization of conversation states to `cache/litert_cache/`, allowing instant resumption of context after app restarts or backgrounding.
- VRAM Guardian: Dynamic RAM-pressure scaling that monitors `onTrimMemory` to prevent OOM kills by proactively compressing history via local summarization.
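The VRAM Guardian idea above can be sketched as a small policy: react to memory-pressure callbacks by progressively collapsing older conversation turns into a summary. This is an illustrative sketch, not the project's implementation; the trim-level constants mirror Android's `ComponentCallbacks2.TRIM_MEMORY_RUNNING_*` values, and `summarize` stands in for the local summarization model.

```python
# Hypothetical sketch of "VRAM Guardian": respond to memory-pressure
# signals by compacting conversation history. Constants mirror Android's
# TRIM_MEMORY_RUNNING_MODERATE/LOW/CRITICAL values (5/10/15).
TRIM_MODERATE = 5
TRIM_LOW = 10
TRIM_CRITICAL = 15

def summarize(turns):
    """Stub summarizer: collapse old turns into one synthetic summary turn."""
    return [f"[summary of {len(turns)} turns]"]

def on_trim_memory(level, history, keep_recent=4):
    """Compress history proportionally to memory pressure."""
    if level >= TRIM_CRITICAL:
        keep_recent = 1                 # keep only the latest turn verbatim
    elif level >= TRIM_LOW:
        keep_recent = 2
    elif level < TRIM_MODERATE:
        return history                  # no pressure, no action
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return (summarize(old) if old else []) + recent

history = [f"turn-{i}" for i in range(8)]
compact = on_trim_memory(TRIM_CRITICAL, history)
# → ["[summary of 7 turns]", "turn-7"]
```

Summarizing eagerly under pressure trades a little fidelity for a bounded KV-cache footprint, which is what prevents the OOM kill in the first place.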
- 3-Pass Diarization:
  1. Profiling: Global clustering of ECAPA-TDNN voice embeddings.
  2. Classification: Segment assignment based on speaker-centroid proximity.
  3. HMM Smoothing: Viterbi decoding to eliminate spurious speaker oscillations.
- Voxtral Realtime: 4B Multimodal STT (PCM → mel → CLIP → llama) for near-zero latency voice interaction.
- Durable Recording: Direct-to-disk PCM encoding ensures reliability for long sessions (>3h) without memory exhaustion.
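The HMM-smoothing pass can be illustrated with a minimal Viterbi decoder over per-frame speaker scores: a fixed switch penalty makes one-frame speaker flips more expensive than staying on the current speaker. This is a sketch of the technique, with illustrative scores and penalty, not the project's actual code.

```python
def viterbi_smooth(frame_scores, switch_penalty=2.0):
    """Viterbi decoding over per-frame speaker log-scores.

    frame_scores: list of dicts {speaker: log_score}, one per frame.
    switch_penalty: cost charged whenever the label changes, which
    suppresses spurious single-frame speaker oscillations.
    """
    speakers = list(frame_scores[0])
    # best[s] = (cumulative score, path) over paths ending in speaker s
    best = {s: (frame_scores[0][s], [s]) for s in speakers}
    for scores in frame_scores[1:]:
        nxt = {}
        for s in speakers:
            prev_score, path = max(
                (sc - (switch_penalty if p != s else 0.0), pth)
                for p, (sc, pth) in best.items()
            )
            nxt[s] = (prev_score + scores[s], path + [s])
        best = nxt
    return max(best.values())[1]

# One frame where "B" narrowly wins flips the raw argmax; Viterbi keeps "A".
frames = [{"A": 0.0, "B": -3.0}, {"A": 0.0, "B": -3.0},
          {"A": -1.0, "B": 0.0},
          {"A": 0.0, "B": -3.0}, {"A": 0.0, "B": -3.0}]
raw = [max(f, key=f.get) for f in frames]   # → ['A', 'A', 'B', 'A', 'A']
smoothed = viterbi_smooth(frames)           # → ['A', 'A', 'A', 'A', 'A']
```

The switch penalty plays the role of the HMM's off-diagonal transition probability: switching has to "earn back" the penalty over several frames to be worthwhile.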
- Driving-Optimized UI: Custom `GridTemplate` dashboard for safe, hands-free interaction.
- Seamless Sync: Real-time access to your Agenda, Kanban, and Assistant Call directly from the car's head unit.
- Privacy-First Call Mode: Routes audio through the earpiece or car speakers with full VAD-based turn-taking.
The server acts as the "Central Nervous System", managing long-term memory and the ecosystem's autonomous growth.
- Tiered Task Prioritization: Dynamically routes tasks (Embeddings → Lightweight → Complex → Evolution) across the mesh based on node hardware capability, proximity, and quota. Routes are customizable per task type (e.g., embed, lightweight, complex, evolution) from the PC client.
- Neural Wiki & Knowledge Graph (Neo4j): Automatically synthesizes raw meeting data and memories into a structured Neo4j knowledge graph, creating a persistent, searchable "Neural Wiki" of your entire digital life.
- FCM Proactive Push: Real-time delivery of server-generated "Neural Insights" directly to mobile notification trays via Firebase.
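The tiered prioritization above can be sketched as a small routing function: filter nodes that are capable of the task's tier and still have quota, then prefer the least powerful sufficient node so heavyweight nodes stay free for complex and evolution work. The tier names follow the README; the node fields and scoring are assumptions for illustration.

```python
# Illustrative mesh routing sketch (field names and scoring are assumed).
TIER_RANK = {"embed": 0, "lightweight": 1, "complex": 2, "evolution": 3}

def route(task_tier, nodes):
    """Pick the least powerful capable node with quota, then lowest latency."""
    capable = [
        n for n in nodes
        if TIER_RANK[n["max_tier"]] >= TIER_RANK[task_tier] and n["quota_left"] > 0
    ]
    if not capable:
        raise RuntimeError(f"no node available for tier {task_tier!r}")
    return min(capable, key=lambda n: (TIER_RANK[n["max_tier"]], n["latency_ms"]))

mesh = [
    {"name": "pixel", "max_tier": "lightweight", "latency_ms": 5,  "quota_left": 10},
    {"name": "pc",    "max_tier": "complex",     "latency_ms": 20, "quota_left": 4},
    {"name": "vps",   "max_tier": "evolution",   "latency_ms": 60, "quota_left": 2},
]
# route("embed", mesh)     → the "pixel" node
# route("evolution", mesh) → the "vps" node
```

Preferring the weakest sufficient node is one way to keep embedding traffic off the PC and VPS; latency only breaks ties between equally capable nodes.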
AIMindMesh actively improves its own source tree through three distinct generation paths:
- Server-Native Evolution: Orchestrates multi-file contexts to ensure architecturally sound patches, delegating to Gemini or OpenRouter for complex refactoring.
- Agentic OpenClaw Loop: High-autonomy worker for tasks requiring external research and sandbox validation in Kasm Workspaces.
- On-Device Termux Scripting: Local models generate and execute bash/python scripts via the native Termux Bridge for system-level Android optimizations.
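All three paths share the same generate → validate → retry shape, which can be sketched as a small loop. `generate_patch` and `validate` are toy stubs here; in the real system they would stand for a remote model call and a sandboxed (e.g., Kasm) test run, respectively.

```python
# Hedged sketch of the agentic evolution loop shared by the three paths.
def evolve(task, generate_patch, validate, max_attempts=3):
    """Try up to max_attempts patches; return the first that validates."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        patch = generate_patch(task, feedback)   # feedback guides the retry
        ok, feedback = validate(patch)
        if ok:
            return {"patch": patch, "attempts": attempt}
    return None                                  # give up; leave tree untouched

# Toy stubs: the generator only fixes the bug once it sees the feedback.
def gen(task, feedback):
    return "fix: add null check" if feedback else f"fix: {task}"

def val(patch):
    return ("null check" in patch, "missing null check")

result = evolve("NPE in scheduler", gen, val)
# → {"patch": "fix: add null check", "attempts": 2}
```

Feeding validator output back into the next generation attempt is what makes the loop agentic rather than a one-shot patch request.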
A lightweight Tauri-based client that bridges high-performance PC hardware into the mesh.
- Ollama Bridge: Lends local GPU power to mobile nodes for complex reasoning.
- Telemetry Dashboard: Real-time monitoring of CPU/RAM/Thermal states across the entire mesh.
- WireHole VPN: WireGuard tunnel + PiHole (DNS privacy) + Unbound (Recursive DNS).
- Gitea: Self-hosted Git service for autonomous VCS and evolution patches.
- SearXNG: Private metasearch engine ensuring untracked web research.
- Kasm Workspaces: Secure, containerized environments for agentic execution and shadow testing.
- Neo4j Graph Database: The containerized knowledge engine powering the "Neural Wiki."
- Foundation: Run `./deploy_infrastructure.sh` on your VPS. This sets up the networking, Gitea, SearXNG, and Kasm, while automatically disabling any native Neo4j services to prevent port conflicts.
- Brain: Deploy `aimindmesh-server` using Docker Compose. Use `./deploy_to_cloud.sh --full` for a complete automated setup of the Server, Neo4j, and OpenClaw gateway.
- Nodes: Configure Mobile and PC nodes to point to your VPS WireGuard internal IP.
The OpenClaw agent requires a configuration folder containing its skills and auth tokens.
- Copy the template folder: `cp -r aimindmesh-server/openclaw-config.template aimindmesh-server/openclaw-config`
- Open `aimindmesh-server/openclaw-config/openclaw.json` and insert your Telegram Bot Token and define a Gateway Auth Token.
- The folder is automatically ignored by Git to protect your tokens.
The repository includes `.example.sh` templates for rapid deployment (e.g., `deploy_to_cloud.sh`, `publish_android.sh`). Copy each template to its `.sh` counterpart, configure it, and run.
- Use `./deploy_to_cloud.sh --full` to deploy and configure the entire server stack (Server, Neo4j, OpenClaw).
- These files are git-ignored to protect your private credentials.
Licensed under the PolyForm Noncommercial License 1.0.0.
- Free for personal, educational, and research use.
- Commercial use requires a separate, paid license agreement.
Architect & Designer: Andre (@aimindmesh)
Development Support: Co-authored and implemented in collaboration with Gemini & Claude.
Contact: aimindmesh@proton.me
Philosophy: Privacy is a right, Autonomy is the goal. Designed by Human intelligence, evolved with Artificial Intelligence.