
EdgeMind — Client-Side AI Runtime

🤖 Local AI that runs in your browser

🌐 Live Demo · 📖 Documentation · 🎮 Playground · 💻 GitHub


✨ Features

| Feature | Description |
| --- | --- |
| 🔒 Complete Privacy | Data never leaves your browser |
| 📴 Offline Capable | Works without internet after first load |
| 💰 Zero Costs | No API calls, no per-token billing |
| ⚡ Fast | 15-30 tokens/second |
| 🌐 ~97MB Total | Fits on any device |

🤖 The Model

Falcon-H1-Tiny-90M-Instruct-ONNX

| Property | Value |
| --- | --- |
| Parameters | 90 million |
| Size | ~97MB |
| Context | 2,048 tokens |
| Format | ONNX (WebAssembly) |
| Source | HuggingFace |
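
With a 2,048-token context window, prompt plus reply must fit the budget. A rough pre-flight check might look like this (the 4-characters-per-token ratio is a generic heuristic, not a property of this model's tokenizer):

```javascript
// Rough context-budget check before calling ai.chat().
// CHARS_PER_TOKEN = 4 is a common rule of thumb; the real
// tokenizer may count differently.
const CONTEXT_TOKENS = 2048;
const CHARS_PER_TOKEN = 4;

function fitsContext(messages, reserveForReply = 256) {
  const chars = messages.reduce((n, m) => n + m.content.length, 0);
  const estimatedTokens = Math.ceil(chars / CHARS_PER_TOKEN);
  return estimatedTokens + reserveForReply <= CONTEXT_TOKENS;
}
```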

🚀 Quick Start

Option 1: CDN (One line)

```html
<script type="module" src="https://cdn.jsdelivr.net/npm/@edgemind/js@latest/dist/index.mjs"></script>

<script type="module">
  import { EdgeMind } from 'https://cdn.jsdelivr.net/npm/@edgemind/js@latest/dist/index.mjs';

  const ai = new EdgeMind();
  await ai.load();

  const result = await ai.chat({
    messages: [{ role: 'user', content: 'Hello!' }]
  });

  console.log(result.content);
</script>
```

Option 2: npm

```bash
npm install @edgemind/js
```

```js
import { EdgeMind } from '@edgemind/js';

const ai = new EdgeMind();
await ai.load();

const result = await ai.chat({
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(result.content);
```

Option 3: Streaming

```js
// Node example; in the browser, append each token to the DOM
// instead of writing to stdout.
for await (const token of ai.chatStream({
  messages: [{ role: 'user', content: 'Tell me a story' }]
})) {
  process.stdout.write(token);
}
```
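
Because `chatStream` yields an async iterable, tokens can also be collected into a full response. A small helper, shown here driven by a mock generator standing in for a live model:

```javascript
// Drain an async iterable of tokens into one string, invoking an
// optional callback per token (e.g. to update the UI as text arrives).
async function collectStream(stream, onToken = () => {}) {
  let text = '';
  for await (const token of stream) {
    onToken(token);
    text += token;
  }
  return text;
}

// Mock stream for illustration; replace with ai.chatStream({...}).
async function* mockStream() {
  yield 'Once '; yield 'upon '; yield 'a '; yield 'time';
}
```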

📦 Model Deployment

The Falcon-H1-90M model can be served from multiple sources:

HuggingFace CDN (Recommended)

```
https://huggingface.co/onnx-community/Falcon-H1-Tiny-90M-Instruct-ONNX/resolve/main/
```

GitHub Releases

```
https://github.com/aliasfoxkde/edgemind/releases/latest/download/
```

Cloudflare R2 (Production)

```
r2://edgemind-models/falcon-h1-90m/
```
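
How the SDK selects a model source is not documented here; if you host the model yourself, a fallback chain over sources like the ones above could be sketched as follows (the `probe` function is injected so the logic stays testable; in a real deployment it might issue a `HEAD` request):

```javascript
// Try candidate base URLs in order and return the first reachable one.
// `probe(url)` should resolve to true/false for each candidate.
async function resolveModelBase(candidates, probe) {
  for (const url of candidates) {
    if (await probe(url)) return url;
  }
  throw new Error('No model source reachable');
}
```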

🏗️ Architecture

```
┌───────────────────────────────────────────────┐
│              Browser Environment              │
├───────────────────────────────────────────────┤
│  ┌─────────────────────────────────────────┐  │
│  │         ONNX Runtime Web (WASM)         │  │
│  │  ┌───────────────────────────────────┐  │  │
│  │  │ Falcon-H1-Tiny-90M-Instruct ONNX  │  │  │
│  │  │              (~85MB)              │  │  │
│  │  └───────────────────────────────────┘  │  │
│  └─────────────────────────────────────────┘  │
│  ┌─────────────────────────────────────────┐  │
│  │         IndexedDB Cache (~97MB)         │  │
│  └─────────────────────────────────────────┘  │
└───────────────────────────────────────────────┘
```
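
The IndexedDB layer means the weights download once and later loads are served locally. The decision logic amounts to cache-or-fetch, sketched here with an injected store (a `Map` below; IndexedDB in the browser) rather than the SDK's actual `model-cache.ts` internals:

```javascript
// Return cached bytes if present; otherwise fetch, store, and return.
// `store` needs only get/set; `fetchBytes` downloads the model.
async function loadModel(key, store, fetchBytes) {
  const cached = await store.get(key);
  if (cached !== undefined) return cached;
  const bytes = await fetchBytes(key);
  await store.set(key, bytes);
  return bytes;
}
```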

📊 Performance

| Device | Tokens/sec |
| --- | --- |
| Desktop (modern) | 25-30 |
| Desktop (older) | 10-20 |
| Mobile (high-end) | 8-15 |
| Mobile (mid-range) | 3-8 |
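
These rates translate directly into wall-clock expectations; for example, a 150-token reply on a modern desktop takes roughly five to six seconds:

```javascript
// Wall-clock estimate for a reply of `tokens` tokens, given a
// [min, max] tokens/sec range for a device class from the table.
function estimateSeconds(tokens, [minTps, maxTps]) {
  return { best: tokens / maxTps, worst: tokens / minTps };
}

// Throughput ranges taken from the performance table above.
const DEVICE_TPS = {
  'desktop-modern': [25, 30],
  'mobile-midrange': [3, 8],
};
```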

📁 Project Structure

```text
edgemind/
├── api/                   # Next.js 15 application
│   ├── src/app/           # Pages (landing, playground, docs)
│   └── wrangler.toml      # Cloudflare config
│
├── packages/js/           # JavaScript SDK
│   ├── src/
│   │   ├── index.ts       # Main SDK
│   │   └── runtime/       # ONNX pipelines
│   │       ├── falcon-pipeline.ts
│   │       └── model-cache.ts
│   └── dist/              # Built output
│
├── docs/                  # Documentation
│   ├── PLANNING.md
│   └── AISTACK.md
│
└── README.md
```

🔧 Development

```bash
# Install dependencies (each command runs from the repo root;
# subshells keep the working directory from drifting)
(cd api && npm install)
(cd packages/js && npm install)

# Run development server
(cd api && npm run dev)

# Build and deploy for production
(cd api && npm run pages:build)
(cd api && npm run pages:deploy)
```

📜 License

GNU General Public License v3.0 (GPL-3.0) - see LICENSE for details.


Built with ❤️ for edge AI

About

Client-side, edge-deployable AI runtime and API platform with interactive docs, playground, SDK, and example applications.
