An AI-native computing environment where intelligence is the interface.
No apps. No desktop. No compiler. No toolchain. Just intent.
You speak. NOS generates ARM64 machine code, executes it in a sandbox, and persists the result. Every tool, every agent, every interface — conjured at runtime from nothing.
> "show me cpu temperature over the last hour"
⚡ Generating agent... 127 lines ARM64 → 508 bytes
✓ Assembled → executed in sandbox (seccomp-BPF, 256MB limit)
✓ Persisted as agents/cpu-temp-monitor (reusable)
┌─────────────────────────────────────┐
│ CPU Temperature — Last 60 min │
│ │
│ 62°C ┤ ╭──╮ │
│ 58°C ┤ ╭───╯ ╰──╮ │
│ 54°C ┤──╯ ╰──────── │
│ 50°C ┤ │
│ └────────────────────────── │
│ -60m -30m now │
└─────────────────────────────────────┘
That agent didn't exist before you asked. Now it does, forever.
NOS replaces the entire traditional software stack — applications, desktop environment, package manager, compiler toolchain — with a single AI inference loop running on an NVIDIA Jetson Orin Nano Super. The $249 board becomes a complete computer where:
- You describe intent in natural language
- AI generates ARM64 assembly (routed through a 4-tier model hierarchy)
- A thin assembler (lookup table, not a compiler) encodes mnemonics to machine code
- The sandbox executes it — mmap(RW) → write → mprotect(RX) → fork → run
- Results persist to a semantic store for instant reuse
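The "thin assembler" step above is a lookup table mapping mnemonics to fixed bit patterns. As an illustration (a sketch, not NOS's actual code), here is what one table entry looks like for `movz xN, #imm16` — the MOVZ base opcode with the destination register and immediate OR'd into their bit fields:

```rust
/// Hypothetical sketch of one lookup-table assembler entry: encode
/// `movz xN, #imm16` into its 4-byte ARM64 form. Base opcode
/// 0xD280_0000, destination register in bits 0..4, 16-bit immediate
/// in bits 5..20.
fn encode_movz(rd: u32, imm16: u16) -> [u8; 4] {
    assert!(rd < 31, "x0..x30 are valid destinations");
    let word: u32 = 0xD280_0000 | ((imm16 as u32) << 5) | rd;
    word.to_le_bytes() // ARM64 instructions are stored little-endian
}

fn main() {
    // `mov x0, #1` assembles to word 0xD2800020
    let bytes = encode_movz(0, 1);
    assert_eq!(bytes, [0x20, 0x00, 0x80, 0xD2]);
    println!("{:02X?}", bytes);
}
```

Because every instruction is a fixed 32-bit word with regular bit fields, encoding stays table-driven — no parsing, no optimization passes, no compiler.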
There are no pre-installed applications. No email client, no file manager, no terminal emulator. They don't exist until you need them — then they exist forever.
NOS is ~17,000 lines of Rust. Its only purpose is to initialize hardware, start the inference engine, and present the prompt. Everything after that is emergent.
The compiled code is the spark. Everything after is fire.
We acknowledge the paradox: an AI-native OS requires compiled code to bootstrap. Linux runs only as a GPU driver shim. The Rust binary is PID 1. Once inference starts, every subsequent artifact — every agent, every tool, every interface element — is AI-generated ARM64 machine code.
Layer 4: User-Facing ← 100% AI-generated (agents, UI, tools)
Layer 3: NOS Core (Rust) ← PID 1, inference dispatch, ABI, sandbox
Layer 2: Inference Engine ← llama.cpp + CUDA (local) / Anthropic API (cloud)
Layer 1: Linux Kernel ← GPU driver shim only
Layer 0: Firmware ← Vendor CBoot/UEFI
Not every question needs a $0.06 API call.
| Tier | Model | Latency | Cost | Use Case |
|---|---|---|---|---|
| T0 | Pattern match | <1ms | Free | Known commands, regex |
| T1 | Qwen2.5-3B (local GPU) | ~30ms/tok | Free | Simple queries, clarification |
| T2 | Claude Sonnet (cloud) | ~12ms/tok | ~$0.003 | Assembly codegen, moderate tasks |
| T3 | Claude Opus (cloud) | ~25ms/tok | ~$0.06 | Complex codegen, debugging |
Auto-escalation: T1 confidence < 0.7 → T2. Configurable in config/escalation.toml.
80% of interactions stay on-device at zero marginal cost.
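The escalation rule above (T0 for pattern matches, T1 unless local confidence falls below 0.7, then T2) can be sketched as a small routing function. Type and parameter names here are illustrative, not NOS's actual API; the real thresholds live in config/escalation.toml.

```rust
// Hypothetical sketch of the tier-routing rule. A similar check
// (task complexity, repeated T2 failures) would bump T2 → T3.
#[derive(Debug, PartialEq)]
enum Tier {
    T0, // pattern match, free
    T1, // local Qwen2.5-3B on the GPU
    T2, // Claude Sonnet in the cloud
    T3, // Claude Opus for the hardest codegen
}

fn route(matched_pattern: bool, local_confidence: f32) -> Tier {
    if matched_pattern {
        Tier::T0 // known command: answered on-device for free
    } else if local_confidence >= 0.7 {
        Tier::T1 // local model is confident enough
    } else {
        Tier::T2 // auto-escalate to cloud codegen
    }
}

fn main() {
    assert_eq!(route(true, 0.0), Tier::T0);
    assert_eq!(route(false, 0.9), Tier::T1);
    assert_eq!(route(false, 0.5), Tier::T2);
}
```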
Every agent runs in a forked child process with defense in depth:
- Seccomp-BPF — allowlist of ~15 syscalls (write, read, exit, mmap, mprotect, clock_gettime, brk, ...)
- W^X enforcement — memory is never simultaneously writable and executable
- Cgroup v2 / RLIMIT_AS — 256MB memory ceiling per agent
- Pipe-based ABI — agents interact with the kernel exclusively through FD 3 (request) and FD 4 (response). No raw syscalls to the outside world.
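The allowlist idea can be pictured as a membership check over aarch64 syscall numbers — though the real enforcement is a seccomp-BPF program installed in the child before the agent runs, so the kernel itself rejects anything off-list. A minimal sketch, assuming the syscalls named above:

```rust
// Illustrative allowlist check (real enforcement is in-kernel via
// seccomp-BPF, not a userspace test). Numbers are the standard
// aarch64 syscall numbers for the calls listed above.
const ALLOWED: &[u64] = &[
    63,  // read
    64,  // write
    93,  // exit
    113, // clock_gettime
    214, // brk
    222, // mmap
    226, // mprotect
];

fn is_allowed(nr: u64) -> bool {
    ALLOWED.contains(&nr)
}

fn main() {
    assert!(is_allowed(64));   // write: permitted
    assert!(!is_allowed(221)); // execve: denied — agents cannot spawn programs
}
```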
Agents don't link against libc. They communicate with the NOS kernel through a binary pipe protocol:
Request: [service_id : u32] [function_id : u32] [arg_len : u32] [args...]
Response: [status : u32] [data_len : u32] [data...]
Services:
| ID | Service | Capabilities |
|---|---|---|
| 0 | Kernel | Log, time, store, memory, user info, clipboard |
| 1 | Renderer | Framebuffer primitives — text, rect, pixel, circle, line, gradient |
| 2 | HTTP | GET/POST through kernel (agents never touch the network directly) |
| 3 | Input | Keyboard and mouse events |
| 4 | AI | Query local or cloud models from within an agent |
| 5 | Compositor | Panel layout, window management |
| 6 | Agent | Inter-agent messaging |
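The request framing above is three u32 header fields followed by the argument bytes. A minimal encoder sketch (byte order is an assumption — little-endian, matching ARM64's default):

```rust
// Sketch of the [service_id][function_id][arg_len][args...] request
// frame. Field layout is from the spec above; little-endian byte
// order is an assumption.
fn encode_request(service_id: u32, function_id: u32, args: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(12 + args.len());
    frame.extend_from_slice(&service_id.to_le_bytes());
    frame.extend_from_slice(&function_id.to_le_bytes());
    frame.extend_from_slice(&(args.len() as u32).to_le_bytes());
    frame.extend_from_slice(args);
    frame // an agent writes this to FD 3, then reads the reply on FD 4
}

fn main() {
    // Service 0 (Kernel), Function 1 (log), 2-byte payload
    let frame = encode_request(0, 1, b"hi");
    assert_eq!(frame.len(), 14);
    assert_eq!(&frame[12..], b"hi");
}
```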
Direct /dev/fb0 pixel writing. 32-bit ARGB. Up to 3840x2160. Raw evdev input. No X11. No Wayland. No display server. The AI renders directly to the screen through ABI Service 1.
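With a 32-bit ARGB framebuffer, a pixel write is just an offset calculation. A minimal sketch, assuming a tightly packed buffer (a real /dev/fb0 exposes a line stride that can exceed width × 4 and must be queried via ioctl):

```rust
// Minimal sketch of 32-bit ARGB framebuffer addressing, assuming a
// linear buffer with no row padding (the real device may pad rows).
fn put_pixel(fb: &mut [u32], width: usize, x: usize, y: usize, argb: u32) {
    fb[y * width + x] = argb;
}

fn main() {
    let (w, h) = (4, 3);
    let mut fb = vec![0u32; w * h];
    put_pixel(&mut fb, w, 2, 1, 0xFF_FF_00_00); // opaque red at (2, 1)
    assert_eq!(fb[1 * w + 2], 0xFFFF0000);
}
```

In NOS, agents never touch /dev/fb0 themselves — they issue Renderer primitives through ABI Service 1, and the kernel performs the actual framebuffer writes.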
Target: NVIDIA Jetson Orin Nano Super (~$249)
| Component | Spec |
|---|---|
| CPU | 6-core ARM Cortex-A78AE |
| GPU | 1024-core NVIDIA Ampere |
| Memory | 8GB unified (shared CPU/GPU) |
| Storage | NVMe SSD (semantic store) |
| Display | HDMI → framebuffer |
| Input | USB keyboard + mouse (evdev) |
Total BOM: ~$330 minimum (board + SSD + power + case).
# Clone
git clone https://github.com/evanbarke/NOS.git
cd NOS
# Build (native, for development)
cargo build --release
# OR
./build/build.sh native
# Cross-compile for Jetson Orin Nano
./build/build.sh aarch64
# Run
./target/release/nos # Interactive framebuffer mode
./target/release/nos --headless # REPL mode (stdin/stdout)
./target/release/nos --query "..." # Single query, then exit
~/.nos/api_key # Anthropic API key (or set NOS_API_KEY)
~/.nos/user.toml # User profile (name, theme, accent color)
~/.nos/identity/ # Ed25519 keypair for agent signing
config/escalation.toml # Model routing thresholds, GPU config
config/system_prompt.txt # NOS personality
config/agent_api.txt # ABI v0.4 specification
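For orientation, config/escalation.toml might look something like the fragment below — the key names are purely illustrative, not the file's actual schema; check the file shipped with the repo for the real one:

```toml
# Hypothetical shape for config/escalation.toml — key names are
# illustrative, not the actual schema.
[routing]
t1_confidence_threshold = 0.7   # below this, escalate T1 -> T2

[gpu]
local_model = "qwen2.5-3b"      # model served by llama.cpp on-device
```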
nos agents # List persisted agents
nos run <agent> # Execute an agent
nos run-interactive <agent> # Run with framebuffer + keyboard input
nos inspect <agent> # Show agent metadata
nos fix <agent> # Auto-repair via AI
nos rebuild <agent> # Regenerate from stored prompt
nos fork <agent> <change> # Fork and modify an agent
nos asm <file.s> <agent-name> # Assemble .s file to agent
nos store list|search|pull|push # Agent store operations
nos shell # Headless shell mode
Agents are pure ARM64 assembly. No libc, no runtime, no dependencies — just instructions and ABI calls.
; hello — Basic No-S greeting agent
_start:
stp x29, x30, [sp, #-16]!
sub sp, sp, #64
; Build "Hello from No-S!" byte by byte on the stack
mov w9, #0x48 ; 'H'
strb w9, [sp]
mov w9, #0x65 ; 'e'
strb w9, [sp, #1]
; ... (each character placed individually)
; write(stdout, msg, 17)
mov x0, #1 ; fd = stdout
mov x1, sp ; buf = stack
mov x2, #17 ; len
mov x8, #64 ; SYS_write
svc #0
; Log via ABI (Service 0, Function 1)
mov x10, #0 ; service = kernel
mov x11, #1 ; function = LOG
mov x12, sp ; payload
mov x13, #16 ; payload length
bl abi_call
; Exit
mov x0, #0
mov x8, #93 ; SYS_exit
svc #0
This is what the AI generates. Not pseudocode. Not an intermediate representation. Raw ARM64 that gets assembled to bytes and executed in a sandbox.
NOS ships with 13 system agents that form the desktop environment — all ARM64 assembly, all generated through the same pipeline:
| Agent | Purpose |
|---|---|
| desktop.s | Main desktop compositor |
| statusbar.s | Top status bar (time, system info) |
| dock.s | Bottom taskbar with agent chips |
| launcher.s | Application launcher |
| settings.s | System preferences |
| sysinfo.s | System information display |
| task-manager.s | Process monitor |
| monitor.s | System monitoring dashboard |
| clock.s | Time and date display |
| store-browser.s | Semantic store explorer |
| agent-inspector.s | Agent metadata viewer |
| setup.s | First-run setup wizard |
| hello.s | Greeting demo |
~127 KB of ARM64 assembly total. Every one of these was AI-generated.
cargo test --release
Tests verify the full AI-to-binary pipeline: ARM64 instruction encoding → mmap(PROT_EXEC) → fork → execute → capture exit code. Real machine code running in a real sandbox.
Current phase: 3 — Persistent Interactive Agents
| Phase | Status | Description |
|---|---|---|
| 1 | ✓ | Boot to prompt, AI generates + executes ARM64 |
| 2 | ✓ | Framebuffer UI, compositor, input handling |
| 3 | In progress | Agent store, versioning, inter-agent messaging, federation |
| 4 | Planned | Minimal boot image (bypass Linux userspace entirely) |
| 5 | Planned | Multi-device agent migration |
Deliberately minimal. The entire system compiles from ten crates:
libc, nix, serde, serde_json, toml, rustls, webpki-roots,
log, ed25519-dalek, rand
No web framework. No ORM. No AI library. No GUI toolkit. The release binary is size-optimized (opt-level = "s"), LTO'd, and stripped.
- SPEC.md — Full technical specification (architecture, security model, bill of materials)
- PHASE3.md — Phase 3 agent ecosystem design
- config/agent_api.txt — ABI v0.4 specification (pipe protocol, services, syscall allowlist)
- CLAUDE.md — Development guide for contributors
See LICENSE for details.