
NOS — No Operating System

An AI-native computing environment where intelligence is the interface.

No apps. No desktop. No compiler. No toolchain. Just intent.

You speak. NOS generates ARM64 machine code, executes it in a sandbox, and persists the result. Every tool, every agent, every interface — conjured at runtime from nothing.

> "show me cpu temperature over the last hour"

⚡ Generating agent... 127 lines ARM64 → 508 bytes
✓ Assembled → executed in sandbox (seccomp-BPF, 256MB limit)
✓ Persisted as agents/cpu-temp-monitor (reusable)

┌─────────────────────────────────────┐
│ CPU Temperature — Last 60 min       │
│                                     │
│ 62°C ┤      ╭──╮                    │
│ 58°C ┤  ╭───╯  ╰──╮                 │
│ 54°C ┤──╯         ╰────────         │
│ 50°C ┤                              │
│      └──────────────────────────    │
│       -60m    -30m        now       │
└─────────────────────────────────────┘

That agent didn't exist before you asked. Now it does, forever.


What is this?

NOS replaces the entire traditional software stack — applications, desktop environment, package manager, compiler toolchain — with a single AI inference loop running on an NVIDIA Jetson Orin Nano Super. The $249 board becomes a complete computer where:

  1. You describe intent in natural language
  2. AI generates ARM64 assembly (routed through a 4-tier model hierarchy)
  3. A thin assembler (lookup table, not a compiler) encodes mnemonics to machine code
  4. The sandbox executes it: mmap(RW) → write → mprotect(RX) → fork → run
  5. Results persist to a semantic store for instant reuse
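Step 3's "lookup table, not a compiler" idea can be pictured as a direct mnemonic-to-bits mapping. A minimal sketch in Rust — the real assembler's table is much larger, and the `encode` function and its shape are assumptions for illustration:

```rust
// Hypothetical mini-encoder: each mnemonic maps to a fixed 32-bit template
// with operand bits OR'd in. Covers just enough for an exit sequence.
fn encode(mnemonic: &str, rd: u32, imm: u32) -> u32 {
    match mnemonic {
        // MOVZ Xd, #imm16 — template 0xD2800000, imm16 in bits 5..21, Rd in bits 0..5
        "mov" => 0xD280_0000 | (imm << 5) | rd,
        // SVC #imm16 — template 0xD4000001, imm16 in bits 5..21
        "svc" => 0xD400_0001 | (imm << 5),
        _ => panic!("unknown mnemonic: {mnemonic}"),
    }
}

fn main() {
    // The exit sequence from a typical agent: mov x0, #0; mov x8, #93; svc #0
    assert_eq!(encode("mov", 0, 0), 0xD280_0000);
    assert_eq!(encode("mov", 8, 93), 0xD280_0BA8);
    assert_eq!(encode("svc", 0, 0), 0xD400_0001);
    println!("exit sequence encoded");
}
```

No parsing, no IR, no optimization passes — each line of assembly becomes exactly one 32-bit word.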

There are no pre-installed applications. No email client, no file manager, no terminal emulator. They don't exist until you need them — then they exist forever.

The Bootstrap Paradox

NOS is ~17,000 lines of Rust. Its only purpose is to initialize hardware, start the inference engine, and present the prompt. Everything after that is emergent.

The compiled code is the spark. Everything after is fire.

We acknowledge the paradox: an AI-native OS requires compiled code to bootstrap. Linux runs only as a GPU driver shim. The Rust binary is PID 1. Once inference starts, every subsequent artifact — every agent, every tool, every interface element — is AI-generated ARM64 machine code.

Architecture

Layer 4: User-Facing          ← 100% AI-generated (agents, UI, tools)
Layer 3: NOS Core (Rust)      ← PID 1, inference dispatch, ABI, sandbox
Layer 2: Inference Engine     ← llama.cpp + CUDA (local) / Anthropic API (cloud)
Layer 1: Linux Kernel         ← GPU driver shim only
Layer 0: Firmware             ← Vendor CBoot/UEFI

4-Tier Model Routing

Not every question needs a $0.06 API call.

Tier  Model                   Latency    Cost     Use Case
T0    Pattern match           <1ms       Free     Known commands, regex
T1    Qwen2.5-3B (local GPU)  ~30ms/tok  Free     Simple queries, clarification
T2    Claude Sonnet (cloud)   ~12ms/tok  ~$0.003  Assembly codegen, moderate tasks
T3    Claude Opus (cloud)     ~25ms/tok  ~$0.06   Complex codegen, debugging

Auto-escalation: T1 confidence < 0.7 → T2. Configurable in config/escalation.toml.
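A sketch of what that threshold might look like in config/escalation.toml — the key names here are assumptions for illustration, not the shipped schema:

```toml
# Hypothetical escalation config (see the real config/escalation.toml
# for the authoritative key names).
[routing]
t1_confidence_threshold = 0.7   # below this, escalate T1 → T2
```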

80% of interactions stay on-device at zero marginal cost.

Execution Sandbox

Every agent runs in a forked child process with defense in depth:

  • Seccomp-BPF — allowlist of ~15 syscalls (write, read, exit, mmap, mprotect, clock_gettime, brk, ...)
  • W^X enforcement — memory is never simultaneously writable and executable
  • Cgroup v2 / RLIMIT_AS — 256MB memory ceiling per agent
  • Pipe-based ABI — agents interact with the kernel exclusively through FD 3 (request) and FD 4 (response). No raw syscalls to the outside world.

Agent ABI v0.4

Agents don't link against libc. They communicate with the NOS kernel through a binary pipe protocol:

Request:  [service_id : u32] [function_id : u32] [arg_len : u32] [args...]
Response: [status : u32] [data_len : u32] [data...]
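A hypothetical encoder/decoder for these frames, assuming the u32 fields are little-endian (the authoritative layout lives in config/agent_api.txt):

```rust
/// Build a request frame: [service_id][function_id][arg_len][args...].
fn encode_request(service_id: u32, function_id: u32, args: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(12 + args.len());
    frame.extend_from_slice(&service_id.to_le_bytes());
    frame.extend_from_slice(&function_id.to_le_bytes());
    frame.extend_from_slice(&(args.len() as u32).to_le_bytes());
    frame.extend_from_slice(args);
    frame
}

/// Split a response frame into (status, data).
fn decode_response(frame: &[u8]) -> (u32, &[u8]) {
    let status = u32::from_le_bytes(frame[0..4].try_into().unwrap());
    let data_len = u32::from_le_bytes(frame[4..8].try_into().unwrap()) as usize;
    (status, &frame[8..8 + data_len])
}

fn main() {
    // Service 0 (Kernel), Function 1 (LOG) with a short payload.
    let req = encode_request(0, 1, b"hello");
    assert_eq!(req.len(), 12 + 5);

    // A success response carrying "ok".
    let (status, data) = decode_response(&[0, 0, 0, 0, 2, 0, 0, 0, b'o', b'k']);
    assert_eq!(status, 0);
    assert_eq!(data, b"ok");
}
```

An agent writes the request bytes to FD 3 and reads the response from FD 4; the kernel side does the reverse.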

Services:

ID  Service     Capabilities
0   Kernel      Log, time, store, memory, user info, clipboard
1   Renderer    Framebuffer primitives — text, rect, pixel, circle, line, gradient
2   HTTP        GET/POST through kernel (agents never touch the network directly)
3   Input       Keyboard and mouse events
4   AI          Query local or cloud models from within an agent
5   Compositor  Panel layout, window management
6   Agent       Inter-agent messaging

Framebuffer UI

Direct /dev/fb0 pixel writing. 32-bit ARGB. Up to 3840x2160. Raw evdev input. No X11. No Wayland. No display server. The AI renders directly to the screen through ABI Service 1.
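Pixel addressing in that format reduces to simple arithmetic. A sketch, assuming a little-endian ARGB layout and a `stride` equal to the framebuffer's line length in bytes:

```rust
/// Pack an ARGB pixel into the u32 a 32-bit framebuffer expects.
fn argb(a: u8, r: u8, g: u8, b: u8) -> u32 {
    (a as u32) << 24 | (r as u32) << 16 | (g as u32) << 8 | b as u32
}

/// Byte offset of pixel (x, y) in the mapped framebuffer.
fn pixel_offset(x: usize, y: usize, stride: usize) -> usize {
    y * stride + x * 4 // 4 bytes per pixel
}

fn main() {
    // Opaque red, and the offset of pixel (10, 2) on a 3840-wide display.
    assert_eq!(argb(0xFF, 0xFF, 0x00, 0x00), 0xFFFF_0000);
    assert_eq!(pixel_offset(10, 2, 3840 * 4), 2 * 15360 + 40);
}
```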

Hardware

Target: NVIDIA Jetson Orin Nano Super (~$249)

Component  Spec
CPU        6-core ARM Cortex-A78AE
GPU        1024-core NVIDIA Ampere
Memory     8GB unified (shared CPU/GPU)
Storage    NVMe SSD (semantic store)
Display    HDMI → framebuffer
Input      USB keyboard + mouse (evdev)

Total BOM: ~$330 minimum (board + SSD + power + case).

Quick Start

# Clone
git clone https://github.com/evanbarke/NOS.git
cd NOS

# Build (native, for development)
cargo build --release
# OR
./build/build.sh native

# Cross-compile for Jetson Orin Nano
./build/build.sh aarch64

# Run
./target/release/nos                    # Interactive framebuffer mode
./target/release/nos --headless         # REPL mode (stdin/stdout)
./target/release/nos --query "..."      # Single query, then exit

Configuration

~/.nos/api_key              # Anthropic API key (or set NOS_API_KEY)
~/.nos/user.toml            # User profile (name, theme, accent color)
~/.nos/identity/            # Ed25519 keypair for agent signing
config/escalation.toml      # Model routing thresholds, GPU config
config/system_prompt.txt    # NOS personality
config/agent_api.txt        # ABI v0.4 specification

CLI

nos agents                          # List persisted agents
nos run <agent>                     # Execute an agent
nos run-interactive <agent>         # Run with framebuffer + keyboard input
nos inspect <agent>                 # Show agent metadata
nos fix <agent>                     # Auto-repair via AI
nos rebuild <agent>                 # Regenerate from stored prompt
nos fork <agent> <change>           # Fork and modify an agent
nos asm <file.s> <agent-name>       # Assemble .s file to agent
nos store list|search|pull|push     # Agent store operations
nos shell                           # Headless shell mode

What an Agent Looks Like

Agents are pure ARM64 assembly. No libc, no runtime, no dependencies — just instructions and ABI calls.

; hello — Basic No-S greeting agent
_start:
    stp x29, x30, [sp, #-16]!
    sub sp, sp, #64

    ; Build "Hello from No-S!" byte by byte on the stack
    mov w9, #0x48              ; 'H'
    strb w9, [sp]
    mov w9, #0x65              ; 'e'
    strb w9, [sp, #1]
    ; ... (each character placed individually)

    ; write(stdout, msg, 17)
    mov x0, #1                 ; fd = stdout
    mov x1, sp                 ; buf = stack
    mov x2, #17                ; len
    mov x8, #64                ; SYS_write
    svc #0

    ; Log via ABI (Service 0, Function 1)
    mov x10, #0                ; service = kernel
    mov x11, #1                ; function = LOG
    mov x12, sp                ; payload
    mov x13, #16               ; payload length
    bl abi_call

    ; Exit
    mov x0, #0
    mov x8, #93                ; SYS_exit
    svc #0

This is what the AI generates. Not pseudocode. Not an intermediate representation. Raw ARM64 that gets assembled to bytes and executed in a sandbox.

System Agents

NOS ships with 13 system agents that form the desktop environment — all ARM64 assembly, all generated through the same pipeline:

Agent              Purpose
desktop.s          Main desktop compositor
statusbar.s        Top status bar (time, system info)
dock.s             Bottom taskbar with agent chips
launcher.s         Application launcher
settings.s         System preferences
sysinfo.s          System information display
task-manager.s     Process monitor
monitor.s          System monitoring dashboard
clock.s            Time and date display
store-browser.s    Semantic store explorer
agent-inspector.s  Agent metadata viewer
setup.s            First-run setup wizard
hello.s            Greeting demo

~127 KB of ARM64 assembly total. Every one of these was AI-generated.

Testing

cargo test --release

Tests verify the full AI-to-binary pipeline: ARM64 instruction encoding → mmap(PROT_EXEC) → fork → execute → capture exit code. Real machine code running in a real sandbox.

Project Status

Current phase: 3 — Persistent Interactive Agents

Phase  Status       Description
1      Complete     Boot to prompt, AI generates + executes ARM64
2      Complete     Framebuffer UI, compositor, input handling
3      In progress  Agent store, versioning, inter-agent messaging, federation
4      Planned      Minimal boot image (bypass Linux userspace entirely)
5      Planned      Multi-device agent migration

Dependencies

Deliberately minimal. The entire system compiles from just 10 crates:

libc, nix, serde, serde_json, toml, rustls, webpki-roots,
log, ed25519-dalek, rand

No web framework. No ORM. No AI library. No GUI toolkit. The release binary is size-optimized (opt-level = "s"), LTO'd, and stripped.

Documentation

  • SPEC.md — Full technical specification (architecture, security model, bill of materials)
  • PHASE3.md — Phase 3 agent ecosystem design
  • config/agent_api.txt — ABI v0.4 specification (pipe protocol, services, syscall allowlist)
  • CLAUDE.md — Development guide for contributors

License

See LICENSE for details.
