
OpenJarvis

Personal AI, On Personal Devices.



Documentation · Project Site · Leaderboard · Roadmap

Why OpenJarvis?

Personal AI agents are exploding in popularity, but nearly all of them still route intelligence through cloud APIs. Your "personal" AI continues to depend on someone else's server. At the same time, our Intelligence Per Watt research showed that local language models already handle 88.7% of single-turn chat and reasoning queries, with intelligence efficiency improving 5.3× from 2023 to 2025. The models and hardware are increasingly ready. What has been missing is the software stack to make local-first personal AI practical.

OpenJarvis is that stack. It is an opinionated framework for local-first personal AI, built around three core ideas: shared primitives for building on-device agents; evaluations that treat energy, FLOPs, latency, and dollar cost as first-class constraints alongside accuracy; and a learning loop that improves models using local trace data. The goal is simple: make it possible to build personal AI agents that run locally by default, calling the cloud only when truly necessary. OpenJarvis aims to be both a research platform and a production foundation for local AI, in the spirit of PyTorch.

Installation

git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync                           # core framework
uv sync --extra server             # + FastAPI server

# Build the Rust extension (requires Rust: https://rustup.rs/)
uv run maturin develop -m rust/crates/openjarvis-python/Cargo.toml

On Python 3.14+, set PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1 before running the maturin command.
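The Python 3.14+ note above amounts to one extra environment variable before the build; a minimal sketch (the maturin invocation is the same one shown in the build step):

```shell
# PyO3 pins the stable ABI to the Python versions it has certified;
# this flag opts into forward compatibility on newer interpreters.
export PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1
# ...then re-run the build:
#   uv run maturin develop -m rust/crates/openjarvis-python/Cargo.toml
```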

You also need a local inference backend: Ollama, vLLM, SGLang, or llama.cpp. Alternatively, use the cloud engine with OpenAI, Anthropic, Google Gemini, OpenRouter, or MiniMax by setting the corresponding API key environment variable.
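For the cloud engine, configuration is just an environment variable per provider. The variable names below are the conventional ones for each provider and are an assumption here; check the project documentation for the exact names OpenJarvis reads.

```shell
# Hypothetical example: set the key for whichever provider you use.
# The "sk-..." values are placeholders, not real keys.
export OPENAI_API_KEY="sk-..."          # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."   # Anthropic
export OPENROUTER_API_KEY="sk-or-..."   # OpenRouter
```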

Quick Start

# 1. Install and detect hardware
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync
uv run jarvis init

# 2. Start Ollama and pull a model
curl -fsSL https://ollama.com/install.sh | sh
ollama serve &
ollama pull qwen3:8b

# 3. Ask a question
uv run jarvis ask "What is the capital of France?"

jarvis init auto-detects your hardware and recommends the best engine. Run uv run jarvis doctor at any time to diagnose issues.

Full documentation, including Docker deployment, cloud engines, development setup, and tutorials, lives at open-jarvis.github.io/OpenJarvis.

Contributing

We welcome contributions! See the Contributing Guide for incentives, contribution types, and the PR process.

Quick start for contributors:

git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync --extra dev
uv run pre-commit install
uv run pytest tests/ -v

Browse the Roadmap for areas where help is needed. Comment "take" on any issue to get auto-assigned.

About

OpenJarvis is part of Intelligence Per Watt, a research initiative studying the efficiency of on-device AI systems. The project is developed at Hazy Research and the Scaling Intelligence Lab at Stanford SAIL.

Sponsors

Laude Institute · Stanford Marlowe · Google Cloud Platform · Lambda Labs · Ollama · IBM Research · Stanford HAI

Citation

@misc{saadfalcon2026openjarvis,
  title={OpenJarvis: Personal AI, On Personal Devices},
  author={Jon Saad-Falcon and Avanika Narayan and Herumb Shandilya and Hakki Orhun Akengin and Robby Manihani and Gabriel Bo and John Hennessy and Christopher R\'{e} and Azalia Mirhoseini},
  year={2026},
  howpublished={\url{https://scalingintelligence.stanford.edu/blogs/openjarvis/}},
}

License

Apache 2.0
