Personal AI agents are exploding in popularity, but nearly all of them still route intelligence through cloud APIs. Your "personal" AI continues to depend on someone else's server. At the same time, our Intelligence Per Watt research showed that local language models already handle 88.7% of single-turn chat and reasoning queries, with intelligence efficiency improving 5.3× from 2023 to 2025. The models and hardware are increasingly ready. What has been missing is the software stack to make local-first personal AI practical.
OpenJarvis is that stack. It is an opinionated framework for local-first personal AI, built around three core ideas: shared primitives for building on-device agents; evaluations that treat energy, FLOPs, latency, and dollar cost as first-class constraints alongside accuracy; and a learning loop that improves models using local trace data. The goal is simple: make it possible to build personal AI agents that run locally by default, calling the cloud only when truly necessary. OpenJarvis aims to be both a research platform and a production foundation for local AI, in the spirit of PyTorch.
| Tool | Install |
|---|---|
| Python 3.10+ | python.org |
| uv (Python package manager) | `curl -LsSf https://astral.sh/uv/install.sh \| sh`, or `brew install uv` on macOS |
| Rust | `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \| sh` |
| Git | git-scm.com, or `brew install git` on macOS |
macOS users: see the full macOS Installation Guide for a step-by-step walkthrough including Homebrew setup.
```shell
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis

uv sync                  # core framework
uv sync --extra server   # + FastAPI server

# Build the Rust extension
uv run maturin develop -m rust/crates/openjarvis-python/Cargo.toml
```

Python 3.14+: set `PYO3_USE_ABI3_FORWARD_COMPATIBILITY=1` before the `maturin` command.
You also need a local inference backend: Ollama, vLLM, SGLang, or llama.cpp. Alternatively, use the cloud engine with OpenAI, Anthropic, Google Gemini, OpenRouter, or MiniMax by setting the corresponding API key environment variable.
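To use a cloud engine, export the matching provider key before running `jarvis`. A minimal sketch — the variable names below follow each provider's common convention, but confirm the exact names OpenJarvis reads in its documentation:

```shell
# Cloud engine setup: export the provider's API key (placeholders shown).
# Names follow common provider conventions; check OpenJarvis's docs for
# the exact variables it reads.
export OPENAI_API_KEY="sk-..."          # OpenAI
export ANTHROPIC_API_KEY="sk-ant-..."   # Anthropic
export GEMINI_API_KEY="..."             # Google Gemini
export OPENROUTER_API_KEY="sk-or-..."   # OpenRouter
```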
```shell
# 1. Install and detect hardware
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync
uv run jarvis init

# 2. Start Ollama and pull a model
curl -fsSL https://ollama.com/install.sh | sh
ollama serve &
ollama pull qwen3:8b

# 3. Ask a question
uv run jarvis ask "What is the capital of France?"
```

`jarvis init` auto-detects your hardware and recommends the best engine. Run `uv run jarvis doctor` at any time to diagnose issues.
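The kind of hardware-to-engine mapping `jarvis init` performs can be pictured roughly as follows — the specific recommendations here are illustrative, not OpenJarvis's actual detection logic:

```shell
# Illustrative sketch only: the real `jarvis init` logic may differ.
recommend_engine() {
  os_arch="$(uname -s)-$(uname -m)"
  if [ "$os_arch" = "Darwin-arm64" ]; then
    echo "ollama"      # Apple Silicon: Metal-accelerated local serving
  elif command -v nvidia-smi >/dev/null 2>&1; then
    echo "vllm"        # NVIDIA GPU present: high-throughput serving
  else
    echo "llama.cpp"   # CPU-only fallback
  fi
}

recommend_engine   # prints one suggested engine for this machine
```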
Install any preset with one command:
```shell
jarvis init --preset morning-digest-mac   # or any preset below
```

| Preset | Use Case | What it does |
|---|---|---|
| `morning-digest-mac` | Daily Briefing (Mac) | Spoken briefing from email, calendar, health, news with Jarvis voice |
| `morning-digest-linux` | Daily Briefing (Linux) | Same, with vLLM support for GPU servers |
| `morning-digest-minimal` | Daily Briefing (minimal) | Just Gmail + Calendar, runs on any machine |
| `deep-research` | Research Assistant | Multi-hop research across indexed docs with citations |
| `code-assistant` | Code Companion | Agent with code execution, file I/O, and shell access |
| `scheduled-monitor` | Persistent Monitor | Stateful agent that runs on a schedule with memory |
| `chat-simple` | Simple Chat | Lightweight conversation, no tools needed |
```shell
# Example: Morning Digest on Mac
jarvis init --preset morning-digest-mac
jarvis connect gdrive   # one OAuth flow covers Gmail, Calendar, Tasks
jarvis digest --fresh   # generate and play your first briefing

# Example: Deep Research
jarvis init --preset deep-research
jarvis memory index ./docs/   # index your documents
jarvis ask "Summarize all emails about Project X"
```

Skills teach agents how to better use tools and improve their reasoning. Every skill is a tool — agents discover them from a catalog and invoke them on demand.
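A skill is essentially a directory with a `SKILL.md` whose YAML frontmatter the catalog indexes. A minimal sketch following the agentskills.io layout — the frontmatter fields mirror that standard, but the specific skill shown here is hypothetical:

```markdown
---
name: code-explainer
description: Explain what a snippet of code does, step by step, in plain English.
---

# Code Explainer

When asked to explain code:
1. Identify the language and overall purpose.
2. Walk through the code block by block.
3. Note any bugs or pitfalls you spot.
```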
```shell
# Install skills from public sources
jarvis skill install hermes:arxiv
jarvis skill sync hermes --category research

# Use skills with any agent
jarvis ask "Use the code-explainer skill to explain this Python code: for i in range(5): print(i*2)"

# Optimize skills from your trace history
jarvis optimize skills --policy dspy

# Benchmark the impact
jarvis bench skills --max-samples 5 --seeds 42
```

Import from Hermes Agent (~150 skills), OpenClaw (~13,700 community skills), or any GitHub repo. Skills follow the agentskills.io open standard.
See the Skills User Guide and Skills Tutorial for details.
| Agent | Type | What it does |
|---|---|---|
| `morning_digest` | Scheduled | Daily briefing from email, calendar, health, news — with TTS audio |
| `deep_research` | On-demand | Multi-hop research with citations across web and local docs |
| `monitor_operative` | Continuous | Long-horizon monitoring with memory, compression, and retrieval |
| `orchestrator` | On-demand | Multi-turn reasoning with automatic tool selection |
| `native_react` | On-demand | ReAct (Thought-Action-Observation) loop agent |
| `operative` | Continuous | Persistent autonomous agent with state management |
| `native_openhands` | On-demand | CodeAct — generates and executes Python code |
| `simple` | On-demand | Single-turn chat, no tools |
See the User Guide and Tutorials for detailed setup instructions.
Full documentation — including Docker deployment, cloud engines, development setup, and tutorials — at open-jarvis.github.io/OpenJarvis.
We welcome contributions! See the Contributing Guide for incentives, contribution types, and the PR process.
Quick start for contributors:
```shell
git clone https://github.com/open-jarvis/OpenJarvis.git
cd OpenJarvis
uv sync --extra dev
uv run pre-commit install
uv run pytest tests/ -v
```

Browse the Roadmap for areas where help is needed. Comment "take" on any issue to get auto-assigned.
OpenJarvis is part of Intelligence Per Watt, a research initiative studying the efficiency of on-device AI systems. The project is developed at Hazy Research and the Scaling Intelligence Lab at Stanford SAIL.
Laude Institute • Stanford Marlowe • Google Cloud Platform • Lambda Labs • Ollama • IBM Research • Stanford HAI
```bibtex
@misc{saadfalcon2026openjarvis,
  title={OpenJarvis: Personal AI, On Personal Devices},
  author={Jon Saad-Falcon and Avanika Narayan and Herumb Shandilya and Hakki Orhun Akengin and Robby Manihani and Gabriel Bo and John Hennessy and Christopher R\'{e} and Azalia Mirhoseini},
  year={2026},
  howpublished={\url{https://scalingintelligence.stanford.edu/blogs/openjarvis/}},
}
```