Open-source runtime security monitoring and control for AI agents.
Adrian is an open-source, AARM-aligned runtime security monitoring and control engine for AI agents. It analyses both agent activity logs (tool calls, actions, outputs) and reasoning traces to detect malicious, misaligned, or out-of-remit behaviour, and can optionally intervene in-flight. The Python SDK attaches to LangChain agents with a two-line install.
Documentation • Dashboard • Discord • LinkedIn
ADRIAN_LAUNCH.mp4
Most agent monitoring stops at activity logs: APIs, MCP, DB interactions, tool calls, etc. Adrian enhances this by also analysing the agent's reasoning: understanding why it took an action, under what context, and what it is planning on doing next. Research by OpenAI and DeepMind found that combining behaviour and reasoning analysis like this boosts detection accuracy by around 35% and is 4x more likely to catch nuanced attacks. Adrian is the first tool to put that into a deployable security control, and it is free, forever.
Furthermore, most tools in this space are lightweight machine-learning classifiers trained to spot patterns that match their training data (usually labelled prompt-injection datasets). Adrian takes a different approach: it uses world models that understand risk through reasoning, the way a human does. It correlates behaviours across a session, holds a working understanding of what the agent is meant to be doing, and assesses each new action against that. The detection logic is closer to a human reviewer's than to pattern matching against examples it has been trained to spot. For example, if your e-commerce agent starts resetting user passwords, that behaviour will not appear in any training dataset, but it is exactly the kind of risk you should be flagging. This is where you get the meaningful security uplift that lets you use agentic AI with confidence, and it is exactly why we made Adrian.
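To make the idea concrete, here is a deliberately tiny sketch of session-correlated, remit-aware checking in plain Python. Adrian's real engine reasons with a world model rather than keyword rules, and every name below is hypothetical; this only illustrates the shape of the approach (hold a remit, keep session history, assess each action and its reasoning against both):

```python
# Illustrative toy only: a stateful remit check that correlates actions
# across a session. Adrian's real engine uses world-model reasoning, not
# keyword rules; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SessionMonitor:
    remit: set[str]                                # tools the agent is meant to use
    history: list[str] = field(default_factory=list)

    def assess(self, tool: str, reasoning: str) -> str:
        """Return a verdict for one action, in the context of the session so far."""
        self.history.append(tool)
        if tool not in self.remit:
            # e.g. an e-commerce agent suddenly resetting passwords
            return "block"
        if "ignore previous instructions" in reasoning.lower():
            return "review"                        # reasoning contradicts the stated task
        return "allow"

monitor = SessionMonitor(remit={"search_products", "create_order"})
print(monitor.assess("search_products", "Finding the item the user asked for"))   # prints "allow"
print(monitor.assess("reset_password", "User asked me to reset another account")) # prints "block"
```

The point of the statefulness is that the verdict depends on the whole session, not on whether one string matches a known attack pattern.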
The fastest way to try Adrian is the managed dashboard at app.adrian.secureagentics.ai. Sign-up takes a minute and there is nothing to install beyond the SDK. To run Adrian on your own infrastructure instead, jump to Self-hosting below.
1. Sign up at app.adrian.secureagentics.ai and generate an API key.

2. Configure Adrian for your agent and your preferences (remit of your agent, audit vs block mode, alerting channels, accepted behaviours vs known risks).

3. Install the SDK:

   ```shell
   pip install adrian-sdk
   ```

4. Install the LangChain provider for your agent's model (the SDK auto-instruments LangChain / LangGraph; pick whichever provider matches your model):

   ```shell
   pip install langgraph langchain-openai  # or langchain-anthropic, etc.
   ```

   Last verified with `langchain-core==1.3.3`, `langgraph==1.1.2`, `langchain-openai==1.2.1` (2026-05-08).

5. Wrap your LangChain agent. The whole integration is a two-line attach:

   ```python
   import adrian

   adrian.init(api_key="adr_live_...")

   # Your LangChain / LangGraph code runs normally; every call is captured.
   ```

   Full runnable async example at `examples/quickstart.py`, with a demo agent you can swap for your own.

6. Run your agent. Events appear in the dashboard within seconds, classified by severity.
Full guide: Quickstart.
Adrian supports entirely offline, data-sovereign deployments using just a handful of Docker commands. This repository ships everything needed to run the entire Adrian stack on a single host: the Go backend (WebSocket + dashboard API + AI engine), the Next.js dashboard, the Python SDK, and a Llama.cpp container that serves a local Gemma model. No managed cloud, no telemetry leaving the box.
Hardware support: Tested on NVIDIA GPUs with Gemma 4 (E2B / E4B), the model the bootstrap picker downloads by default. CPU-only is technically possible but will be slow on real workloads with models of this size.
- A host with Docker + Docker Compose v2.
- An NVIDIA GPU with recent CUDA driver and the NVIDIA Container Toolkit installed (for the bundled Llama.cpp classifier). ~10 GB free disk for the model.
1. Clone:

   ```shell
   git clone https://github.com/secureagentics/Adrian
   cd Adrian
   ```

2. Run bootstrap. Creates `data/adrian.db`, applies migrations, generates a random admin password, and writes `.env`. With no `--gguf` flag, the bootstrap interactively offers to download the recommended on-device classifier (Gemma 4 E4B, ~5 GB, or E2B, ~3 GB) into `./models/`.

   ```shell
   # Default: interactive picker downloads Gemma 4 E4B / E2B
   docker compose --profile setup run --rm setup bootstrap

   # Already have a GGUF under ./models/? Pass it by name
   docker compose --profile setup run --rm setup bootstrap \
     --gguf my-model.gguf
   ```

3. Start the stack.

   ```shell
   docker compose --profile llm up -d
   ```

4. Open the dashboard. Browse to `http://localhost:3000`. Sign in with `admin@localhost` plus the password the bootstrap printed; you'll be prompted to set a new one. Create an SDK API key and configure Adrian to monitor your specific agent from Settings → Agents → New key.

5. Wrap your agent. The SDK lives in-tree under `sdk/`. Install it into a fresh `.venv` via the bundled Make target (uses uv):

   ```shell
   make sdk-install
   source .venv/bin/activate
   ```

   Install the LangChain provider for your agent's model into the same venv:

   ```shell
   uv pip install langgraph langchain-openai  # or your chosen langchain provider
   ```

   Last verified with `langchain-core==1.3.3`, `langgraph==1.1.2`, `langchain-openai==1.2.1` (2026-05-08). Use the same `adrian.init` snippet as in the Quickstart above. The SDK defaults to `ws://localhost:8080/ws`, so a self-hosted setup needs nothing more than the API key; drop the `ws_url=` line.
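For orientation, the SDK-to-backend link is a WebSocket carrying JSON events. The envelope below is purely illustrative (the field names are assumptions, not Adrian's actual wire format), but it shows the kind of activity-plus-reasoning record the backend classifies:

```python
import json

# Hypothetical event envelope; Adrian's real wire format may differ.
event = {
    "session_id": "sess_01",
    "kind": "tool_call",
    "tool": "create_order",
    "arguments": {"sku": "ABC-123", "qty": 1},
    "reasoning": "User confirmed the basket, placing the order now.",
    "ts": "2026-05-08T12:00:00Z",
}

frame = json.dumps(event)        # the SDK would send one text frame per event
decoded = json.loads(frame)
print(decoded["kind"])           # prints "tool_call"
```

Capturing the `reasoning` field alongside the action is what lets the classifier judge the action in context rather than in isolation.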
To reset the admin password, change the model, and much more, check out the dedicated Docs site.
```mermaid
flowchart TD
    Agent[Agent runtime] --> SDK[Adrian SDK]
    SDK --> Backend[Adrian backend]
    Backend --> Classifier[Classifier model]
    Classifier --> Verdict{Verdict}
    Verdict --> Control[Control plane]
    Verdict -.->|"Alert /<br>Human Review /<br>Block"| Agent
```
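The verdict step in the diagram can be read as a small dispatch: allow the action, raise an alert, queue it for human review, or block it in-flight. A minimal sketch (the names and the `control_plane` / `notify` functions are illustrative, not Adrian's API):

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    ALERT = "alert"
    REVIEW = "human_review"
    BLOCK = "block"

def notify(verdict: Verdict) -> None:
    """Stand-in for the alerting channel (Slack, email, dashboard, etc.)."""
    print(f"notified: {verdict.value}")

def control_plane(verdict: Verdict) -> bool:
    """Return True if the agent's action may proceed (illustrative only)."""
    if verdict is Verdict.BLOCK:
        return False              # intervene in-flight: the action never runs
    if verdict in (Verdict.ALERT, Verdict.REVIEW):
        notify(verdict)           # raise an alert / queue for a human reviewer
    return True                   # audit mode: record and let it through

assert control_plane(Verdict.ALLOW)
assert not control_plane(Verdict.BLOCK)
```

This is also where the audit vs block configuration bites: in audit mode everything returns `True` and is only recorded, while block mode actually stops the action.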
|  | At launch | On roadmap |
|---|---|---|
| Frameworks |  |  |
| Alerting |  |  |
Full list: Integrations.
See CONTRIBUTING.md for the full guide. In short: sign the CLA, branch off main, follow the PR template, and use British English / no em-dashes in prose.
See CONTRIBUTORS.md for the list of people who have shaped Adrian, and how to add yourself.
Adrian is released under the Apache 2.0 licence. New source files should carry the SPDX header from LICENSE_HEADER.txt.