Private AI in one command. Strip PII from every LLM prompt, locally.
```bash
docker run -p 8080:8080 ghcr.io/svenplb/aegis
export OPENAI_BASE_URL=http://localhost:8080/v1
```

That's it. Your prompts are now private.
Aegis sits between your app and any OpenAI-compatible API. It strips personal data from prompts before they reach the LLM, and restores it in responses. Everything runs locally. Your data never leaves your machine.
```
Your App ──→ Aegis Proxy ──→ LLM API
                 │
                 ├── strips PII from prompts
                 └── restores PII in responses
```
Before (what your app sends):

```
Send the contract to alice@example.com.
Her SSN is 123-45-6789 and she lives
at Hauptstraße 42, 10115 Berlin.
```

After (what Aegis sends instead):

```
Send the contract to [EMAIL_1].
Her SSN is [SSN_1] and she lives
at [ADDRESS_1].
```
The LLM never sees the real data. Aegis restores it in the response before returning to your app.
Works with:

- OpenAI API
- Ollama
- LM Studio
- llama.cpp
- Any OpenAI-compatible API
All via environment variables. No config files.
| Variable | Default | Description |
|---|---|---|
| `AEGIS_UPSTREAM_URL` | `https://api.openai.com` | LLM API to proxy to |
| `AEGIS_PORT` | `8080` | Listen port |
| `AEGIS_LOG_LEVEL` | `info` | debug / info / warn / error |
| `AEGIS_CACHE_TTL` | `1h` | Conversation mapping TTL |
| `AEGIS_CACHE_SIZE` | `1000` | Max cached conversations |
| `AEGIS_UPSTREAM_TIMEOUT` | `60s` | Upstream request timeout |
| `AEGIS_MAX_BODY_SIZE` | `10485760` | Max request body (bytes) |
| `AEGIS_SCANNER_URL` | (disabled) | Ollama URL for LLM-based scanning |
| `AEGIS_SCANNER_MODEL` | `gemma3:4b` | LLM model for sensitivity detection |
| `AEGIS_SCANNER_TIMEOUT` | `30s` | LLM scanner request timeout |
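As a rough illustration of the fallback behavior the table describes, here is a minimal Python sketch of loading these variables with their documented defaults. This is illustrative only; the actual implementation is in Go and may differ.

```python
import os

# Defaults taken from the configuration table above.
DEFAULTS = {
    "AEGIS_UPSTREAM_URL": "https://api.openai.com",
    "AEGIS_PORT": "8080",
    "AEGIS_LOG_LEVEL": "info",
    "AEGIS_CACHE_TTL": "1h",
    "AEGIS_CACHE_SIZE": "1000",
    "AEGIS_UPSTREAM_TIMEOUT": "60s",
    "AEGIS_MAX_BODY_SIZE": "10485760",
}

def load_config(env=os.environ):
    # Every setting falls back to its documented default when unset.
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}

cfg = load_config(env={"AEGIS_PORT": "9090"})
print(cfg["AEGIS_PORT"])       # 9090 (overridden)
print(cfg["AEGIS_LOG_LEVEL"])  # info (default)
```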
```bash
docker run -p 8080:8080 -e AEGIS_UPSTREAM_URL=http://host.docker.internal:11434 ghcr.io/svenplb/aegis
```

For deeper detection, point Aegis at a local LLM via Ollama. The LLM catches contextual sensitivity that regex can't: business secrets, emotional content, relationship context, medical implications.
```bash
AEGIS_SCANNER_URL=http://localhost:11434 AEGIS_UPSTREAM_URL=https://api.openai.com aegis
```

The regex scanner runs first (<5ms), then the LLM scanner fills in the gaps (~6-16s depending on model). Best-effort: if the LLM is slow or down, requests proceed with regex-only results.
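That regex-first, best-effort layering can be sketched in Python (for illustration only; the stand-in functions are hypothetical, not Aegis's actual Go code):

```python
import concurrent.futures
import re

def regex_scan(text):
    # Stand-in for the fast regex pass (always runs).
    return re.findall(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", text)

def llm_scan(text):
    # Stand-in for the slow Ollama-backed pass; may hang or fail.
    raise TimeoutError("scanner model unavailable")

def scan(text, llm_timeout=0.1):
    findings = regex_scan(text)  # regex results are always kept
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(llm_scan, text)
        try:
            findings += future.result(timeout=llm_timeout)
        except Exception:
            pass  # best-effort: proceed with regex-only findings
    return findings

print(scan("Mail alice@example.com please"))  # ['alice@example.com']
```

The key property is that a scanner failure degrades detection quality but never blocks or delays the request beyond the configured timeout.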
16 PII types with format-specific validation:
| Type | Examples | Validation |
|---|---|---|
| EMAIL | alice@example.com | RFC 5322 |
| PHONE | +49 170 1234567 | International + local |
| SSN | 123-45-6789 | 20+ national formats |
| IBAN | DE89 3704 0044 0532 0130 00 | MOD-97 checksum |
| CREDIT_CARD | 4111-1111-1111-1111 | Luhn algorithm |
| IP_ADDRESS | 192.168.1.1 | IPv4 + IPv6 |
| URL | https://example.com | Pattern match |
| DATE | 15.03.1990 | Multiple formats |
| ADDRESS | Hauptstraße 42, Berlin | Context-validated |
| FINANCIAL | EUR 1.500,00 | 15+ currency formats |
| And more... | SECRET, MEDICAL, AGE, ID_NUMBER, MAC_ADDRESS, GPS | |
Strong EU coverage: Germany, Austria, Switzerland, France, Italy, Spain, Netherlands, Belgium, Poland, Portugal, Sweden, Denmark, Finland, Norway, Czech Republic, Slovakia, Ireland, UK.
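The two checksum validators named in the table are standard algorithms. A Python sketch of both (illustrative; Aegis's own validators are written in Go):

```python
def luhn_ok(number: str) -> bool:
    """Luhn checksum, used to validate CREDIT_CARD candidates."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def iban_ok(iban: str) -> bool:
    """MOD-97 check (ISO 13616), used to validate IBAN candidates."""
    s = iban.replace(" ", "").upper()
    s = s[4:] + s[:4]  # move country code + check digits to the end
    digits = "".join(str(int(c, 36)) for c in s)  # A=10 ... Z=35
    return int(digits) % 97 == 1

print(luhn_ok("4111-1111-1111-1111"))          # True
print(iban_ok("DE89 3704 0044 0532 0130 00"))  # True
```

Checksums are what keep false positives down: a 16-digit string that fails Luhn is left alone instead of being tokenized as a card number.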
- Your app sends a request to Aegis (same format as OpenAI API)
- Aegis scans every message for PII using regex patterns with format-specific validators
- PII is replaced with deterministic tokens (`[EMAIL_1]`, `[PERSON_1]`, etc.)
- The clean request is forwarded to the actual LLM
- The LLM response comes back with tokens
- Aegis restores the original PII in the response
- Your app gets the response as if it talked to the LLM directly
System prompts are never scanned (they contain your instructions, not user data).
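The scrub/restore round trip above can be sketched in Python. This toy version handles only emails (the real proxy covers 16 types) and its function names are illustrative, not Aegis's API:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def scrub(text, mapping):
    """Replace each email with a deterministic token, remembering the original."""
    def repl(m):
        value = m.group(0)
        for token, original in mapping.items():
            if original == value:
                return token  # same value -> same token, every time
        token = f"[EMAIL_{len(mapping) + 1}]"
        mapping[token] = value
        return token
    return EMAIL_RE.sub(repl, text)

def restore(text, mapping):
    """Put the original values back into the LLM's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

mapping = {}
clean = scrub("Send the contract to alice@example.com.", mapping)
print(clean)                                   # Send the contract to [EMAIL_1].
print(restore("Sent to [EMAIL_1].", mapping))  # Sent to alice@example.com.
```

Determinism matters for multi-turn conversations: the same email always maps to the same token, so the LLM can refer back to `[EMAIL_1]` consistently across turns.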
- Zero data exfiltration. The proxy makes no outbound calls except to your configured upstream.
- No logging of PII. Even at debug level, message content is never logged.
- No telemetry. No analytics, no tracking, no phone-home.
- No disk writes. All state is in-memory. Restart = clean slate.
- Fail closed. If PII scanning fails, the request is blocked. Unscanned data is never forwarded.
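The fail-closed guarantee can be sketched as follows (names hypothetical, for illustration only):

```python
def forward_upstream(clean_body):
    # Stand-in for the proxied call to the configured upstream.
    return {"status": 200, "body": clean_body}

def handle_request(body, scan):
    """Fail closed: unscanned data is never forwarded."""
    try:
        clean = scan(body)
    except Exception:
        # Scanner broke -> block the request instead of leaking raw data.
        return {"status": 502, "error": "PII scan failed"}
    return forward_upstream(clean)

def broken_scan(_):
    raise RuntimeError("scanner crashed")

print(handle_request("hi", broken_scan)["status"])  # 502
print(handle_request("hi", lambda b: b)["status"])  # 200
```

Note the asymmetry with the best-effort LLM scanner: the optional LLM pass may be skipped, but if the mandatory regex scan itself fails, the request is rejected outright.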
```bash
go install github.com/svenplb/liebotsch/cmd/aegis@latest
```

Or:

```bash
git clone https://github.com/svenplb/liebotsch.git
cd liebotsch
go build -o aegis ./cmd/aegis/
./aegis
```

MIT