Open LLM Auth is a local gateway that exposes:
- an OpenAI-compatible chat surface,
- a universal task surface for agent/runtime backends,
- centralized auth/profile resolution,
- scoped access control,
- outbound egress enforcement,
- durable task ownership and idempotency state.
The active package is `src/open_llm_auth`. Duplicate trees such as `src/src/open_llm_auth` and packaged artifacts under `pkg/` are not the live source of truth for current development.
Live entrypoints:
- `src/open_llm_auth/main.py`: FastAPI app, static UI mount, root/chat/config pages
- `src/open_llm_auth/server/routes.py`: `/v1/*` API routes
- `src/open_llm_auth/server/config_routes.py`: `/config/*` admin/config routes
- `src/open_llm_auth/cli.py`: Typer CLI
Core subsystems:
- `auth/manager.py`: provider resolution, profile/env/config credential lookup, fallback ordering, runtime egress validation
- `config.py`: persisted config model at `~/.open_llm_auth/config.json`
- `provider_catalog.py`: builtin provider/model catalog plus provider aliases and env-var lookup rules
- `server/auth.py`: bearer-token verification and scope enforcement
- `server/task_contract.py`: Agent Bridge task-contract compatibility checks
- `server/idempotency.py` and `server/durable_state.py`: in-memory and SQLite-backed idempotency/ownership primitives
- `providers/agent_bridge.py`: bridge to Agent Bridge chat and task lifecycle endpoints
The repository already uses uv and a local venv in normal development.
```
cd /mnt/xtra/open_llm_auth
uv sync
```

Alternative editable install:

```
pip install -e .
```

CLI:

```
open-llm-auth serve --host 127.0.0.1 --port 8080
```

Repo-local venv:

```
.venv/bin/python -m open_llm_auth.cli serve --host 127.0.0.1 --port 8080
```

Server surfaces:
- `GET /health`
- `GET /`
- `GET /chat`
- `GET /config`
- `GET /docs`
- static assets under `/static`
The gateway no longer assumes a single global server token.
Current auth behavior from src/open_llm_auth/server/auth.py:
- configured access tokens live in `authorization.tokens`
- each token can carry scopes such as `read`, `write`, `admin`
- `admin=true` implies all three scopes
- legacy admin compatibility can still use `serverToken` or `OPEN_LLM_AUTH_TOKEN`
- `OPEN_LLM_AUTH_ALLOW_ANON=1` enables anonymous admin access only when no configured or legacy token exists
- config routes require admin scope
- task/chat routes generally require write scope
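The scope model above can be sketched as a pure function. This is an illustrative toy, not the actual `server/auth.py` implementation; the class and function names are assumptions.

```python
# Toy sketch of the token-scope model described above; names are
# hypothetical, not the real server/auth.py API.
from dataclasses import dataclass, field

ALL_SCOPES = {"read", "write", "admin"}

@dataclass
class AccessToken:
    token: str
    scopes: set = field(default_factory=set)
    admin: bool = False  # admin=true implies all three scopes

    def effective_scopes(self) -> set:
        return ALL_SCOPES if self.admin else set(self.scopes)

def authorize(token: AccessToken, required: str) -> bool:
    """Return True if the token's effective scopes cover the route's requirement."""
    return required in token.effective_scopes()
```

A read-only token passes `authorize(t, "read")` but fails `authorize(t, "write")`, while an `admin=True` token passes all three checks.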
Config file location:
`~/.open_llm_auth/config.json`
Important top-level sections:
- `authProfiles` and `authOrder`
- compatibility mirrors: `auth.profiles` and `auth.order`
- `providers`
- `models.mode` and `models.providers`
- `authorization.tokens`
- `durableState`
- `egressPolicy`
- `taskContract`
- `defaultModel`
- `serverToken`
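For orientation, the top-level shape of the config file might look like the skeleton below. The section names come from the list above; all values are illustrative placeholders, not a documented schema.

```python
# Hypothetical skeleton of ~/.open_llm_auth/config.json; section names
# match the list above, values are illustrative placeholders only.
import json

config = {
    "authProfiles": {},                 # named credential profiles
    "authOrder": {},                    # per-provider fallback ordering
    "providers": {},                    # local provider overrides
    "models": {"mode": "auto", "providers": {}},
    "authorization": {"tokens": []},    # scoped access tokens
    "durableState": {},                 # SQLite-backed task/idempotency state
    "egressPolicy": {},                 # outbound base-URL restrictions
    "taskContract": {},                 # Agent Bridge contract settings
    "defaultModel": None,
    "serverToken": None,                # legacy single admin token
}

# The whole structure round-trips through JSON.
serialized = json.dumps(config, indent=2)
```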
Current config behavior:
- secret-bearing fields are redacted in config API responses
- outbound provider base URLs are validated against egress policy both at config-write time and runtime resolution time
- durable task/idempotency state defaults to a SQLite file under `~/.open_llm_auth/runtime_state.sqlite3`
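A minimal version of a SQLite-backed idempotency store looks like the sketch below. It is in the spirit of `server/durable_state.py`; the schema and function names are assumptions, not the module's actual interface.

```python
# Sketch of a SQLite-backed idempotency store; schema and names are
# assumptions, not the actual server/durable_state.py interface.
import sqlite3

def open_store(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS idempotency ("
        " key TEXT PRIMARY KEY, response TEXT)"
    )
    return db

def run_once(db, key, compute):
    """Return the stored response for key, invoking compute() only once."""
    row = db.execute(
        "SELECT response FROM idempotency WHERE key = ?", (key,)
    ).fetchone()
    if row is not None:
        return row[0]  # replay the recorded response
    result = compute()
    db.execute("INSERT INTO idempotency (key, response) VALUES (?, ?)", (key, result))
    db.commit()
    return result
```

Replaying the same idempotency key returns the recorded response instead of re-running the mutation, which is the property the task routes rely on.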
Top-level commands:
- `serve`
- `chat`
- `auth`
- `models`
Auth subcommands:
- `auth configure`
- `auth add-api-key`
- `auth add-token`
- `auth add-oauth`
- `auth login-openai-codex`
- `auth login-github-copilot`
- `auth setup-token`
- `auth set-order`
- `auth set-server-token`
- `auth list`
Model subcommands:
- `models set-default`
- `models list`
The CLI supports more than API-key management. It includes OAuth/device flows, provider fallback ordering, a default-model setter, and a small terminal chat mode.
- `POST /v1/chat/completions`
- `GET /v1/models`
- `POST /v1/universal`
- `POST /v1/universal/tasks`
- `GET /v1/universal/tasks`
- `GET /v1/universal/tasks/{task_id}`
- `POST /v1/universal/tasks/{task_id}/approve`
- `POST /v1/universal/tasks/{task_id}/retry`
- `POST /v1/universal/tasks/{task_id}/cancel`
- `GET /v1/universal/tasks/{task_id}/events`
- `POST /v1/universal/tasks/{task_id}/wait`
- `GET /v1/universal/contract/status`
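The approve/retry/cancel routes imply a task lifecycle. The toy state machine below illustrates one plausible set of transitions; the state names and rules are assumptions for illustration, not the gateway's actual lifecycle.

```python
# Toy task lifecycle mirroring the approve/retry/cancel routes above;
# state names and transition rules are illustrative assumptions.
TRANSITIONS = {
    ("pending_approval", "approve"): "running",
    ("pending_approval", "cancel"): "cancelled",
    ("running", "cancel"): "cancelled",
    ("failed", "retry"): "running",
}

def apply_action(state: str, action: str) -> str:
    """Return the next state, or raise if the action is invalid here."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action} a task in state {state!r}")
```

Invalid transitions (for example retrying a cancelled task) raise rather than silently succeed, matching the fail-closed posture described elsewhere in this document.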
- `GET /config`
- `POST /config`
- `GET /config/builtin-providers`
- `GET /config/providers`
- `PUT /config/providers/{provider_id}`
- `DELETE /config/providers/{provider_id}`
- `GET /config/auth-profiles`
- `PUT /config/auth-profiles/{profile_id}`
- `DELETE /config/auth-profiles/{profile_id}`
- `GET /config/config-file-path`
- `GET /config/configured-providers`
- `GET /config/providers/{provider_id}/models`
ProviderManager merges builtin catalog entries with local config and then resolves credentials in this order:
- explicit preferred profile, if supplied,
- configured auth-order list for that provider,
- discovered profiles for that provider,
- provider-specific environment variables,
- provider config `api_key`,
- special auth paths such as the AWS SDK or CLI-backed providers.
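The fallback ordering above can be sketched as a single resolver function. The helper names, profile shape, and env-var convention here are hypothetical; only the ordering itself comes from the source.

```python
# Sketch of the documented credential fallback ordering; data shapes,
# names, and the env-var convention are assumptions.
def resolve_credential(provider, preferred_profile=None, *,
                       profiles, auth_order, env, provider_config):
    """Try each credential source in the documented order; return the first hit."""
    candidates = []
    if preferred_profile:                       # 1. explicit preferred profile
        candidates.append(preferred_profile)
    candidates += auth_order.get(provider, [])  # 2. configured auth order
    candidates += [name for name, p in profiles.items()   # 3. discovered profiles
                   if p.get("provider") == provider]
    for name in candidates:
        profile = profiles.get(name)
        if profile and profile.get("provider") == provider and profile.get("key"):
            return ("profile", name, profile["key"])
    env_var = f"{provider.upper().replace('-', '_')}_API_KEY"
    if env.get(env_var):                        # 4. provider env variables
        return ("env", env_var, env[env_var])
    if provider_config.get(provider, {}).get("api_key"):  # 5. provider config
        return ("config", provider, provider_config[provider]["api_key"])
    return None  # 6. special paths (AWS SDK, CLI-backed) would be tried here
```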
Important resolution behavior:
- provider aliases are normalized, for example
chatgpt -> openai-codex,codex -> openai-codex,bedrock -> amazon-bedrock - bare model IDs are inferred only when there is a unique match or a small heuristic fallback
- local backends such as
ollama,vllm, andamazon-bedrockdo not self-activate just because they exist in the catalog; they still need explicit config or usable credentials agent_bridgeandagentare manager-defined local bridges, not entries in the builtin provider map
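The alias normalization can be pictured as a small lookup table. The mapping below reproduces the documented examples; the helper name is an assumption.

```python
# Alias normalization per the documented examples; the helper name
# is an assumption, the mappings come from the text above.
PROVIDER_ALIASES = {
    "chatgpt": "openai-codex",
    "codex": "openai-codex",
    "bedrock": "amazon-bedrock",
}

def normalize_provider(name: str) -> str:
    """Map a user-supplied provider name to its canonical ID."""
    key = name.strip().lower()
    return PROVIDER_ALIASES.get(key, key)
```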
The Agent Bridge adapter is more than a simple proxy.
Current behavior from src/open_llm_auth/providers/agent_bridge.py:
- base URL defaults to
http://127.0.0.1:20100/v1 - standard chat requests call
POST /chat - task creation/status/retry/approve/cancel/list/events use the Agent Bridge agent lifecycle endpoints
- mutating task operations attach contract headers such as
X-Provider-Contract-Version - streaming task output is synthesized by polling task snapshots and task events and converting them into OpenAI-style SSE chunks
- plain chat requests rebuild a bounded context block from recent transcript turns because Agent Bridge's direct chat API is single-turn
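The last point, rebuilding a bounded context block from recent turns, can be sketched as follows. The rendering format and truncation strategy here are assumptions; the adapter's actual logic may differ.

```python
# Sketch of rebuilding a bounded context block from recent transcript
# turns for a single-turn chat API; format and truncation strategy
# are assumptions, not the adapter's actual behavior.
def build_context_block(turns, max_chars=2000):
    """Render the newest turns that fit within max_chars, oldest first."""
    lines, used = [], 0
    for role, text in reversed(turns):        # walk newest-first
        line = f"{role}: {text}"
        if used + len(line) + 1 > max_chars:  # +1 for the joining newline
            break
        lines.append(line)
        used += len(line) + 1
    return "\n".join(reversed(lines))         # restore chronological order
```

Walking newest-first guarantees the most recent turns survive truncation, which matters more for a single-turn upstream API than keeping the oldest context.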
Implemented hardening that the older README did not describe:
- fail-closed auth when no token is configured
- optional scoped configured tokens instead of one all-powerful shared token
- egress policy with allow-local-provider exceptions and metadata-address denial
- durable task ownership checks for universal task routes
- durable idempotency keys for task mutations
- task-contract validation against Agent Bridge before mutating task routes
- sanitized upstream HTTP errors
- secret redaction in config responses
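The egress-policy item above (metadata-address denial with an allow-local-provider exception) can be sketched with the standard `ipaddress` module. The function name, the policy knob, and the exact rules are assumptions for illustration.

```python
# Sketch of metadata-address denial with an allow-local-provider
# exception; names and exact rules are illustrative assumptions.
import ipaddress
from urllib.parse import urlparse

METADATA_ADDRS = {"169.254.169.254"}  # cloud instance metadata endpoint

def egress_allowed(base_url: str, allow_local_providers: bool = True) -> bool:
    host = urlparse(base_url).hostname or ""
    if host in METADATA_ADDRS:
        return False                   # always deny metadata addresses
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return True                    # plain hostname; assume checked elsewhere
    if addr.is_link_local:
        return False
    if addr.is_loopback or addr.is_private:
        return allow_local_providers   # e.g. a local ollama/vllm backend
    return True
```

The same check runs both when a provider base URL is written to config and again at runtime resolution, so a config edited out-of-band cannot bypass the policy.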
The active test suite lives under `tests/`.
Important coverage areas:
- `tests/test_universal_gateway.py`
- `tests/test_gateway_security_hardening.py`
- `tests/test_provider_manager.py`
- `tests/test_agent_bridge_provider.py`
- `tests/test_bedrock_provider.py`
- `tests/test_anthropic_adapter.py`
- `tests/test_auth_manager_parsing.py`
Typical verification:
```
.venv/bin/pytest -q tests
```