A simple proxy server for inspecting LLM traffic: it forwards requests to the target API and logs the raw requests and responses.
```bash
npm install
node main.js
```

```
Client
  │
  │ HTTP Request
  ▼
LLM Proxy (this server)
  │
  │ Forwarded Request
  ▼
Target LLM API (OpenAI / Anthropic / etc.)
  │
  │ HTTP Response
  ▼
LLM Proxy
  │
  ├── Saves request & response to logs/
  │
  └── Returns response to client
```
Always include the target LLM base URL as a `url` query parameter:

```
http://localhost:3000/<original-path>?url=<TARGET_LLM_BASE_URL>
```
For example, instead of calling

```
POST https://api.openai.com/v1/chat/completions
```

send the request to

```
POST http://localhost:3000/v1/chat/completions?url=https://api.openai.com
```

```bash
curl "http://localhost:3000/v1/chat/completions?url=https://api.openai.com" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1",
    "messages": [
      {"role": "user", "content": "Explain transformers"}
    ]
  }'
```

✔ The response is returned exactly as OpenAI sends it
✔ A full request/response log is saved to `logs/`
The same pattern works for Anthropic:

```
POST http://localhost:3000/v1/messages?url=https://api.anthropic.com
```

All headers (API keys, version headers, etc.) are passed through unchanged.
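Header pass-through might look like the sketch below. This is an illustration, not the actual `main.js` code, and it makes one assumption beyond the text: the `Host` header is rewritten (as most proxies must) so the target sees its own hostname, while everything else, including API keys and version headers, is copied verbatim.

```javascript
// Sketch: copy all incoming headers except ones the forwarding HTTP
// client must own (assumption: `host` and `content-length` are rewritten).
function forwardHeaders(incoming) {
  const out = { ...incoming };
  delete out.host;               // target sets its own Host
  delete out["content-length"];  // recomputed for the forwarded body
  return out;
}
```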
Each log file contains:

```json
{
  "timestamp": "2026-01-22T12:30:11.123Z",
  "request": {
    "method": "POST",
    "url": "https://api.openai.com/v1/chat/completions",
    "headers": { ... },
    "body": { ... }
  },
  "response": {
    "status": 200,
    "headers": { ... },
    "body": { ... }
  }
}
```

Bodies are stored according to their content type:

| Content Type | Stored As |
|---|---|
| `application/json` | Parsed JSON |
| `text/*` | UTF-8 string |
| Binary | Base64 encoded |
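The table above could be implemented roughly as follows (a hypothetical helper for illustration; the name and dispatch order are assumptions, not taken from `main.js`):

```javascript
// Sketch: choose a log representation for a body based on its Content-Type.
function encodeBodyForLog(contentType, buf) {
  const type = (contentType || "").toLowerCase();
  if (type.includes("application/json")) {
    return JSON.parse(buf.toString("utf8")); // parsed JSON object
  }
  if (type.startsWith("text/")) {
    return buf.toString("utf8");             // plain UTF-8 string
  }
  return buf.toString("base64");             // binary → Base64 string
}
```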