A high-performance HTTP reverse proxy server. Built in Go, this proxy routes API requests to multiple destinations while capturing full request/response data for analysis and debugging.

It was built to capture LLM traces for OpenRouter without having to set up heavyweight enterprise routers like LiteLLM.
The package consists of the following components:

- **Proxy Package** (`server.go`): Routes requests and handles streaming
- **Logging Proxy Server** (`logging-proxy/`): Command-line tool that runs the proxy and logs requests
Edit `config.yaml` to configure the proxy:
```yaml
server:
  port: 5601
  host: "localhost"
  not_found: "/404/"

logging:
  console: true                        # Enable console output for request monitoring
  server_url: "http://localhost:8080"  # Logging server URL
  default: true                        # Default logging behavior for routes and unknown requests

routes:
  openrouter:
    pattern: "/api/v1/"
    destination: "https://openrouter.ai/"
  lmstudio:
    pattern: "/lmstudio/"
    destination: "http://127.0.0.1:1234/"
    logging: false                     # Disable logging for this route
```
- `server.port`: Port for the proxy server (default: `5601`)
- `server.host`: Host interface to bind to (default: `localhost`)
- `logging.console`: Enable/disable console request monitoring
- `logging.server_url`: URL of the logging server
- `logging.default`: Log unknown routes and 404 responses
- `routes`: Map of route configurations with pattern/destination mappings
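As a sketch, the configuration above could map onto Go types like the following. These are illustrative assumptions, not the package's actual types; a real loader would unmarshal `config.yaml` with a YAML library such as `gopkg.in/yaml.v3`.

```go
package main

import "fmt"

// Config mirrors the shape of config.yaml (hypothetical types).
type Config struct {
	Server  ServerConfig           `yaml:"server"`
	Logging LoggingConfig          `yaml:"logging"`
	Routes  map[string]RouteConfig `yaml:"routes"`
}

type ServerConfig struct {
	Port     int    `yaml:"port"`
	Host     string `yaml:"host"`
	NotFound string `yaml:"not_found"`
}

type LoggingConfig struct {
	Console   bool   `yaml:"console"`
	ServerURL string `yaml:"server_url"`
	Default   bool   `yaml:"default"`
}

type RouteConfig struct {
	Pattern     string `yaml:"pattern"`
	Destination string `yaml:"destination"`
	// A pointer distinguishes "unset" from "false"; when nil, the route
	// would fall back to logging.default (an assumed convention).
	Logging *bool `yaml:"logging"`
}

func main() {
	// Build the example configuration in code and print one route mapping.
	cfg := Config{
		Server: ServerConfig{Port: 5601, Host: "localhost"},
		Routes: map[string]RouteConfig{
			"lmstudio": {Pattern: "/lmstudio/", Destination: "http://127.0.0.1:1234/"},
		},
	}
	fmt.Printf("%s:%d %s -> %s\n",
		cfg.Server.Host, cfg.Server.Port,
		cfg.Routes["lmstudio"].Pattern, cfg.Routes["lmstudio"].Destination)
}
```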
Start the logging proxy server:

```shell
go run ./logging-proxy
```

The proxy will start on the configured port (default: `5601`).
Run the tests:

```shell
go test -v .
```
The project includes `test.http` with example requests for manual testing using the VS Code REST Client.
Example test scenarios:

- **Direct LM Studio Request** (for comparison):

  ```http
  POST http://127.0.0.1:1234/v1/chat/completions
  Content-Type: application/json
  Authorization: Bearer sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

  {
    "model": "liquid/lfm2-1.2b",
    "messages": [{"role": "user", "content": "Test message"}]
  }
  ```

- **Proxied Request:**

  ```http
  POST http://127.0.0.1:5601/lmstudio/v1/chat/completions
  Content-Type: application/json
  Authorization: Bearer sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

  {
    "model": "liquid/lfm2-1.2b",
    "messages": [{"role": "user", "content": "Test message"}]
  }
  ```

- **Streaming Request:**

  ```http
  POST http://127.0.0.1:5601/lmstudio/v1/chat/completions
  Content-Type: application/json

  {
    "model": "liquid/lfm2-1.2b",
    "messages": [{"role": "user", "content": "Test message"}],
    "stream": true
  }
  ```
- **Set up LM Studio:**
  - Install and start LM Studio
  - Load a model (e.g., "liquid/lfm2-1.2b")
  - Enable the local server on `http://127.0.0.1:1234`
- **Test the proxy:**
  - Start both the logging server and the proxy
  - Use the provided `test.http` requests
  - Compare direct vs. proxied responses
  - Check the `logs/` directory for captured traffic
1. Client sends a request to the proxy (e.g., `localhost:5601/lmstudio/v1/chat/completions`)
2. Proxy matches the route (`/lmstudio/` → `http://127.0.0.1:1234/`)
3. Path transformation converts `/lmstudio/v1/chat/completions` → `/v1/chat/completions`
4. Duplex streaming forwards the request to the destination while logging (if enabled)
5. Response streaming returns data to the client while logging the response
6. Logging server stores complete HTTP request/response data with metadata
Routes use Go's `http.ServeMux` pattern matching.

Pattern types:

- `/lmstudio/`: matches `/lmstudio/` and all subpaths
- `GET /lmstudio/file.txt`: matches exactly `/lmstudio/file.txt`, no subpaths, only the `GET` method
- `GET example.com/test/{$}`: matches `Host: example.com` with path `/test` or `/test/`, but not `/test/foo`
- `POST example.com/test/`: matches `Host: example.com` and anything under `/test/`
- `"/"`: catch-all that matches everything

Note: Wildcards (except `{$}`) are not supported and will be rejected on startup.
Example:

- Request: `/lmstudio/v1/chat/completions`
- Pattern: `/lmstudio/` → `http://127.0.0.1:1234/`
- Result: `http://127.0.0.1:1234/v1/chat/completions`
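The transformation in this example can be expressed as a small helper. This is a sketch only; `rewriteURL` is hypothetical and not part of the package:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// rewriteURL strips the matched pattern prefix from the request path and
// joins the remainder onto the destination base URL.
func rewriteURL(pattern, destination, reqPath string) (string, error) {
	base, err := url.Parse(destination)
	if err != nil {
		return "", err
	}
	// "/lmstudio/" matches as the prefix "/lmstudio" plus the rest of the path.
	rest := strings.TrimPrefix(reqPath, strings.TrimSuffix(pattern, "/"))
	return base.JoinPath(rest).String(), nil
}

func main() {
	out, err := rewriteURL("/lmstudio/", "http://127.0.0.1:1234/", "/lmstudio/v1/chat/completions")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // http://127.0.0.1:1234/v1/chat/completions
}
```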
In general, more specific patterns win when multiple patterns could match. If you register identical patterns, the proxy will panic on startup.
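Because the matching is standard `http.ServeMux` behaviour, the precedence rules can be checked directly against the standard library:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// newMux registers two of the patterns from the table above; the handlers
// are stubs, since only the matching behaviour is of interest here.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/lmstudio/", func(http.ResponseWriter, *http.Request) {})
	mux.HandleFunc("/", func(http.ResponseWriter, *http.Request) {})
	return mux
}

// matchedPattern reports which registered pattern ServeMux selects for a path.
func matchedPattern(mux *http.ServeMux, path string) string {
	_, pattern := mux.Handler(httptest.NewRequest("GET", path, nil))
	return pattern
}

func main() {
	mux := newMux()
	// The more specific prefix wins over the catch-all.
	fmt.Println(matchedPattern(mux, "/lmstudio/v1/models")) // /lmstudio/
	fmt.Println(matchedPattern(mux, "/other"))              // /
}
```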
Captured logs include:

- Binary files: complete HTTP request/response data
- Metadata JSON: request ID, timestamps, headers, processing time
- `X-Proxy-Path` header: original proxy URL for replay capability

Log files are named `{timestamp}_{requestID}_{request|response}.bin`.
When `logging.console` is enabled, you'll see real-time request monitoring:

```
2025-09-13 02:11:09 [092d0424] POST /lmstudio/v1/chat/completions -> http://127.0.0.1:1234/v1/chat/completions [log]
```
- Metadata endpoint for querying logged requests
- Web-based logging UI with live request feed
- WebSocket support for real-time monitoring
- Custom Transport implementation for simplified logging