A lightweight Node.js proxy server for OpenWebUI that provides secure API access with authentication handling and includes an embeddable chat widget for easy integration into any web application.
- **Automatic Authentication**: Handles JWT token management and refresh
- **Rate Limiting**: Built-in protection (60 requests/minute per IP)
- **Streaming Support**: Both streaming and non-streaming chat responses
- **Embeddable Widget**: Ready-to-use chat widget for any HTML page
- **Proxy Architecture**: Secure proxy to the OpenWebUI API without exposing credentials
```bash
npm install
```
Create a `.env` file in the project root:

```env
OPENWEBUI_URL=http://localhost:3000
OPENWEBUI_USER=your_username
OPENWEBUI_PASS=your_password
PORT=8080
```
**Security Tip:** Create a dedicated OpenWebUI user with limited permissions for the proxy instead of using an admin account. This user only needs access to chat completions, not administrative functions.
```bash
npm start
```

The proxy server will be available at `http://localhost:8080`.
`POST /api/chat` is the proxy endpoint for chat completions; it handles authentication automatically.
Request Body:

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "model": "llama3",
  "stream": false
}
```
Response:

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking..."
      }
    }
  ]
}
```
**Streaming Mode:** Set `"stream": true` in the request body to receive server-sent events.
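When streaming is enabled, the response can be consumed incrementally from the browser. The sketch below assumes the proxy relays OpenAI-style SSE frames (`data: {...}` lines terminated by `data: [DONE]`); verify the exact frame format against your OpenWebUI version:

```javascript
// Minimal SSE consumer for the streaming mode described above.
// Assumes OpenAI-style frames: "data: {json}" lines, ending with "data: [DONE]".
async function streamChatMessage(message, onToken) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: message }],
      model: 'llama3',
      stream: true
    })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Frames are newline-separated; keep any partial line for the next chunk.
    const lines = buffer.split('\n');
    buffer = lines.pop();

    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6).trim();
      if (payload === '[DONE]') return;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) onToken(delta);
    }
  }
}
```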
Copy the widget files to your project and include them:
```html
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="path/to/style.css" />
  </head>
  <body>
    <div id="chat-output"></div>
    <form id="chat-form">
      <input id="chat-input" type="text" placeholder="Ask me anything..." required />
      <button type="submit">Send</button>
    </form>
    <script src="path/to/widget.js"></script>
  </body>
</html>
```
Use the included static files served by the proxy:
```html
<iframe
  src="http://localhost:8080/static/widget.html"
  width="100%"
  height="400"
  frameborder="0">
</iframe>
```
Integrate the chat functionality directly into your application:
```javascript
async function sendChatMessage(message) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: message }],
      model: 'llama3',
      stream: false
    })
  });
  // Surface HTTP-level failures (auth errors, rate limits) instead of
  // crashing on an unexpected response body.
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.choices[0].message.content;
}
```
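Wired up to the widget markup shown earlier (reusing its `chat-form`, `chat-input`, and `chat-output` IDs), a minimal usage example looks like:

```javascript
// Send the input value on submit and render the assistant's reply.
document.querySelector('#chat-form').addEventListener('submit', async (event) => {
  event.preventDefault();
  const input = document.querySelector('#chat-input');
  const reply = await sendChatMessage(input.value);
  document.querySelector('#chat-output').textContent = reply;
  input.value = '';
});
```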
| Variable | Description | Default |
|----------|-------------|---------|
| `OPENWEBUI_URL` | URL of your OpenWebUI instance | Required |
| `OPENWEBUI_USER` | Username for OpenWebUI authentication | Required |
| `OPENWEBUI_PASS` | Password for OpenWebUI authentication | Required |
| `PORT` | Port for the proxy server | `8080` |
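These variables are read at startup. In Node.js this is conventionally done with the `dotenv` package; whether `index.js` uses `dotenv` specifically is an assumption here:

```javascript
require('dotenv').config(); // load .env into process.env

// Fail fast if a required variable is missing.
for (const key of ['OPENWEBUI_URL', 'OPENWEBUI_USER', 'OPENWEBUI_PASS']) {
  if (!process.env[key]) throw new Error(`Missing required env var: ${key}`);
}

const port = process.env.PORT || 8080; // defaults to 8080
```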
The proxy includes built-in rate limiting:
- Limit: 60 requests per minute per IP address
- Endpoint: Applied to `/api/chat` only
- Response: HTTP 429 when the limit is exceeded
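As a rough sketch, this kind of limit is commonly expressed with the `express-rate-limit` middleware. Whether `index.js` actually uses that package, and with which options, is an assumption; treat this as an illustration of the documented behavior:

```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// 60 requests per minute per IP; requests over the limit receive HTTP 429.
const chatLimiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 60              // max requests per IP per window
});

app.use('/api/chat', chatLimiter);
```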
- JWT tokens are automatically refreshed before expiry
- A 30-second buffer is applied before token expiration
- Tokens are assumed to have a 1-hour lifetime
- Authentication failures are logged and returned as errors
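The sketch below illustrates that refresh strategy. The signin route (`/api/v1/auths/signin`) and the response shape are assumptions based on typical OpenWebUI deployments; check them against your instance and `index.js`. Node 18+ is assumed for the global `fetch`:

```javascript
const TOKEN_LIFETIME_MS = 60 * 60 * 1000; // assumed 1-hour token lifetime
const REFRESH_BUFFER_MS = 30 * 1000;      // refresh 30 seconds before expiry

let cached = { token: null, expiresAt: 0 };

async function getToken() {
  if (cached.token && Date.now() < cached.expiresAt - REFRESH_BUFFER_MS) {
    return cached.token; // still fresh, reuse it
  }
  // Assumed OpenWebUI signin endpoint and payload -- verify for your instance.
  const res = await fetch(`${process.env.OPENWEBUI_URL}/api/v1/auths/signin`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      email: process.env.OPENWEBUI_USER,
      password: process.env.OPENWEBUI_PASS
    })
  });
  if (!res.ok) throw new Error(`Authentication failed: HTTP ${res.status}`);
  const { token } = await res.json(); // assumed response shape: { token: "..." }
  cached = { token, expiresAt: Date.now() + TOKEN_LIFETIME_MS };
  return token;
}
```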
```
openwebui-proxy/
├── index.js          # Main proxy server
├── package.json      # Dependencies and scripts
├── static/
│   ├── widget.html   # Example chat widget page
│   ├── widget.js     # Chat widget JavaScript
│   └── style.css     # Widget styling (optional)
└── README.md         # This file
```
Modify the `model` parameter in your requests:
```javascript
body: JSON.stringify({
  messages: [{ role: 'user', content: userInput }],
  model: 'gpt-4', // or any model available in your OpenWebUI
  stream: false
})
```
The proxy forwards standard OpenAI-compatible headers. To add custom headers, modify the `headers` object in `index.js`.
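For illustration, the forwarded headers might be extended like this; the variable names are hypothetical, not taken from `index.js`:

```javascript
// Hypothetical shape of the forwarded headers in index.js.
const headers = {
  'Content-Type': 'application/json',
  'Authorization': `Bearer ${token}`, // token from the JWT refresh logic
  'X-Custom-Header': 'my-value'       // add custom headers here
};
```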
The proxy includes comprehensive error handling:
- Authentication failures
- Network errors
- Invalid request formats
- Rate limit exceeded
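From the client's point of view, these cases surface as non-2xx responses. A hedged sketch of telling them apart (the error body format is not guaranteed):

```javascript
// Distinguish proxy error classes on the client side.
async function postChat(body) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body)
  });
  if (response.status === 429) {
    throw new Error('Rate limit exceeded; retry after a short delay.');
  }
  if (!response.ok) {
    // Error body shape may vary; fall back to raw text.
    throw new Error(`Proxy error ${response.status}: ${await response.text()}`);
  }
  return response.json();
}
```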
- Keep your `.env` file secure and never commit it to version control
- Run the proxy server behind a reverse proxy (nginx, Apache) in production
- Consider implementing additional authentication for the proxy endpoints
- Rate limiting helps prevent abuse but can be adjusted based on your needs
- **Authentication Failed**
  - Verify `OPENWEBUI_URL`, `OPENWEBUI_USER`, and `OPENWEBUI_PASS` in `.env`
  - Ensure OpenWebUI is accessible from the proxy server
- **CORS Errors**
  - The proxy handles CORS automatically for the `/api/chat` endpoint
  - For custom implementations, ensure proper CORS headers
- **Rate Limiting**
  - The default limit is 60 requests/minute per IP
  - Adjust in `index.js` if needed for your use case
This project is provided as-is for educational and development purposes.