OpenWebUI Proxy

A lightweight Node.js proxy server for OpenWebUI. It provides secure API access by handling authentication on the server side and includes an embeddable chat widget for easy integration into any web application.

Features

  • πŸ” Automatic Authentication: Handles JWT token management and refresh
  • 🚦 Rate Limiting: Built-in protection (60 requests/minute per IP)
  • 🌊 Streaming Support: Handles both streaming and non-streaming chat responses
  • 🎨 Embeddable Widget: Ready-to-use chat widget for any HTML page
  • πŸ“‘ Proxy Architecture: Proxies requests to the OpenWebUI API without exposing credentials to the client

Quick Start

1. Installation

npm install

2. Environment Configuration

Create a .env file in the project root:

OPENWEBUI_URL=http://localhost:3000
OPENWEBUI_USER=your_username
OPENWEBUI_PASS=your_password
PORT=8080

Security Tip: Create a dedicated OpenWebUI user with limited permissions for the proxy instead of using an admin account. This user only needs access to chat completions, not administrative functions.

3. Run the Server

npm start

The proxy server will be available at http://localhost:8080
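
Once it is running, you can verify the setup end to end with a quick curl test (assuming the default port and a model named llama3, as in the examples below):

curl -X POST http://localhost:8080/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello"}],"model":"llama3","stream":false}'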

API Endpoints

POST /api/chat

Proxy endpoint for chat completions that handles authentication automatically.

Request Body:

{
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "model": "llama3",
  "stream": false
}

Response:

{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "Hello! I'm doing well, thank you for asking..."
      }
    }
  ]
}

Streaming Mode: Set "stream": true in the request body to receive server-sent events.
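
A minimal browser-side consumer of that stream might look like the sketch below (assuming the OpenAI-style chunk format, where each event is a "data: {...}" line and the stream ends with "data: [DONE]"):

async function streamChatMessage(message, onToken) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: message }],
      model: 'llama3',
      stream: true
    })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Process complete lines; keep any partial line for the next chunk
    const lines = buffer.split('\n');
    buffer = lines.pop();
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6).trim();
      if (payload === '[DONE]') return;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) onToken(delta); // hand each new token to the caller
    }
  }
}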

Embedding the Chat Widget

Option 1: Direct Integration

Copy the widget files to your project and include them:

<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="path/to/style.css" />
</head>
<body>
  <div id="chat-output"></div>
  <form id="chat-form">
    <input id="chat-input" type="text" placeholder="Ask me anything..." required />
    <button type="submit">Send</button>
  </form>
  
  <script src="path/to/widget.js"></script>
</body>
</html>

Option 2: Hosted Widget

Use the included static files served by the proxy:

<iframe 
  src="http://localhost:8080/static/widget.html" 
  width="100%" 
  height="400"
  frameborder="0">
</iframe>

Option 3: Custom Integration

Integrate the chat functionality directly into your application:

async function sendChatMessage(message) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: message }],
      model: 'llama3',
      stream: false
    })
  });
  
  const data = await response.json();
  return data.choices[0].message.content;
}
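
For example, logging the reply for a fixed prompt:

sendChatMessage('What can you help me with?')
  .then(reply => console.log(reply))
  .catch(err => console.error('Chat request failed:', err));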

Configuration

Environment Variables

Variable         Description                             Default
OPENWEBUI_URL    URL of your OpenWebUI instance          Required
OPENWEBUI_USER   Username for OpenWebUI authentication   Required
OPENWEBUI_PASS   Password for OpenWebUI authentication   Required
PORT             Port for the proxy server               8080

Rate Limiting

The proxy includes built-in rate limiting; a configuration sketch follows the list:

  • Limit: 60 requests per minute per IP address
  • Endpoint: Applied to /api/chat only
  • Response: HTTP 429 when limit exceeded
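
The exact mechanism lives in index.js; for reference, an equivalent policy using the express-rate-limit package (an assumption for illustration, not necessarily what the repository uses) would look like:

const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// 60 requests per rolling minute per client IP, applied to /api/chat only
app.use('/api/chat', rateLimit({
  windowMs: 60 * 1000,     // 1-minute window
  max: 60,                 // requests allowed per window
  standardHeaders: true,   // send RateLimit-* response headers
  message: { error: 'Rate limit exceeded, try again later.' }
}));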

Token Management

  • JWT tokens are refreshed automatically before they expire (see the sketch below)
  • A 30-second buffer is applied before token expiration
  • Tokens are assumed to have a 1-hour lifetime
  • Authentication failures are logged and returned as errors
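
A rough sketch of this refresh logic (assuming OpenWebUI's /api/v1/auths/signin endpoint; the actual implementation in index.js may differ):

let cachedToken = null;
let tokenExpiresAt = 0; // epoch milliseconds

async function getToken() {
  const now = Date.now();
  // Refresh inside the 30-second buffer before the assumed 1-hour expiry
  if (!cachedToken || now >= tokenExpiresAt - 30 * 1000) {
    const res = await fetch(`${process.env.OPENWEBUI_URL}/api/v1/auths/signin`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        email: process.env.OPENWEBUI_USER, // OpenWebUI signs in by account email
        password: process.env.OPENWEBUI_PASS
      })
    });
    if (!res.ok) throw new Error(`Authentication failed: ${res.status}`);
    cachedToken = (await res.json()).token;
    tokenExpiresAt = now + 60 * 60 * 1000; // assumed 1-hour token lifetime
  }
  return cachedToken;
}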

Project Structure

openwebui-proxy/
β”œβ”€β”€ index.js           # Main proxy server
β”œβ”€β”€ package.json       # Dependencies and scripts
β”œβ”€β”€ static/
β”‚   β”œβ”€β”€ widget.html    # Example chat widget page
β”‚   β”œβ”€β”€ widget.js      # Chat widget JavaScript
β”‚   └── style.css      # Widget styling (optional)
└── README.md          # This file

Development

Adding Custom Models

Modify the model parameter in your requests:

body: JSON.stringify({
  messages: [{ role: 'user', content: userInput }],
  model: 'gpt-4', // or any model available in your OpenWebUI
  stream: false
})
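
To see which model IDs your instance actually exposes, you can query OpenWebUI directly (assuming OpenWebUI's OpenAI-compatible /api/models endpoint and an API key from your account settings):

curl -H "Authorization: Bearer YOUR_OPENWEBUI_API_KEY" http://localhost:3000/api/models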

Custom Headers

The proxy forwards standard OpenAI-compatible headers. To add custom headers, modify the headers object in index.js.
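
For instance, a hypothetical helper that builds the upstream headers (the names here are illustrative; adapt them to the actual structure of index.js):

function buildUpstreamHeaders(token) {
  return {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${token}`,  // token from the proxy's auth handling
    'X-Proxy-Client': 'openwebui-proxy'  // example custom header
  };
}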

Error Handling

The proxy handles and reports the following error conditions (a client-side handling sketch follows the list):

  • Authentication failures
  • Network errors
  • Invalid request formats
  • Rate limit exceeded
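
On the client side these surface as non-2xx HTTP responses; a defensive variant of the earlier sendChatMessage (a sketch assuming standard status codes) might look like:

async function sendChatMessageSafe(message) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: message }],
      model: 'llama3',
      stream: false
    })
  });

  if (response.status === 429) {
    // Rate limit exceeded: back off before retrying
    throw new Error('Rate limit exceeded; wait a minute and retry.');
  }
  if (!response.ok) {
    // Covers authentication failures, upstream network errors,
    // and invalid request formats reported by the proxy
    throw new Error(`Proxy error: ${response.status} ${response.statusText}`);
  }

  const data = await response.json();
  return data.choices[0].message.content;
}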

Security Considerations

  • Keep your .env file secure and never commit it to version control
  • Run the proxy behind a reverse proxy (nginx, Apache) in production; see the sketch after this list
  • Consider adding additional authentication in front of the proxy endpoints
  • Rate limiting helps prevent abuse; adjust the limits to match your needs
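
As an illustration, a minimal nginx server block with TLS termination (hypothetical domain and certificate paths; adapt to your environment):

server {
  listen 443 ssl;
  server_name chat.example.com;                   # hypothetical domain

  ssl_certificate     /etc/ssl/certs/chat.pem;    # hypothetical paths
  ssl_certificate_key /etc/ssl/private/chat.key;

  location / {
    proxy_pass http://127.0.0.1:8080;             # the proxy's default port
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}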

Troubleshooting

Common Issues

  1. Authentication Failed

    • Verify OPENWEBUI_URL, OPENWEBUI_USER, and OPENWEBUI_PASS in .env
    • Ensure OpenWebUI is accessible from the proxy server
  2. CORS Errors

    • The proxy handles CORS automatically for the /api/chat endpoint
    • For custom implementations, ensure proper CORS headers
  3. Rate Limiting

    • Default limit is 60 requests/minute per IP
    • Adjust in index.js if needed for your use case
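
For the first two issues, a quick reachability check against OpenWebUI from the proxy host can help (assuming OpenWebUI's /health endpoint; adjust the URL to your instance):

curl http://localhost:3000/health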

License

This project is provided as-is for educational and development purposes.
