@@ -1,5 +1,5 @@
 ---
-sidebar_position: 5
+sidebar_position: 6
 title: "Getting Started with Functions"
 ---

@@ -1,6 +1,6 @@
 ---
 
-sidebar_position: 4
+sidebar_position: 5
 title: "Starting with OpenAI-Compatible Servers"
 
 ---

38 changes: 38 additions & 0 deletions docs/getting-started/quick-start/starting-with-vllm.mdx
@@ -0,0 +1,38 @@
---
sidebar_position: 4
title: "Starting With vLLM"
---

## Overview

vLLM provides an OpenAI-compatible API, which makes it easy to connect to Open WebUI. This guide shows you how to add your vLLM server as a connection.

---

## Step 1: Set Up Your vLLM Server

Make sure your vLLM server is running and reachable from wherever Open WebUI runs. By default, the API base URL is:

```
http://localhost:8000/v1
```

For remote servers, replace `localhost` with the appropriate hostname or IP address.
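
If you have not launched it yet, vLLM's OpenAI-compatible server is normally started with `vllm serve <model>` (older releases use `python -m vllm.entrypoints.openai.api_server --model <model>`); check the vLLM documentation for the options that apply to your version and hardware. Once it is running, a quick way to confirm the endpoint is reachable is to request the model list, as in this minimal sketch using only the Python standard library:

```python
# Confirm the vLLM OpenAI-compatible endpoint is reachable.
# Standard library only; adjust the URL for remote servers, and note that
# "localhost" may not resolve to the host machine if Open WebUI runs in Docker.
import urllib.request

url = "http://localhost:8000/v1/models"
with urllib.request.urlopen(url) as response:
    print(response.status)            # expect 200
    print(response.read().decode())   # JSON listing of the served models
```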

---

## Step 2: Add the API Connection in Open WebUI

1. Go to ⚙️ **Admin Settings**.
2. Navigate to **Connections > OpenAI > Manage** (look for the wrench icon).
3. Click ➕ **Add New Connection**.
4. Fill in the following:
- **API URL**: `http://localhost:8000/v1` (or your vLLM server URL)
- **API Key**: Leave empty (vLLM typically doesn't require an API key for local connections; see the note after this list if your server does require one)
5. Click **Save**.
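
If no models appear after you save the connection, the usual cause is that the API URL is not reachable from wherever Open WebUI runs, rather than an authentication problem. One exception: if you started vLLM with its `--api-key` option, enter that key in the **API Key** field instead of leaving it blank. As a sanity check, the sketch below (assuming the `openai` Python package, `pip install openai`) uses exactly the two values from the form to list the served models:

```python
# Check the same two values you entered in Open WebUI with the OpenAI client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # API URL from the form
    api_key="EMPTY",  # placeholder; vLLM ignores it unless it was started with --api-key
)

# If this prints your model IDs, Open WebUI should be able to list them too.
for model in client.models.list().data:
    print(model.id)
```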

---

## Step 3: Start Using Models

Select any model that's available on your vLLM server from the Model Selector and start chatting.
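
Under the hood, each message you send in Open WebUI becomes a standard OpenAI-style chat completion request to your vLLM server. The sketch below shows the equivalent direct call (again assuming the `openai` Python package); the model name is a placeholder, so substitute one of the IDs your server actually reports:

```python
# Equivalent of sending a single chat message to a model served by vLLM.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="your-served-model-id",  # placeholder; use an ID from /v1/models
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```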