diff --git a/docs/getting-started/quick-start/starting-with-functions.mdx b/docs/getting-started/quick-start/starting-with-functions.mdx
index 63c44447b..74fa12817 100644
--- a/docs/getting-started/quick-start/starting-with-functions.mdx
+++ b/docs/getting-started/quick-start/starting-with-functions.mdx
@@ -1,5 +1,5 @@
 ---
-sidebar_position: 5
+sidebar_position: 6
 title: "Getting Started with Functions"
 ---
 
diff --git a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
index c255bcb45..a1e30d948 100644
--- a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
+++ b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
@@ -1,6 +1,6 @@
 ---
-sidebar_position: 4
+sidebar_position: 5
 title: "Starting with OpenAI-Compatible Servers"
 ---
 
 
diff --git a/docs/getting-started/quick-start/starting-with-vllm.mdx b/docs/getting-started/quick-start/starting-with-vllm.mdx
new file mode 100644
index 000000000..6a6483ea8
--- /dev/null
+++ b/docs/getting-started/quick-start/starting-with-vllm.mdx
@@ -0,0 +1,71 @@
+---
+sidebar_position: 4
+title: "Starting With vLLM"
+---
+
+## Overview
+
+vLLM provides an OpenAI-compatible API, making it easy to connect to Open WebUI. This guide shows you how to connect your vLLM server.
+
+---
+
+## Step 1: Set Up Your vLLM Server
+
+Make sure your vLLM server is running and reachable from the machine that hosts Open WebUI. By default, the API base URL is:
+
+```
+http://localhost:8000/v1
+```
+
+For remote servers, replace `localhost` with the server's hostname or IP address. If you haven't launched a server yet, see the example command at the end of this guide.
+
+---
+
+## Step 2: Add the API Connection in Open WebUI
+
+1. Go to ⚙️ **Admin Settings**.
+2. Navigate to **Connections > OpenAI > Manage** (look for the wrench icon).
+3. Click ➕ **Add New Connection**.
+4. Fill in the following:
+   - **API URL**: `http://localhost:8000/v1` (or your vLLM server URL)
+   - **API Key**: Leave empty (vLLM doesn't require an API key unless you started the server with one)
+5. Click **Save**.
+
+---
+
+## Step 3: Start Using Models
+
+Select any model that's available on your vLLM server from the Model Selector and start chatting.
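+
+---
+
+## Optional: Command-Line Examples
+
+If you don't have a server running yet, the command below is a minimal sketch for Step 1. It assumes a recent vLLM release is installed, and the model name is only a placeholder; substitute any model you have access to.
+
+```bash
+# Serve a model over vLLM's OpenAI-compatible API on port 8000.
+# "Qwen/Qwen2.5-1.5B-Instruct" is a placeholder model name.
+vllm serve Qwen/Qwen2.5-1.5B-Instruct --host 0.0.0.0 --port 8000
+```
+
+`--host 0.0.0.0` makes the server reachable from other machines, which matters when Open WebUI runs on a different host or in a container.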
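+
+Before saving the connection in Step 2, you can confirm that the API URL is correct by listing the models the server exposes; `/v1/models` is part of the OpenAI-compatible API that vLLM serves:
+
+```bash
+# List the models available at the API URL you entered in Open WebUI.
+curl http://localhost:8000/v1/models
+```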
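+
+To check end to end that the server answers chat requests (which is what Open WebUI sends in Step 3), call the chat completions endpoint directly. The model name must match one returned by the command above:
+
+```bash
+# Send a single-turn chat request; the model name is a placeholder.
+curl http://localhost:8000/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "Qwen/Qwen2.5-1.5B-Instruct",
+    "messages": [{"role": "user", "content": "Hello!"}]
+  }'
+```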