Merged
10 changes: 10 additions & 0 deletions docs/faq.mdx
@@ -42,6 +42,16 @@ For more details on enterprise solutions and branding customizations, [click her

**A:** You can access the **File Manager** by going to **Settings > Data Controls > Manage Files > Manage**. This dashboard allows you to search through all your uploaded documents, view their details, and delete them. Deleting a file here also automatically cleans up any associated Knowledge Base entries and vector embeddings.

### Q: I get "The prompt is too long" / "context length exceeded" after a while in a chat. How do I fix it?

**A:** This error comes from the **model provider**, not from Open WebUI — the provider counts the tokens of everything you sent (system prompt + the *entire* chat history + attached files + tool calls + your new message) and rejects the request once it exceeds the model's context window. The "prompt" the model sees is the whole conversation, not just your latest message.

Open WebUI intentionally does **not** ship a built-in context trimmer. Every model has a different tokenizer and a different context window, and every deployment wants a different truncation policy (by tokens, by turns, by message count, file-attachments-first, summarize-and-replace, per-model budgets, and so on). There is no single policy that is correct for every user, so we expose the hook instead of choosing one for you.

Context management is done with [filter Functions](/features/extensibility/plugin/functions/filter): `inlet()` receives the full `body["messages"]` on every request and can modify it freely (drop old turns, enforce a turn limit, summarize, trim attachments, etc.). Many community-maintained context filters are already available for one-click install on [openwebui.com](https://openwebui.com/) — browse, install, and tune their valves. If none fits, copy the closest one into **Admin Panel → Functions** and edit it.
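As a concrete starting point, here is a minimal sketch of such a filter. It follows the documented filter class shape; the valve name and default limit are illustrative, not a shipped plugin:

```python
from pydantic import BaseModel, Field


class Filter:
    class Valves(BaseModel):
        max_turns: int = Field(
            default=8,
            description="Keep only the last N user/assistant messages.",
        )

    def __init__(self):
        self.valves = self.Valves()

    def inlet(self, body: dict) -> dict:
        messages = body.get("messages", [])
        # Preserve any system prompt, then keep only the newest turns so the
        # request stays under the model's context window.
        system = [m for m in messages if m.get("role") == "system"]
        rest = [m for m in messages if m.get("role") != "system"]
        body["messages"] = system + rest[-self.valves.max_turns :]
        return body
```

A real deployment would likely trim by token count rather than message count, but the hook point is the same.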

For the full write-up with examples, see [Context Window / Prompt Too Long](/troubleshooting/context-window).

### Q: Can I use Open WebUI offline, in air-gapped networks, or in extreme environments like outer space?

**A:** **Yes.** Open WebUI is a self-hosted, **internet-independent AI platform** designed to work in **air-gapped networks**, **remote deployments**, and any environment where cloud-based systems are impractical or impossible. Whether you need to **run an LLM without internet**, deploy a **private AI with no cloud dependency**, or operate a **local AI chatbot offline**, Open WebUI supports all of these out of the box. It runs entirely on local hardware and does not make external calls by default.
4 changes: 4 additions & 0 deletions docs/features/channels/index.md
@@ -94,6 +94,10 @@ With [**native function calling**](/features/extensibility/plugin/tools#tool-cal

This removes the need to manually bridge information between private chats and shared channels. The AI does it for you.

:::tip Community action: Forward to Channel
If you want a one-click path from a chat message into a channel, the community **[Forward to Channel](https://openwebui.com/posts/b60c1f03-e29c-47c0-862c-3741a382616e)** action adds a button to each assistant message that posts the reply (or a selection) into a channel of your choice. Useful for promoting good answers from private chats into team-visible spaces without copy-paste.
:::

---

## Getting Started
2 changes: 0 additions & 2 deletions docs/features/chat-conversations/chat-features/chatshare.md
@@ -46,8 +46,6 @@ Note: You can change the permission level of your shared chats on the community

:::

#### Copying a Share Link

When you select `Copy Link`, a unique share link is generated that can be shared with others.
@@ -7,7 +7,7 @@ Open WebUI offers powerful code execution capabilities directly within your chat

## Key Features

- **Code Interpreter Capability**: Enable models to autonomously write and execute Python code as part of their responses. Runs via the `execute_code` tool in Native (Agentic) Mode — the only supported tool-calling mode. An older XML-based integration exists for legacy Default Mode but is unsupported; new deployments should use Native Mode.

- **Python Code Execution**: Run Python scripts directly in your browser using Pyodide, or on a server using Jupyter. Supports popular libraries like pandas and matplotlib with no setup required.

@@ -44,4 +44,4 @@ Controls what happens when you click a follow-up prompt.

## Regenerating Follow-Ups

If you want to regenerate follow-up suggestions for a specific response, you can use the [Regenerate Follow-ups](https://openwebui.com/posts/9b5ac6d6-dfd6-4cad-bc1d-5518b138f22d) action button from the community.
@@ -37,9 +37,9 @@ To unlock these features, your model must support native tool calling and have s
5. **Use a Quality Model**: Ensure you're using a frontier model with strong reasoning capabilities for best results.

:::tip Model Capability, Default Features, and Chat Toggle
In **Native Mode** (the supported mode), the `search_web` and `fetch_url` tools require both the **Web Search** capability to be enabled *and* **Web Search** to be checked under **Default Features** in the model settings (or toggled on in the chat). If either is missing, the tools will not be injected — even though other builtin tools may still appear.

Default Mode's RAG-style injection behavior is documented here only for legacy deployments. Default Mode is no longer supported; all models should be configured for Native Mode.

**Important**: If you disable the `web_search` capability on a model but use Native Mode, the tools won't be available even if you manually toggle Web Search on in the chat.
:::
2 changes: 1 addition & 1 deletion docs/features/extensibility/plugin/development/events.mdx
@@ -357,7 +357,7 @@ While this event can technically be emitted from any plugin type (tools, pipes,
* **Chat Overview**: Favorited messages (pins) are highlighted in the conversation overview, making it easier for users to locate key information later.

#### Example: "Pin Message" Action
For a practical implementation of this event in a real-world plugin, see the **[Pin Message Action on Open WebUI Community](https://openwebui.com/posts/143594d1-0838-4f9a-9af2-b94d2952f7ba)**. This action demonstrates how to toggle the favorite status in the database and immediately sync the UI using the `chat:message:favorite` event.

---

13 changes: 13 additions & 0 deletions docs/features/extensibility/plugin/development/rich-ui.mdx
@@ -247,6 +247,19 @@ If your Rich UI embed needs to trigger downloads, interact with Open WebUI's fro
As an alternative for ephemeral interactions that need full page access, consider using the [`execute` event](/features/extensibility/plugin/development/events#execute-works-with-both-__event_call__-and-__event_emitter__) instead, which runs unsandboxed in the main page context.
:::

:::tip Community Showcase: Streaming Rich UI with same-origin
If you want to see how far Rich UI can go when same-origin is enabled, take a look at the community **[Inline Visualizer v2](https://github.com/Classic298/open-webui-plugins)** tool (see also the [show-and-tell discussion](https://github.com/open-webui/open-webui/discussions/23901)).

It demonstrates patterns that aren't in the basic docs:

- **Live streaming HTML/SVG.** The tool returns an empty wrapper; the model then emits markup inline between plain-text `@@@VIZ-START / @@@VIZ-END` markers in its normal response. A same-origin observer inside the iframe tails the parent chat's DOM, extracts the growing block, and reconciles new nodes into the iframe as tokens arrive — so dashboards and diagrams paint live, token-by-token, instead of popping in at the end of the stream.
- **Bidirectional bridges.** `sendPrompt(text)` turns any clickable node into a follow-up user message. `saveState(k, v)` / `loadState(k, fallback)` proxies parent `localStorage` scoped per-message so sliders and toggles survive reloads. `copyText`, `toast(msg, kind)`, and `openLink` round it out.
- **A shipped design system.** Theme-aware CSS variables, a 9-ramp color palette, SVG utility classes, auto light/dark adaptation, and 230 localized strings across 46 languages — all delivered from a single tool with no core changes.
- **Incremental DOM reconciliation.** A safe-cut HTML parser flushes the longest valid prefix on every tick; the reconciler only appends new nodes so existing elements never re-mount and animations never re-trigger during the stream.

This is a useful reference when you're trying to decide whether a generative-UI / streaming-UI feature needs a core change or can live purely in plugin-land. (Spoiler: almost always the latter.)
:::

## Rendering Position

- **Tool embeds** inside a tool call result render **inline** at the tool call indicator (the "View Result from..." line)
2 changes: 1 addition & 1 deletion docs/features/extensibility/plugin/functions/action.mdx
@@ -17,7 +17,7 @@ Action functions should always be defined as `async`. The backend is progressive

Actions are admin-managed functions that extend the chat interface with custom interactive capabilities. When a message is generated by a model that has actions configured, these actions appear as clickable buttons above the message.

A minimal scaffold is shown in the [Function Structure](#function-structure) section below. For real-world Action examples built by the community, browse [openwebui.com](https://openwebui.com/).

An example of a graph visualization Action can be seen in the video below.

91 changes: 50 additions & 41 deletions docs/features/extensibility/plugin/functions/pipe.mdx
@@ -137,7 +137,8 @@ Let's dive into a practical example where we'll create a Pipe that proxies requests

```python
from pydantic import BaseModel, Field
import httpx


class Pipe:
    class Valves(BaseModel):
        # ... unchanged valve fields collapsed in the diff view ...
        ...

    def __init__(self):
        self.valves = self.Valves()

    async def pipes(self):
        if not self.valves.OPENAI_API_KEY:
            return [{"id": "error", "name": "API Key not provided."}]

        headers = {
            "Authorization": f"Bearer {self.valves.OPENAI_API_KEY}",
            "Content-Type": "application/json",
        }

        try:
            async with httpx.AsyncClient() as client:
                r = await client.get(
                    f"{self.valves.OPENAI_API_BASE_URL}/models", headers=headers
                )
                r.raise_for_status()
                models = r.json()

            return [
                {
                    "id": model["id"],
                    "name": f'{self.valves.NAME_PREFIX}{model.get("name", model["id"])}',
                }
                for model in models["data"]
                if "gpt" in model["id"]
            ]
        except Exception:
            return [
                {
                    "id": "error",
                    "name": "Error fetching models. Please check your API Key.",
                }
            ]

    async def pipe(self, body: dict, __user__: dict):
        # ... header and model-id setup collapsed in the diff view ...
        ...

        # Update the model id in the body
        payload = {**body, "model": model_id}

        url = f"{self.valves.OPENAI_API_BASE_URL}/chat/completions"

        try:
            if body.get("stream", False):

                async def event_stream():
                    async with httpx.AsyncClient(timeout=None) as client:
                        async with client.stream(
                            "POST", url, json=payload, headers=headers
                        ) as r:
                            r.raise_for_status()
                            async for line in r.aiter_lines():
                                yield line

                return event_stream()

            async with httpx.AsyncClient(timeout=None) as client:
                r = await client.post(url, json=payload, headers=headers)
                r.raise_for_status()
                return r.json()
        except Exception as e:
            return f"Error: {e}"
```

:::tip Use an async HTTP client
This example uses [`httpx.AsyncClient`](https://www.python-httpx.org/async/) instead of `requests` because both `pipes()` and `pipe()` run inside Open WebUI's async event loop. Calling the synchronous `requests` library from an `async def` method blocks the loop for the full duration of the HTTP request (and, for streaming, the entire stream), which starves every other concurrent request on the instance. `httpx` is async-native, already a dependency, and a drop-in replacement for the common patterns.

If you must use a synchronous third-party library in an async handler, wrap the blocking call with `await anyio.to_thread.run_sync(...)` so it runs on a worker thread instead of the event loop.
:::

### Detailed Breakdown

#### Valves Configuration
@@ -261,8 +270,8 @@
1. **Prepare Headers**: Sets up the headers with the API key and content type.
2. **Extract Model ID**: Extracts the actual model ID from the selected model name.
3. **Prepare Payload**: Updates the body with the correct model ID.
4. **Make API Request**: Sends a POST request to the OpenAI API's chat completions endpoint via an `httpx.AsyncClient`.
5. **Handle Streaming**: If `stream` is `True`, returns an async generator that yields SSE lines from the upstream response.
6. **Error Handling**: Catches exceptions and returns an error message.

### Extending the Proxy Pipe
10 changes: 10 additions & 0 deletions docs/features/extensibility/plugin/index.mdx
@@ -31,6 +31,16 @@ title: "Tools & Functions (Plugins)"

Getting started with Tools and Functions is easy because everything’s already built into the core system! You just **click a button** and **import these features directly from the community**, so there’s no coding or deep technical work required.

:::tip Plugins can do *way* more than you think — and way more than is shown here
The pages that follow document every capability the plugin system exposes: every class shape, every lifecycle method, every `__arg__`, every event type, every return contract, every hook that touches the pipeline. That surface is *complete*.

What's **not** documented — because it can't be — is **what to use it for**. The ideas. The creative combinations. The "huh, I didn't realize you could do that with just an inlet filter and a `saveState` bridge" moments. Those live in the community's heads, not in these docs.

These are **developer docs**. The primitives are all here; the creativity is on you (and on the thousands of community plugins that have already stretched the system into shapes nobody on the core team predicted — live-streaming HTML dashboards, per-user cost enforcement, summarize-and-replace context managers, bidirectional interactive UIs, entire embedded design systems, in-chat MCP apps, forensic watermarking, and so on).

If you're weighing a feature request and thinking *"this needs a core change,"* ask *"can this be a plugin?"* first. Almost always the answer is yes.
:::

## What are "Tools" and "Functions"?

Let's start by thinking of **Open WebUI** as a "base" software that can do many tasks related to using Large Language Models (LLMs). But sometimes, you need extra features or abilities that don't come *out of the box*—this is where **tools** and **functions** come into play.