diff --git a/docs/enterprise/customers/_category_.json b/docs/enterprise/customers/_category_.json
new file mode 100644
index 0000000000..a737fd3211
--- /dev/null
+++ b/docs/enterprise/customers/_category_.json
@@ -0,0 +1,8 @@
+{
+ "label": "Customer Stories",
+ "position": 0,
+ "link": {
+ "type": "generated-index",
+ "slug": "/enterprise/customers"
+ }
+}
diff --git a/docs/enterprise/customers/samsung-semiconductor.mdx b/docs/enterprise/customers/samsung-semiconductor.mdx
new file mode 100644
index 0000000000..47b76ed961
--- /dev/null
+++ b/docs/enterprise/customers/samsung-semiconductor.mdx
@@ -0,0 +1,116 @@
+---
+sidebar_position: 1
+title: "Samsung Semiconductor Inc."
+description: "How Samsung Semiconductor built a secure, self-hosted AI platform with Open WebUI to boost research and development efficiency, cutting workflows from days to hours."
+keywords: ["Samsung Semiconductor AI", "Open WebUI case study", "on-prem AI platform", "enterprise LLM", "private AI deployment", "Kubernetes AI", "RAG internal data", "AI productivity", "secure generative AI", "LLM for R&D"]
+---
+
+# Samsung Semiconductor Accelerates R&D With Private, On-Prem AI Platform Powered by Open WebUI
+
+
+
+## Overview
+
+:::info
+How Samsung Semiconductor built a secure, self-hosted AI platform with **Open WebUI** to boost research and development efficiency, cutting workflows **from days to hours**.
+:::
+
+### At a Glance
+
+- **Users**: 1,000 - 4,999 employees
+- **Region**: United States (data residency enforced)
+- **Industry**: Semiconductor
+- **Deployment**: On-prem Kubernetes cluster
+- **Models**: Internal LLMs + SLMs
+- **Time-to-deploy**: 14-day pilot, full rollout in 30 days
+- **Adoption**: 40% active use in first week, stabilized at 5-10% daily actives
+- **Key Results**: 30% faster development cycles, seamless internal adoption
+
+
+## About Samsung Semiconductor Inc.
+
+[Samsung Semiconductor Inc. (SSI)](https://semiconductor.samsung.com/) delivers cutting-edge semiconductor solutions including DRAM, SSD, processors, and image sensors. With innovation at its core, the company supports global technology leaders and powers advancements across data centers, mobile devices, and AI systems.
+
+## The Challenge: Secure, Flexible AI at Scale
+
+As teams across SSI began experimenting with generative AI tools, leadership identified a need for a **self-hosted AI interface** that balanced **innovation with control**.
+
+The goal: provide employees a trusted environment to work with large language models (LLMs) without compromising data security or compliance.
+
+**Key Requirements**
+
+- **Simple, reliable** chatbot deployment
+- Integration with internal **Active Directory (SSO)**
+- **Full audit trails** and exportable logs
+- **Strict data residency** and internal networking
+- **Control** over plugin access and guardrails
+
+SaaS-based AI tools offered speed but lacked flexibility and governance. SSI required a platform they could host, audit, and evolve, without vendor lock-in.
+
+## The Solution: Open WebUI on Kubernetes
+Open WebUI was selected for its **open architecture, flexibility, and rapid proof-of-concept capabilities**. Within two weeks, SSI’s AI/ML engineering team had a production-ready deployment running inside their secure on-prem Kubernetes cluster.
+
+**Architecture Highlights**
+- **Compute / Orchestration**: Internal orchestration on Kubernetes
+- **Storage / Database**: Internal DB (encrypted and managed in-house)
+- **Networking**: Fully internal network isolation
+- **Logging / Monitoring**: SSI’s internal observability stack
+- **Security Controls**: Data residency enforced; internal user access controls
+
+
+> “Open WebUI gave us control across security, models, and UX, without vendor lock-in.” — Software Engineering, Samsung Semiconductor, Inc.
+
+## Adoption & Enablement
+
+To ensure smooth adoption, SSI launched a **14-day pilot with 100 analysts**. Following rapid success, Open WebUI was rolled out company-wide with two live training sessions and ongoing IT help desk support.
+Within 30 days:
+
+- 80% of targeted staff had adopted the platform
+- Daily active users stabilized at 5-10% of total employees
+- R&D teams reported **significant productivity improvements**
+
+> “Open WebUI provides users with an environment similar to commercial tools, giving them a sense of familiarity, and at the same time, it has the advantage of improving usability with its simple and intuitive design.” — AI/ML Engineering, Samsung Semiconductor, Inc.
+
+## Results: Speed, Adoption, and Control
+
+
+
+
+
+### 01. Development Speed
+
+R&D and software development cycles shortened by **30%**, accelerating iteration and innovation.
+
+### 02. Adoption & Usability
+
+Over **40% of employees** became active users within the first week, citing simplicity and responsiveness.
+
+### 03. Security & Compliance
+
+- **Fully private, on-premises deployment** ensured compliance with internal data residency requirements.
+- Built-in access control and logging supported **governance without slowing** teams.
+
+## Why Samsung Chose Open WebUI
+
+Open WebUI stood out as the only solution offering:
+- **Complete control** over models, data, and extensions
+- **Self-hosted flexibility** with enterprise-grade UI
+- **Rapid deployment** and minimal IT overhead
+- **High compatibility** with existing AI applications
+
+The platform empowered teams to collaborate confidently and securely, accelerating the path from idea to insight.
+
+## What’s Next
+
+Samsung Semiconductor plans to continue expanding its AI infrastructure with Open WebUI, integrating additional internal models and optimizing RAG performance over proprietary research datasets.
+
+
+---
+
+:::tip
+
+**Looking for an [Enterprise Plan](https://docs.openwebui.com/enterprise)?** — **[Speak with Our Sales Team Today!](https://docs.openwebui.com/enterprise)**
+
+Get **enhanced capabilities**, including **custom theming and branding**, **Service Level Agreement (SLA) support**, **Long-Term Support (LTS) versions**, and **more!**
+
+:::
\ No newline at end of file
diff --git a/docs/enterprise/index.mdx b/docs/enterprise/index.mdx
index 48d0b4d33d..3b4118fc94 100644
--- a/docs/enterprise/index.mdx
+++ b/docs/enterprise/index.mdx
@@ -1,36 +1,53 @@
---
-sidebar_position: 2000
+sidebar_position: 400
title: "🏢 Open WebUI for Enterprises"
---
import { Testimonals } from "@site/src/components/Testimonals";
-:::tip
+## The AI Platform Powering the World’s Leading Organizations
+
+In the rapidly advancing AI landscape, staying ahead isn't just a competitive advantage; it’s a necessity. Open WebUI is the **fastest-growing AI platform** designed for **seamless enterprise deployment**, helping organizations leverage cutting-edge AI capabilities with **unmatched efficiency**.
-## Built for Everyone, Backed by the Community
+
-Open WebUI is completely free to use as-is, with no restrictions or hidden limits.
+:::tip
+
+Open WebUI is **completely free to use as-is**, with no restrictions or hidden limits.
-It is **independently developed** and **sustained** by its users. **Optional** licenses are available to **support** ongoing development while providing **additional benefits** for businesses.
+We are **sustained** by our users. **Optional** licenses are available to **support** ongoing development while providing **additional benefits** for businesses.
:::
-## The AI Platform Powering the World’s Leading Organizations
+---
+
+
+#### Featured Customer Stories
+Discover how [**Samsung Semiconductor Inc.**](https://semiconductor.samsung.com/) built a secure, self-hosted AI platform using Open WebUI, reducing complex workflows from *days to hours* while maintaining strict data-security requirements.
+
+[Read the full story →](/enterprise/customers/samsung-semiconductor)
+
+
-In the rapidly advancing AI landscape, staying ahead isn't just a competitive advantage—it’s a necessity. Open WebUI is the **fastest-growing AI platform** designed for **seamless enterprise deployment**, helping organizations leverage cutting-edge AI capabilities with **unmatched efficiency**.
+Explore how other organizations are driving real impact with Open WebUI.
-## **Let’s Talk**
+[View all customer stories →](/enterprise/customers)
+
+---
+
+## Let’s Talk
+
+**sales@openwebui.com** — Send us your deployment **end user count (seats)**, and let’s explore how we can work together!
:::info
-Enterprise licenses and partnership opportunities are available exclusively to registered entities and organizations. At this time, we are unable to accommodate individual users. We appreciate your understanding and interest.
+Enterprise licenses and partnership opportunities are available **exclusively to registered entities and organizations**. At this time, we are unable to accommodate individual users. We appreciate your understanding and interest.
-To help us respond quickly and efficiently to your inquiry, **please use your official work email address**—**Personal email accounts (e.g. gmail.com, hotmail.com, icloud.com, yahoo.com etc.) are often flagged by our system** and will not be answered.
+To help us respond quickly and efficiently to your inquiry, **please use your official work email address**. Personal email accounts (e.g., gmail.com, hotmail.com, icloud.com, yahoo.com) are often flagged by our system and will not be answered.
:::
-📧 **sales@openwebui.com** — Send us your deployment **end user count (seats)**, and let’s explore how we can work together! Support available in **English & Korean (한국어), with more languages coming soon!**
Take your AI strategy to the next level with our **premium enterprise solutions**, crafted for organizations that demand **expert consulting, tailored deployment, and dedicated support.**
@@ -72,95 +89,63 @@ Thank you for understanding and respecting our partnership process.
---
-
-
----
-
## Why Enterprises Choose Open WebUI
-### 🚀 **Faster AI Innovation, No Vendor Lock-In**
+#### 🚀 Faster AI Innovation, No Vendor Lock-In
Unlike proprietary AI platforms that dictate your roadmap, **Open WebUI puts you in control**. Deploy **on-premise, in a private cloud, or hybrid environments**—without restrictive contracts.
-### 🔒 **Enterprise-Grade Security & Compliance**
+#### 🔒 Enterprise-Grade Security & Compliance
Security is a business-critical requirement. Open WebUI is built to support **SOC 2, HIPAA, GDPR, FedRAMP, and ISO 27001 compliance**, ensuring enterprise security best practices with **on-premise and air-gapped deployments**.
-### ⚡ **Reliable, Scalable, and Performance-Optimized**
+#### ⚡ Reliable, Scalable, and Performance-Optimized
Built for large-scale enterprise deployments with **multi-node high availability**, Open WebUI can be configured to ensure **99.99% uptime**, optimized workloads, and **scalability across regions and business units**.
-### 💡 **Fully Customizable & Modular**
+#### 💡 Fully Customizable & Modular
Customize every aspect of Open WebUI to fit your enterprise’s needs. **White-label, extend, and integrate** seamlessly with **your existing systems**, including **LDAP, Active Directory, and custom AI models**.
-### 🌍 **Thriving Ecosystem with Continuous Innovation**
+#### 🌍 Thriving Ecosystem with Continuous Innovation
With one of the **fastest iteration cycles in AI**, Open WebUI ensures your organization stays ahead with **cutting-edge features** and **continuous updates**—no waiting for long release cycles.
---
-## **Exclusive Enterprise Features & Services**
+## Exclusive Enterprise Features & Services
Open WebUI’s enterprise solutions provide mission-critical businesses with **a suite of advanced capabilities and dedicated support**, including:
-### 🔧 **Enterprise-Grade Support & SLAs**
-
-✅ **Priority SLA Support** – **24/7 support — Available in English and Korean (한국어)** with dedicated response times for mission-critical issues.
-
-✅ **Dedicated Account Manager** – A **single point of contact** for guidance, onboarding, and strategy.
-
-✅ **Exclusive Office Hours with Core Engineers** – Directly work with the engineers evolving Open WebUI.
+#### 🔧 Enterprise-Grade Support & SLAs
+- **Priority SLA Support** – **24/7 support** with dedicated response times for mission-critical issues.
+- **Dedicated Account Manager** – A **single point of contact** for guidance, onboarding, and strategy.
+- **Exclusive Office Hours with Core Engineers** – Directly work with the engineers evolving Open WebUI.
+#### ⚙ Customization & AI Model Optimization
+- **Custom Theming & Branding** – White-label Open WebUI to **reflect your enterprise identity**.
+- **Custom AI Model Integration & Fine-Tuning** – Integrate **proprietary** or **third-party** AI models tailored for your workflows.
+- **Private Feature Development** – Work directly with our team to **build custom features** specific to your organization’s needs.
-### ⚙ **Customization & AI Model Optimization**
+#### 🛡️ Advanced Security & Compliance
+- **On-Premise & Air-Gapped Deployments** – Full control over data, hosted in **your infrastructure**.
+- **Security Hardening & Compliance Audits** – Receive **customized compliance assessments** and configurations.
+- **Role-Based Access Control (RBAC)** – Enterprise-ready **SSO, LDAP, and IAM** integration.
-✅ **Custom Theming & Branding** – White-label Open WebUI to **reflect your enterprise identity**.
+#### 🏗️ Operational Reliability & Deployment Services
+- **Managed Deployments** – Our team helps you **deploy Open WebUI effortlessly**, whether **on-premise, hybrid, or cloud**.
+- **Version Stability & Long-Term Maintenance** – Enterprise customers receive **LTS (Long-Term Support) versions** for managed **stability and security** over time.
+- **Enterprise Backups & Disaster Recovery** – High availability with structured backup plans and rapid recovery strategies.
-✅ **Custom AI Model Integration & Fine-Tuning** – Integrate **proprietary** or **third-party** AI models tailored for your workflows.
-
-✅ **Private Feature Development** – Work directly with our team to **build custom features** specific to your organization’s needs.
-
-
-### 🛡️ **Advanced Security & Compliance**
-
-✅ **On-Premise & Air-Gapped Deployments** – Full control over data, hosted in **your infrastructure**.
-
-✅ **Security Hardening & Compliance Audits** – Receive **customized compliance assessments** and configurations.
-
-✅ **Role-Based Access Control (RBAC)** – Enterprise-ready **SSO, LDAP, and IAM** integration.
-
-
-### 🏗️ **Operational Reliability & Deployment Services**
-
-✅ **Managed Deployments** – Our team helps you **deploy Open WebUI effortlessly**, whether **on-premise, hybrid, or cloud**.
-
-✅ **Version Stability & Long-Term Maintenance** – Enterprise customers receive **LTS (Long-Term Support) versions** for managed **stability and security** over time.
-
-✅ **Enterprise Backups & Disaster Recovery** – High availability with structured backup plans and rapid recovery strategies.
-
-
-### 📚 **Enterprise Training, Workshops & Consulting**
-
-✅ **AI Training & Enablement** – Expert-led **workshops for your engineering and data science teams**.
-
-✅ **Operational AI Consulting** – On-demand **architecture, optimization, and deployment consulting**.
-
-✅ **Strategic AI Roadmap Planning** – Work with our experts to **define your AI transformation strategy**.
+#### 📚 Enterprise Training, Workshops & Consulting
+- **AI Training & Enablement** – Expert-led **workshops for your engineering and data science teams**.
+- **Operational AI Consulting** – On-demand **architecture, optimization, and deployment consulting**.
+- **Strategic AI Roadmap Planning** – Work with our experts to **define your AI transformation strategy**.
---
-## **Keep Open WebUI Thriving: Support Continuous Innovation**
-
-:::tip
+## Keep Open WebUI Thriving ❤️
Even if you **don’t need an enterprise license**, consider becoming a **sponsor** to help fund continuous development.
-It’s an **investment in stability, longevity, and ongoing improvements**. A well-funded Open WebUI means **fewer bugs, fewer security concerns, and a more feature-rich platform** that stays ahead of industry trends. The cost of sponsoring is **a fraction of what it would take to build, maintain, and support an equivalent AI system internally.**
-:::
-
-
-You can use Open WebUI for free, no strings attached. However, building, maintaining, supporting, and evolving such a powerful AI platform requires **significant effort, time, and resources**. Infrastructure costs, security updates, continuous improvements, and keeping up with the latest AI advancements all demand **dedicated engineering, operational, and research efforts**.
-
-If Open WebUI helps your business save time, money, or resources, we **encourage** you to consider supporting its development. As an **independently funded** project, sponsorship enables us to maintain **a fast iteration cycle to keep up with the rapid advancements in AI**. Your support directly contributes to critical features, security enhancements, performance improvements, and integrations that benefit everyone—including **you**. Open WebUI will continue to offer the same feature set without requiring an enterprise license, ensuring **accessibility for all users**.
-
-💙 **[Sponsor Open WebUI](https://github.com/sponsors/tjbck)** – Join our existing backers in keeping Open WebUI thriving.
+Open WebUI is free to use, but building, maintaining, and improving a platform of this scale takes real time, resources, and ongoing engineering work. Sponsorships help fund stability, security, new features, and long-term development—at a fraction of what it would cost to build and maintain an equivalent system in-house.
-Whether through **enterprise partnerships, contributions, or financial backing**, your support plays a crucial role in sustaining this powerful AI platform for businesses **worldwide**.
+💙 **[Sponsor Open WebUI](https://github.com/sponsors/tjbck)**
+Your support—through sponsorships, contributions, or enterprise partnerships—helps keep Open WebUI strong and evolving for users around the world.
\ No newline at end of file
diff --git a/docs/faq.mdx b/docs/faq.mdx
index 8fe272627c..b35bd853fa 100644
--- a/docs/faq.mdx
+++ b/docs/faq.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1200
-title: "📋 FAQ"
+title: "❓ FAQ"
---
import { TopBanners } from "@site/src/components/TopBanners";
@@ -98,7 +98,7 @@ To do this, configure your **Ollama model params** to allow a larger context win
### **Q: Is MCP (Model Context Protocol) supported in Open WebUI?**
-**A:** [Yes, Open WebUI officially supports MCP Tool Servers—but exclusively through an **OpenAPI-compliant proxy**](/openapi-servers/mcp) ([openapi-servers](https://github.com/open-webui/openapi-servers)) for optimal compatibility, security, and maintainability.
+**A:** [Yes, Open WebUI officially supports MCP Tool Servers—but exclusively through an **OpenAPI-compliant proxy**](/features/plugin/tools/openapi-servers/mcp) ([openapi-servers](https://github.com/open-webui/openapi-servers)) for optimal compatibility, security, and maintainability.
To bridge MCP (and other backend protocols), we provide a purpose-built proxy implementation available at: 👉 [https://github.com/open-webui/mcpo](https://github.com/open-webui/mcpo)
@@ -117,6 +117,10 @@ In summary: MCP is supported — as long as the MCP Tool Server is fronted by an
To stay informed, you can follow release notes and announcements on our [GitHub Releases page](https://github.com/open-webui/open-webui/releases).
+### **Q: Why is the frontend integrated into the same Docker image? Isn't this unscalable or problematic?**
+
+**A:** The assumption that bundling the frontend with the backend is unscalable comes from a misunderstanding of how modern Single-Page Applications work. Open WebUI’s frontend is a static SPA, meaning it consists only of HTML, CSS, and JavaScript files with no runtime coupling to the backend. Because these files are static, lightweight, and require no separate server, including them in the same image has no impact on scalability. This approach simplifies deployment, ensures every replica serves the exact same assets, and eliminates unnecessary moving parts. If you prefer, you can still host the SPA on any CDN or static hosting service and point it to a remote backend, but packaging both together is the standard and most practical method for containerized SPAs.
+
### **Q: Is Open WebUI scalable for large organizations or enterprise deployments?**
**A:** Yes—**Open WebUI is architected for massive scalability and production readiness.** It’s already trusted in deployments supporting extremely high user counts—**think tens or even hundreds of thousands of seats**—used by universities, multinational enterprises, and major organizations worldwide.
diff --git a/docs/features/audio/_category_.json b/docs/features/audio/_category_.json
new file mode 100644
index 0000000000..9d8d35a0b2
--- /dev/null
+++ b/docs/features/audio/_category_.json
@@ -0,0 +1,7 @@
+{
+ "label": "Speech-to-Text & Text-to-Speech",
+ "position": 500,
+ "link": {
+ "type": "generated-index"
+ }
+}
diff --git a/docs/tutorials/speech-to-text/_category_.json b/docs/features/audio/speech-to-text/_category_.json
similarity index 65%
rename from docs/tutorials/speech-to-text/_category_.json
rename to docs/features/audio/speech-to-text/_category_.json
index 38926e7532..78252bfaff 100644
--- a/docs/tutorials/speech-to-text/_category_.json
+++ b/docs/features/audio/speech-to-text/_category_.json
@@ -1,5 +1,5 @@
{
- "label": "🎤 Speech To Text",
+ "label": "Speech To Text",
"position": 5,
"link": {
"type": "generated-index"
diff --git a/docs/tutorials/speech-to-text/env-variables.md b/docs/features/audio/speech-to-text/env-variables.md
similarity index 100%
rename from docs/tutorials/speech-to-text/env-variables.md
rename to docs/features/audio/speech-to-text/env-variables.md
diff --git a/docs/tutorials/speech-to-text/stt-config.md b/docs/features/audio/speech-to-text/stt-config.md
similarity index 98%
rename from docs/tutorials/speech-to-text/stt-config.md
rename to docs/features/audio/speech-to-text/stt-config.md
index fd4f9159ea..fcc3d6fc95 100644
--- a/docs/tutorials/speech-to-text/stt-config.md
+++ b/docs/features/audio/speech-to-text/stt-config.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🗨️ Configuration"
+title: "Configuration"
---
Open Web UI supports both local, browser, and remote speech to text.
diff --git a/docs/tutorials/text-to-speech/Kokoro-FastAPI-integration.md b/docs/features/audio/text-to-speech/Kokoro-FastAPI-integration.md
similarity index 98%
rename from docs/tutorials/text-to-speech/Kokoro-FastAPI-integration.md
rename to docs/features/audio/text-to-speech/Kokoro-FastAPI-integration.md
index 09279f8d51..6db39faef9 100644
--- a/docs/tutorials/text-to-speech/Kokoro-FastAPI-integration.md
+++ b/docs/features/audio/text-to-speech/Kokoro-FastAPI-integration.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🗨️ Kokoro-FastAPI Using Docker"
+title: "Kokoro-FastAPI Using Docker"
---
:::warning
diff --git a/docs/tutorials/text-to-speech/_category_.json b/docs/features/audio/text-to-speech/_category_.json
similarity index 63%
rename from docs/tutorials/text-to-speech/_category_.json
rename to docs/features/audio/text-to-speech/_category_.json
index 0b92cd96a3..d171b331fc 100644
--- a/docs/tutorials/text-to-speech/_category_.json
+++ b/docs/features/audio/text-to-speech/_category_.json
@@ -1,5 +1,5 @@
{
- "label": "🗨️ Text-to-Speech",
+ "label": "Text-to-Speech",
"position": 5,
"link": {
"type": "generated-index"
diff --git a/docs/tutorials/text-to-speech/chatterbox-tts-api-integration.md b/docs/features/audio/text-to-speech/chatterbox-tts-api-integration.md
similarity index 99%
rename from docs/tutorials/text-to-speech/chatterbox-tts-api-integration.md
rename to docs/features/audio/text-to-speech/chatterbox-tts-api-integration.md
index 5b0ebb06bc..17990eb457 100644
--- a/docs/tutorials/text-to-speech/chatterbox-tts-api-integration.md
+++ b/docs/features/audio/text-to-speech/chatterbox-tts-api-integration.md
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🗨️ Chatterbox TTS — Voice Cloning"
+title: "Chatterbox TTS — Voice Cloning"
---
# Chatterbox TTS — Voice Cloning
diff --git a/docs/tutorials/text-to-speech/kokoro-web-integration.md b/docs/features/audio/text-to-speech/kokoro-web-integration.md
similarity index 98%
rename from docs/tutorials/text-to-speech/kokoro-web-integration.md
rename to docs/features/audio/text-to-speech/kokoro-web-integration.md
index e2d6d6cfad..5801618404 100644
--- a/docs/tutorials/text-to-speech/kokoro-web-integration.md
+++ b/docs/features/audio/text-to-speech/kokoro-web-integration.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🗨️ Kokoro Web - Effortless TTS for Open WebUI"
+title: "Kokoro Web - Effortless TTS for Open WebUI"
---
:::warning
diff --git a/docs/tutorials/text-to-speech/openai-edge-tts-integration.md b/docs/features/audio/text-to-speech/openai-edge-tts-integration.md
similarity index 99%
rename from docs/tutorials/text-to-speech/openai-edge-tts-integration.md
rename to docs/features/audio/text-to-speech/openai-edge-tts-integration.md
index 0fdf43958f..7bd30a307c 100644
--- a/docs/tutorials/text-to-speech/openai-edge-tts-integration.md
+++ b/docs/features/audio/text-to-speech/openai-edge-tts-integration.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🗨️ Edge TTS Using Docker"
+title: "Edge TTS Using Docker"
---
:::warning
diff --git a/docs/tutorials/text-to-speech/openedai-speech-integration.md b/docs/features/audio/text-to-speech/openedai-speech-integration.md
similarity index 99%
rename from docs/tutorials/text-to-speech/openedai-speech-integration.md
rename to docs/features/audio/text-to-speech/openedai-speech-integration.md
index b0942bb73c..b4813e71f9 100644
--- a/docs/tutorials/text-to-speech/openedai-speech-integration.md
+++ b/docs/features/audio/text-to-speech/openedai-speech-integration.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🗨️ Openedai-speech Using Docker"
+title: "Openedai-speech Using Docker"
---
:::warning
diff --git a/docs/features/auth/_category_.json b/docs/features/auth/_category_.json
index 274c99a4b4..f8a4c15a54 100644
--- a/docs/features/auth/_category_.json
+++ b/docs/features/auth/_category_.json
@@ -1,4 +1,7 @@
{
- "label": "🔐 Federated Authentication",
- "position": 0
+ "label": "Federated Authentication",
+ "position": 0,
+ "link": {
+ "type": "generated-index"
+ }
}
diff --git a/docs/features/auth/ldap.mdx b/docs/features/auth/ldap.mdx
index 3beaf7991e..0250006326 100644
--- a/docs/features/auth/ldap.mdx
+++ b/docs/features/auth/ldap.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🖥️ LDAP Authentication"
+title: "LDAP Authentication"
---
# OpenLDAP Integration
diff --git a/docs/features/auth/scim.mdx b/docs/features/auth/scim.mdx
index c20276aed8..4b56430b1b 100644
--- a/docs/features/auth/scim.mdx
+++ b/docs/features/auth/scim.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🚀 SCIM 2.0"
+title: "SCIM 2.0"
---
# SCIM 2.0 Support
diff --git a/docs/features/auth/sso/index.mdx b/docs/features/auth/sso/index.mdx
index 037baadc82..4c5075da01 100644
--- a/docs/features/auth/sso/index.mdx
+++ b/docs/features/auth/sso/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "⚡️ SSO (OAuth, OIDC, Trusted Header)"
+title: "SSO (OAuth, OIDC, Trusted Header)"
---
:::info
diff --git a/docs/features/auth/sso/keycloak.mdx b/docs/features/auth/sso/keycloak.mdx
index a4a5c7ef78..a3d1e761d7 100644
--- a/docs/features/auth/sso/keycloak.mdx
+++ b/docs/features/auth/sso/keycloak.mdx
@@ -1,5 +1,5 @@
---
-title: "🔑 Keycloak"
+title: "Keycloak"
---
:::warning
diff --git a/docs/features/channels/index.md b/docs/features/channels/index.md
index 8947af1e45..a7fb0f2e35 100644
--- a/docs/features/channels/index.md
+++ b/docs/features/channels/index.md
@@ -1,6 +1,6 @@
---
-sidebar_position: 7
-title: "📢 Channels"
+sidebar_position: 1000
+title: "Channels"
---
-Soon...
\ No newline at end of file
+Soon...
diff --git a/docs/features/chat-features/chat-params.md b/docs/features/chat-features/chat-params.md
index 3b0cdb1808..f2238cbb4b 100644
--- a/docs/features/chat-features/chat-params.md
+++ b/docs/features/chat-features/chat-params.md
@@ -1,6 +1,6 @@
---
sidebar_position: 4
-title: "⚙️ Chat Parameters"
+title: "Chat Parameters"
---
Within Open WebUI, there are three levels to setting a **System Prompt** and **Advanced Parameters**: per-chat basis, per-model basis, and per-account basis. This hierarchical system allows for flexibility while maintaining structured administration and control.
diff --git a/docs/features/chat-features/chatshare.md b/docs/features/chat-features/chatshare.md
index a99d77567d..e77f5e1a61 100644
--- a/docs/features/chat-features/chatshare.md
+++ b/docs/features/chat-features/chatshare.md
@@ -1,6 +1,6 @@
---
sidebar_position: 4
-title: "🗨️ Chat Sharing"
+title: "Chat Sharing"
---
### Enabling Community Sharing
diff --git a/docs/features/chat-features/code-execution/artifacts.md b/docs/features/chat-features/code-execution/artifacts.md
index cf54a6aaa8..30f3080c25 100644
--- a/docs/features/chat-features/code-execution/artifacts.md
+++ b/docs/features/chat-features/code-execution/artifacts.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🏺 Artifacts"
+title: "Artifacts"
---
## What are Artifacts and how do I use them in Open WebUI?
diff --git a/docs/features/chat-features/code-execution/index.md b/docs/features/chat-features/code-execution/index.md
index 2bff856f08..d2ef064980 100644
--- a/docs/features/chat-features/code-execution/index.md
+++ b/docs/features/chat-features/code-execution/index.md
@@ -1,6 +1,6 @@
---
sidebar_position: 5
-title: "🐍 Code Execution"
+title: "Code Execution"
---
Open WebUI offers powerful code execution capabilities directly within your chat interface, enabling you to transform ideas into actionable results without leaving the platform.
diff --git a/docs/features/chat-features/code-execution/mermaid.md b/docs/features/chat-features/code-execution/mermaid.md
index 1bb3a7cded..40a4a0e800 100644
--- a/docs/features/chat-features/code-execution/mermaid.md
+++ b/docs/features/chat-features/code-execution/mermaid.md
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🌊 MermaidJS Rendering"
+title: "MermaidJS Rendering"
---
## 🌊 MermaidJS Rendering Support in Open WebUI
diff --git a/docs/features/chat-features/code-execution/python.md b/docs/features/chat-features/code-execution/python.md
index ddedd1f97d..3b869ec5c6 100644
--- a/docs/features/chat-features/code-execution/python.md
+++ b/docs/features/chat-features/code-execution/python.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🐍 Python Code Execution"
+title: "Python Code Execution"
---
# 🐍 Python Code Execution
diff --git a/docs/features/chat-features/conversation-organization.md b/docs/features/chat-features/conversation-organization.md
index 54b26288ee..aaa477223e 100644
--- a/docs/features/chat-features/conversation-organization.md
+++ b/docs/features/chat-features/conversation-organization.md
@@ -1,6 +1,6 @@
---
sidebar_position: 4
-title: "🗂️ Organizing Conversations"
+title: "Organizing Conversations"
---
Open WebUI provides powerful organization features that help users manage their conversations. You can easily categorize and tag conversations, making it easier to find and retrieve them later. The two primary ways to organize conversations are through **Folders** and **Tags**.
diff --git a/docs/features/chat-features/index.mdx b/docs/features/chat-features/index.mdx
index 6edff57e17..93501123d7 100644
--- a/docs/features/chat-features/index.mdx
+++ b/docs/features/chat-features/index.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 1
-title: "💬 Chat Features"
+sidebar_position: 800
+title: "Chat Features"
---
# Chat Features Overview
diff --git a/docs/features/chat-features/url-params.md b/docs/features/chat-features/url-params.md
index 0a36f74a70..ab4195391e 100644
--- a/docs/features/chat-features/url-params.md
+++ b/docs/features/chat-features/url-params.md
@@ -1,6 +1,6 @@
---
sidebar_position: 5
-title: "🔗 URL Parameters"
+title: "URL Parameters"
---
In Open WebUI, chat sessions can be customized through various URL parameters. These parameters allow you to set specific configurations, enable features, and define model settings on a per-chat basis. This approach provides flexibility and control over individual chat sessions directly from the URL.
@@ -47,7 +47,7 @@ The following table lists the available URL parameters, their function, and exam
### 4. **Web Search**
-- **Description**: Enabling `web-search` allows the chat session to access [web search](/category/-web-search) functionality.
+- **Description**: Enabling `web-search` allows the chat session to access [web search](/category/web-search/) functionality.
- **How to Set**: Set this parameter to `true` to enable web search.
- **Example**: `/?web-search=true`
- **Behavior**: If enabled, the chat can retrieve web search results as part of its responses.
diff --git a/docs/features/evaluation/index.mdx b/docs/features/evaluation/index.mdx
index 027ecdd6a6..8f1a3e445c 100644
--- a/docs/features/evaluation/index.mdx
+++ b/docs/features/evaluation/index.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 6
-title: "📝 Evaluation"
+sidebar_position: 1100
+title: "Evaluation"
---
## Why Should I Evaluate Models?
diff --git a/docs/features/image-generation-and-editing/_category_.json b/docs/features/image-generation-and-editing/_category_.json
new file mode 100644
index 0000000000..306aa86953
--- /dev/null
+++ b/docs/features/image-generation-and-editing/_category_.json
@@ -0,0 +1,7 @@
+{
+ "label": "Create & Edit Images",
+ "position": 400,
+ "link": {
+ "type": "generated-index"
+ }
+}
diff --git a/docs/features/image-generation-and-editing/automatic1111.md b/docs/features/image-generation-and-editing/automatic1111.md
new file mode 100644
index 0000000000..a84a348829
--- /dev/null
+++ b/docs/features/image-generation-and-editing/automatic1111.md
@@ -0,0 +1,39 @@
+---
+sidebar_position: 1
+title: "AUTOMATIC1111"
+---
+
+:::warning
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+:::
+
+Open WebUI supports image generation through the **AUTOMATIC1111** [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API). Here are the steps to get started:
+
+### Initial Setup
+
+1. Ensure that you have [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) installed.
+2. Launch AUTOMATIC1111 with additional flags to enable API access:
+
+```bash
+./webui.sh --api --listen
+```
+
+3. For a Docker installation of Open WebUI with the environment variables preset, use the following command:
+
+```docker
+docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e AUTOMATIC1111_BASE_URL=http://host.docker.internal:7860/ -e ENABLE_IMAGE_GENERATION=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
+```
+
+### Setting Up Open WebUI with AUTOMATIC1111
+
+1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
+2. Set the `Image Generation Engine` field to `Default (Automatic1111)`.
+3. In the API URL field, enter the address where AUTOMATIC1111's API is accessible:
+
+
+
+```txt
+http://<automatic1111_address>:7860/
+```
+
+If you're running a Docker installation of Open WebUI and AUTOMATIC1111 on the same host, use `http://host.docker.internal:7860/` as your address.
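+
+If you want to confirm the API is reachable before configuring Open WebUI, you can query one of AUTOMATIC1111's REST endpoints directly. This is only a quick sanity check and assumes the UI is running locally on port 7860 with `curl` available:
+
+```bash
+# List the Stable Diffusion checkpoints the API can see; an error or empty
+# response usually means the UI was started without the --api flag.
+curl -s http://localhost:7860/sdapi/v1/sd-models
+```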
diff --git a/docs/features/image-generation-and-editing/comfyui.md b/docs/features/image-generation-and-editing/comfyui.md
new file mode 100644
index 0000000000..40cf5da5ff
--- /dev/null
+++ b/docs/features/image-generation-and-editing/comfyui.md
@@ -0,0 +1,273 @@
+---
+sidebar_position: 2
+title: "ComfyUI"
+---
+
+:::warning
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+:::
+
+ComfyUI is a powerful and modular node-based GUI for Stable Diffusion. It gives users a high degree of control over the image generation process. Learn more or download it from its [GitHub page](https://github.com/comfyanonymous/ComfyUI).
+
+To run ComfyUI and make it accessible to Open WebUI, you must start it with the `--listen` flag to bind to `0.0.0.0`. This allows it to accept connections from other computers on your network.
+
+Once running, the ComfyUI interface will be available at `http://<comfyui_address>:8188`.
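+
+For a standard source installation, starting ComfyUI typically looks like the sketch below (the exact entry point may differ for portable or packaged builds):
+
+```bash
+# Bind to all interfaces so other machines (and Docker containers)
+# on the network can reach ComfyUI on port 8188.
+python main.py --listen 0.0.0.0
+```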
+
+## Connecting ComfyUI to Open WebUI
+
+Since Open WebUI typically runs inside Docker, you must ensure the container can reach the host-based ComfyUI application via `host.docker.internal`.
+
+1. **Host Binding Check:** Ensure ComfyUI is running with the `--listen 0.0.0.0` flag (see above).
+2. **Firewall Check:** If the host firewall (UFW) is active, ensure port 8188 is allowed (`sudo ufw allow 8188/tcp`).
+
+3. **Docker Run Command (Linux Native Docker):**
+For Linux users not running Docker Desktop, you must explicitly map the host gateway when running the Open WebUI container.
+
+```docker
+docker run -d -p 3000:8080 \
+ --add-host=host.docker.internal:host-gateway \
+ -e COMFYUI_BASE_URL=http://host.docker.internal:8188/ \
+ -e ENABLE_IMAGE_GENERATION=True \
+ -v open-webui:/app/backend/data \
+ --name open-webui \
+ --restart always \
+ ghcr.io/open-webui/open-webui:main
+```
+
+Once you have ComfyUI installed and running, you can connect it to Open WebUI from the admin settings.
+
+## Create Image (Image Generation)
+
+1. **Navigate to Image Settings:** In Open WebUI, go to the **Admin Panel** > **Settings** > **Images**.
+
+2. **Enable and Configure ComfyUI:**
+ - Ensure the **Image Generation** toggle at the top of the page is enabled.
+ - Under the **Create Image** section, set the **Image Generation Engine** to `ComfyUI`.
+ - **Model**: Select the base model to be used for generating the image.
+ - **Image Size**: Defines the resolution of the generated image (e.g., 512x512, 1024x1024).
+ - **Steps**: The number of sampling steps; higher values can improve image quality but take longer to process.
+ - **Image Prompt Generation**: When enabled, this feature uses a language model to automatically generate a more detailed and creative prompt based on your initial input, which can lead to better image results.
+ - In the **ComfyUI Base URL** field, enter the address of your running ComfyUI instance (e.g., `http://host.docker.internal:8188/`).
+ - Click the **refresh icon** (🔄) next to the URL field to verify the connection. A success message should appear.
+ - If your ComfyUI instance requires an API key, enter it in the **ComfyUI API Key** field.
+
+ 
+
+3. **Upload Your ComfyUI Workflow:**
+   - First, you need to export a workflow from ComfyUI in the correct format. In the ComfyUI interface, click the ComfyUI logo at the top left and click **Settings**. Then toggle **"Dev Mode"**, the option described as "Enable dev mode options (API save, etc.)".
+ - While still in ComfyUI, load the **image generation** workflow you want to use, and then click the **"Save (API Format)"** button. This will prompt you to give a name to the file. Name it something memorable and download the file.
+ - Back in Open WebUI, under the **ComfyUI Workflow** section, click **Upload**. Select the JSON workflow file you just downloaded.
+
+ 
+
+4. **Map Workflow Nodes:**
+ - After the workflow is imported, you must map the node IDs from your workflow to the corresponding fields in Open WebUI (e.g., `Prompt`, `Model`, `Seed`). This tells Open WebUI which inputs in your ComfyUI workflow to control.
+ - You can find the node ID by clicking on a node in ComfyUI and viewing its details.
+
+ 
+
+ :::info
+ You may need to adjust an `Input Key` within Open WebUI's `ComfyUI Workflow Nodes` section to match a node in your workflow. For example, the default `seed` key might need to be changed to `noise_seed` depending on your workflow's structure.
+ :::
+
+ :::tip
+   Some workflows, such as those that use any of the Flux models, may require multiple node IDs to be filled in for their node entry fields within Open WebUI. If a node entry field requires multiple IDs, the node IDs should be comma-separated (e.g., `1` or `1, 2`).
+ :::
+
+5. **Save Configuration:**
+ - Click the **Save** button at the bottom of the page to finalize the configuration. You can now use ComfyUI for image generation in Open WebUI.
+
+## Edit Image
+
+Open WebUI also supports image editing through ComfyUI, allowing you to modify existing images.
+
+1. **Navigate to Image Settings:** In Open WebUI, go to the **Admin Panel** > **Settings** > **Images**.
+
+2. **Configure Image Editing:**
+ - Under the **Edit Image** section, set the **Image Edit Engine** to `ComfyUI`.
+ - **Model**: Select the model to be used for the editing task.
+ - **Image Size**: Specify the desired resolution for the output image.
+ - **ComfyUI Base URL** and **API Key**: These fields are shared with the image generation settings.
+ - **ComfyUI Workflow**: Upload a separate workflow file specifically designed for image editing tasks. The process is the same as for image generation.
+ - **Map Workflow Nodes**: Just as with image generation, you must map the node IDs from your editing workflow to the corresponding fields in Open WebUI. Common fields for editing workflows include `Image`, `Prompt`, and `Model`.
+
+ 
+
+
+### Deeper Dive: Mapping ComfyUI Nodes to Open WebUI
+
+Understanding the node ID mapping is often the biggest hurdle in integrating ComfyUI with an external service like Open WebUI. Integrating ComfyUI via API requires mapping Open WebUI's generic controls (e.g., "Model," "Width," "Prompt") to specific node inputs within your static ComfyUI workflow JSON.
+
+#### 1. Identifying Node IDs and Input Keys in ComfyUI
+
+Before configuring Open WebUI, you must examine your exported workflow JSON files directly in a text editor. The Node ID is the unique number ComfyUI uses to identify the node in the JSON structure. The top-level keys in the JSON object are the node IDs.
+
+**Identify the Input Key (The Parameter Name)**
+
+The Input Key is the exact parameter name within that node's JSON structure that you need to change (e.g., `seed`, `width`, `unet_name`).
+
+1. **Examine the JSON**: Look at your API workflow JSON (`workflow_api.json`).
+2. **Find the Node ID**: Locate the section corresponding to the node's ID (e.g., `"37"`).
+3. **Identify the Key**: Within the `"inputs"` block, find the variable you want to control.
+
+**Example: unet_name Node (ID 37)**
+
+```json
+"37": {
+ "inputs": {
+ "unet_name": "qwen_image_fp8_e4m3fn.safetensors",
+ "weight_dtype": "default"
+ },
+ "class_type": "UNETLoader",
+ "_meta": {
+ "title": "Load Diffusion Model"
+ }
+},
+```
+
+
+
+In this example, the Input Keys are `unet_name` and `weight_dtype`.
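+
+If you prefer the command line to a text editor, you can list every node ID together with its class type and input keys. This is just a convenience sketch; it assumes `jq` is installed and the exported file is named `workflow_api.json`:
+
+```bash
+# Print "<node id>  <class_type>  <input keys>" for every node in the workflow
+jq -r 'to_entries[] | "\(.key)\t\(.value.class_type)\t\(.value.inputs | keys | join(", "))"' workflow_api.json
+```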
+
+#### 2. Mapping in Open WebUI
+
+In the Open WebUI settings under **ComfyUI Workflow Nodes**, you will see a list of hard-coded parameters (e.g., `Prompt`, `Model`, `Seed`). For each parameter, you must provide two pieces of information from your workflow:
+
+- **Input Key (Left Field)**: The specific parameter name from the node's `inputs` block in your workflow JSON (e.g., `text`, `unet_name`, `seed`).
+- **Node ID (Right Field)**: The corresponding ID of the node you want to control (e.g., `6`, `39`, `3`).
+
+This tells Open WebUI to find the node with the given ID and modify the value of the specified input key.
+
+**Example: Mapping KSampler Seed**
+
+Let's say you want to control the `seed` in your KSampler node, which has an ID of `3`. In the `Seed` section of the Open WebUI settings:
+
+| Open WebUI Parameter | Input Key (Left Field) | Node ID (Right Field) |
+|----------------------|------------------------|-----------------------|
+| `Seed` | `seed` | `3` |
+
+#### 3. Handling Complex/Multimodal Nodes (Qwen Example)
+
+For specialized nodes, the Input Key may not be the obvious one.
+
+| Parameter | Input Key (Left Field) | Node ID (Right Field) | Note |
+|-------------|------------------------|-----------------------|--------------------------------------------------------------------------------------------------|
+| **Prompt** | `prompt` | `76` | The key is still `prompt`, but it targets the specialized TextEncodeQwenImageEdit node (76). |
+| **Model** | `unet_name` | `37` | You must use the exact input key `unet_name` to control the model file name in the UNETLoader. |
+| **Image Input** | `image` | `78` | The key is `image`. This passes the filename of the source image to the LoadImage node. |
+
+#### 4. Troubleshooting Mismatch Errors
+
+If ComfyUI stalls or gives a validation error, consult the log and the JSON structure:
+
+| Error Type | Cause & Debugging | Solution |
+|---------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `Value not in list: unet_name: 'xyz.safetensors'` | You mapped the correct node ID (e.g., 37), but the value being passed (e.g., `xyz.safetensors`) is not a valid model name for that node type (e.g., accidentally sending a VAE model to a UNET loader). | Correct the model name set in Open WebUI for either image generation or editing, ensuring the model name matches the type of model the ComfyUI node is expecting. |
+| `Missing input ` | Your workflow requires an input (e.g., `cfg` or `sampler_name`), but Open WebUI did not send a value because the field was not mapped. | Either hardcode the value in the workflow JSON, or map the required input key to the correct node ID. |
+
+By meticulously matching the Node ID and the specific Input Key, you ensure Open WebUI correctly overwrites the default values in your workflow JSON before submitting the prompt to ComfyUI.
+
+## Example Setup: Qwen Image Generation and Editing
+
+This section provides a supplementary guide on setting up the Qwen models for both image generation and editing.
+
+### Qwen Image Generation
+
+#### Model Download
+
+- **Diffusion Model**: [qwen_image_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors)
+- **Text Encoder**: [qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)
+- **VAE**: [qwen_image_vae.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors)
+
+#### Model Storage Location
+
+```
+📂 ComfyUI/
+├── 📂 models/
+│ ├── 📂 diffusion_models/
+│ │ └── qwen_image_fp8_e4m3fn.safetensors
+│ ├── 📂 vae/
+│ │ └── qwen_image_vae.safetensors
+│ └── 📂 text_encoders/
+│ └── qwen_2.5_vl_7b_fp8_scaled.safetensors
+```
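+
+The download links above can also be fetched straight into those folders from the command line. This is a hedged sketch (assuming ComfyUI lives at `./ComfyUI` and `wget` is available); the same pattern applies to the editing and FLUX.1 models below:
+
+```bash
+cd ComfyUI/models
+# -P places each file into the subfolder ComfyUI expects
+wget -P diffusion_models https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors
+wget -P text_encoders https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors
+wget -P vae https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors
+```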
+
+### Qwen Image Editing
+
+#### Model Download
+
+- **Diffusion Model**: [qwen_image_edit_fp8_e4m3fn.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image-Edit_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_edit_fp8_e4m3fn.safetensors)
+- **Text Encoder**: [qwen_2.5_vl_7b_fp8_scaled.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors)
+- **VAE**: [qwen_image_vae.safetensors](https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors)
+
+#### Model Storage Location
+
+```
+📂 ComfyUI/
+├── 📂 models/
+│ ├── 📂 diffusion_models/
+│ │ └── qwen_image_edit_fp8_e4m3fn.safetensors
+│ ├── 📂 vae/
+│ │ └── qwen_image_vae.safetensors
+│ └── 📂 text_encoders/
+│ └── qwen_2.5_vl_7b_fp8_scaled.safetensors
+```
+
+## Example Setup: FLUX.1 Image Generation
+
+This section provides a supplementary guide on setting up the FLUX.1 models for image generation.
+
+### FLUX.1 Dev
+
+#### Model Download
+
+- **Diffusion Model**: [flux1-dev.safetensors](https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors)
+- **Text Encoder 1**: [clip_l.safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors?download=true)
+- **Text Encoder 2**: [t5xxl_fp16.safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors?download=true) (Recommended when your VRAM is greater than 32GB)
+- **VAE**: [ae.safetensors](https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors?download=true)
+
+#### Model Storage Location
+
+```
+📂 ComfyUI/
+├── 📂 models/
+│ ├── 📂 diffusion_models/
+│ │ └── flux1-dev.safetensors
+│ ├── 📂 vae/
+│ │ └── ae.safetensors
+│ └── 📂 text_encoders/
+│ ├── clip_l.safetensors
+│ └── t5xxl_fp16.safetensors
+```
+
+### FLUX.1 Schnell
+
+#### Model Download
+
+- **Diffusion Model**: [flux1-schnell.safetensors](https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/flux1-schnell.safetensors)
+- **Text Encoder 1**: [clip_l.safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors?download=true)
+- **Text Encoder 2**: [t5xxl_fp8_e4m3fn.safetensors](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors?download=true) (Recommended when your VRAM is less than 32GB)
+- **VAE**: [ae.safetensors](https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors?download=true)
+
+#### Model Storage Location
+
+```
+📂 ComfyUI/
+├── 📂 models/
+│ ├── 📂 diffusion_models/
+│ │ └── flux1-schnell.safetensors
+│ ├── 📂 vae/
+│ │ └── ae.safetensors
+│ └── 📂 text_encoders/
+│ ├── clip_l.safetensors
+│ └── t5xxl_fp8_e4m3fn.safetensors
+```
+
+## Configuring with SwarmUI
+
+SwarmUI utilizes ComfyUI as its backend. In order to get Open WebUI to work with SwarmUI, you will have to append `ComfyBackendDirect` to the `ComfyUI Base URL`. Additionally, you will want to set up SwarmUI with LAN access. After the aforementioned adjustments, setting up SwarmUI to work with Open WebUI follows the same steps as the [Create Image (Image Generation)](#create-image-image-generation) section outlined above.
+
+
+### SwarmUI API URL
+
+The address you will input as the ComfyUI Base URL will look like: `http://<swarmui_address>:7801/ComfyBackendDirect`
diff --git a/docs/features/image-generation-and-editing/gemini.md b/docs/features/image-generation-and-editing/gemini.md
new file mode 100644
index 0000000000..ae0c5b759b
--- /dev/null
+++ b/docs/features/image-generation-and-editing/gemini.md
@@ -0,0 +1,68 @@
+---
+sidebar_position: 5
+title: "Gemini"
+---
+
+:::warning
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+:::
+
+Open WebUI also supports image generation through the **Google AI Studio API**, also known as the **Gemini API**.
+
+### Initial Setup
+
+1. Obtain an [API key](https://aistudio.google.com/api-keys) from Google AI Studio.
+2. You may need to create a project and enable the `Generative Language API` in addition to adding billing information.
+
+### Configuring Open WebUI
+
+1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
+2. Set the `Image Generation Engine` field to `Gemini`.
+3. Set the `API Base URL` to `https://generativelanguage.googleapis.com/v1beta`.
+4. Enter your Google AI Studio [API key](https://aistudio.google.com/api-keys).
+5. Enter the model you wish to use from these [available models](https://ai.google.dev/gemini-api/docs/imagen#model-versions).
+6. Set the image size to one of the available [image sizes](https://ai.google.dev/gemini-api/docs/image-generation#aspect_ratios).
+
+
+
+:::info
+
+This feature appears to only work for models that support this endpoint format: `https://generativelanguage.googleapis.com/v1beta/models/<model>:predict`.
+This is the OpenAI **BETA** endpoint, which Google provides for experimental OpenAI compatibility.
+
+Google Imagen models use this endpoint, while Gemini models use a different endpoint ending with `:generateContent`.
+
+Imagen model endpoint example:
+
+- `https://generativelanguage.googleapis.com/v1beta/models/imagen-4.0-generate-001:predict`.
+- [Documentation for Imagen models](https://ai.google.dev/gemini-api/docs/imagen)
+
+Gemini model endpoint example:
+
+- `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent`.
+- [Documentation for Gemini models](https://ai.google.dev/gemini-api/docs/image-generation)
+
+Trying to call a Gemini model, such as gemini-2.5-flash-image (aka *Nano Banana*), would result in an error due to the difference in supported endpoints for image generation.
+
+`400: [ERROR: models/gemini-2.5-flash-image is not found for API version v1beta, or is not supported for predict. Call ListModels to see the list of available models and their supported methods.]`
+
+:::
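+
+To verify your key and model choice outside of Open WebUI, you can call the `:predict` endpoint directly. The sketch below is an assumption-laden example: it uses the Imagen model named above, expects your key in the `GEMINI_API_KEY` environment variable, and follows the Imagen REST request format:
+
+```bash
+curl -s -X POST \
+  "https://generativelanguage.googleapis.com/v1beta/models/imagen-4.0-generate-001:predict" \
+  -H "x-goog-api-key: ${GEMINI_API_KEY}" \
+  -H "Content-Type: application/json" \
+  -d '{"instances": [{"prompt": "A watercolor lighthouse at dusk"}], "parameters": {"sampleCount": 1}}'
+```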
+
+### LiteLLM Proxy with Gemini Endpoints
+
+Image generation with a LiteLLM proxy using Gemini or Imagen endpoints is supported with Open WebUI. Configure the Image Generation as follows:
+
+1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
+2. Set the `Image Generation Engine` field to `Open AI`.
+3. Change the API endpoint URL to `https://<litellm_host>:<port>/v1`.
+4. Enter your LiteLLM API key.
+5. The API version can be left blank.
+6. Enter the image model name as it appears in your LiteLLM configuration.
+7. Set the image size to one of the available sizes for the selected model.
+
+:::tip
+
+To find your LiteLLM connection information, navigate to the **Admin Panel** > **Settings** > **Connections** menu.
+Your connection information will be listed under the Gemini API connection.
+
+:::
\ No newline at end of file
diff --git a/docs/features/image-generation-and-editing/image-router.md b/docs/features/image-generation-and-editing/image-router.md
new file mode 100644
index 0000000000..2a6034ac4e
--- /dev/null
+++ b/docs/features/image-generation-and-editing/image-router.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 4
+title: "Image Router"
+---
+
+:::warning
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+:::
+
+Open WebUI also supports image generation through the **Image Router APIs**. Image Router is an [open source](https://github.com/DaWe35/image-router) image generation proxy that unifies the most popular models into a single API.
+
+### Initial Setup
+
+1. Obtain an [API key](https://imagerouter.io/api-keys) from Image Router.
+
+### Configuring Open WebUI
+
+1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
+2. Set the `Image Generation Engine` field to `Open AI` (Image Router uses the same syntax as OpenAI).
+3. Change the API endpoint URL to `https://api.imagerouter.io/v1/openai`
+4. Enter your Image Router API key.
+5. Enter the model you wish to use. Do not use the dropdown to select models; enter the model name manually instead. For more information, [see all models](https://imagerouter.io/models).
+
+
diff --git a/docs/features/image-generation-and-editing/openai.md b/docs/features/image-generation-and-editing/openai.md
new file mode 100644
index 0000000000..016a6a6d8f
--- /dev/null
+++ b/docs/features/image-generation-and-editing/openai.md
@@ -0,0 +1,65 @@
+---
+sidebar_position: 3
+title: "OpenAI"
+---
+
+:::warning
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+:::
+
+Open WebUI also supports image generation through the **OpenAI APIs**. This option includes a selector for choosing between DALL·E 2, DALL·E 3, and GPT-Image-1, each supporting different image sizes.
+
+### Initial Setup
+
+1. Obtain an [API key](https://platform.openai.com/api-keys) from OpenAI.
+
+### Configuring Open WebUI
+
+1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
+2. Set the `Image Generation Engine` field to `Open AI`.
+3. Enter your OpenAI API key.
+4. Choose the model you wish to use. Note that image size options will depend on the selected model:
+ - **DALL·E 2**: Supports `256x256`, `512x512`, or `1024x1024` images.
+ - **DALL·E 3**: Supports `1024x1024`, `1792x1024`, or `1024x1792` images.
+ - **GPT-Image-1**: Supports `auto`, `1024x1024`, `1536x1024`, or `1024x1536` images.
+
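+If you want to confirm that your key and model work before configuring Open WebUI, a quick check with the official `openai` Python package looks roughly like this (the prompt is just an example):
+
+```python
+from openai import OpenAI
+
+client = OpenAI(api_key="sk-your-openai-api-key")  # replace with your own key
+
+# Generate a single image with DALL·E 3 at one of its supported sizes.
+response = client.images.generate(
+    model="dall-e-3",
+    prompt="A photorealistic espresso machine made of glass",
+    size="1024x1024",
+    n=1,
+)
+
+print(response.data[0].url)  # temporary URL of the generated image
+```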
+
+
+### Azure OpenAI
+
+Image generation with Azure OpenAI DALL·E or GPT-Image-1 is supported by Open WebUI. Configure image generation as follows:
+
+1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
+2. Set the `Image Generation Engine` field to `Open AI` (Azure OpenAI uses the same syntax as OpenAI).
+3. Change the API endpoint URL to `https://<instance>.cognitiveservices.azure.com/openai/deployments/<model-id>/`. Replace the instance and model ID with the values shown in your Azure AI Foundry settings.
+4. Set the API version to the value shown in your Azure AI Foundry settings.
+5. Enter your Azure OpenAI API key.
+
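+You can also test the deployment outside Open WebUI with the `AzureOpenAI` client from the `openai` Python package. The sketch below is only illustrative; the endpoint, deployment name, API version, and key are placeholders, so replace them with the values from your Azure AI Foundry settings.
+
+```python
+from openai import AzureOpenAI
+
+# Replace the placeholders with the values from your Azure AI Foundry settings.
+client = AzureOpenAI(
+    azure_endpoint="https://your-instance.cognitiveservices.azure.com",
+    api_key="your-azure-openai-api-key",
+    api_version="2025-04-01-preview",  # use the API version shown for your deployment
+)
+
+response = client.images.generate(
+    model="your-deployment-name",  # the deployment (model ID) created in Azure
+    prompt="An isometric illustration of a data center",
+    size="1024x1024",
+)
+
+image = response.data[0]
+print(image.url or (image.b64_json or "")[:60])  # image URL or the start of the base64 payload
+```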
+
+
+:::tip
+
+Alternative API endpoint URL format: `https://<endpoint-name>.openai.azure.com/openai/deployments/<model-name>/` - you can find your endpoint name at https://ai.azure.com/resource/overview and your model name at https://ai.azure.com/resource/deployments.
+You can also copy the Target URI from your deployment's details page, but remember to remove everything after the model name.
+For example, if your Target URI is `https://test.openai.azure.com/openai/deployments/gpt-image-1/images/generations?api-version=2025-04-01-preview`, the API endpoint URL in Open WebUI should be `https://test.openai.azure.com/openai/deployments/gpt-image-1/`.
+
+:::
+
+### LiteLLM Proxy with OpenAI Endpoints
+
+Image generation through a LiteLLM proxy using OpenAI endpoints is supported by Open WebUI. Configure image generation as follows:
+
+1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
+2. Set the `Image Generation Engine` field to `Open AI`.
+3. Change the API endpoint URL to `https://<your-litellm-host>:<port>/v1`.
+4. Enter your LiteLLM API key.
+5. The API version can be left blank.
+6. Enter the image model name as it appears in your LiteLLM configuration.
+7. Set the image size to one of the available sizes for the selected model.
+
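+As with the other engines, you can optionally verify the proxy with an OpenAI-compatible client before configuring Open WebUI. Below is a minimal sketch with the `openai` Python package, assuming placeholder host, key, and model values.
+
+```python
+from openai import OpenAI
+
+# Replace host, port, API key, and model with the values from your LiteLLM setup.
+client = OpenAI(
+    base_url="https://your-litellm-host:4000/v1",
+    api_key="sk-your-litellm-key",
+)
+
+response = client.images.generate(
+    model="dall-e-3",  # the model name exactly as it appears in your LiteLLM configuration
+    prompt="A minimalist poster of a mountain range",
+    size="1024x1024",
+)
+
+image = response.data[0]
+print(image.url or (image.b64_json or "")[:60])  # image URL or the start of the base64 payload
+```
+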
+:::tip
+
+To find your LiteLLM connection information, navigate to the **Admin Panel** > **Settings** > **Connections** menu.
+Your connection information will be listed under the OpenAI API connection.
+
+:::
diff --git a/docs/features/image-generation-and-editing/usage.md b/docs/features/image-generation-and-editing/usage.md
new file mode 100644
index 0000000000..2d777f113e
--- /dev/null
+++ b/docs/features/image-generation-and-editing/usage.md
@@ -0,0 +1,30 @@
+---
+sidebar_position: 6
+title: "Usage"
+---
+
+Before you can use image generation, you must ensure that the **Image Generation** toggle is enabled in the **Admin Panel** > **Settings** > **Images** menu.
+
+## Using Image Generation
+
+### Method 1
+
+1. Toggle the `Image Generation` switch to on.
+2. Enter your image generation prompt.
+3. Click `Send`.
+
+
+
+### Method 2
+
+
+
+1. First, use a text generation model to write a prompt for image generation.
+2. After the response has finished, you can click the Picture icon to generate an image.
+3. After the image has finished generating, it will be returned automatically in chat.
+
+:::tip
+
+You can also edit the LLM's response and replace it with your own image generation prompt; the edited message is then sent for image generation instead of the LLM's original response.
+
+:::
diff --git a/docs/features/index.mdx b/docs/features/index.mdx
index 89af2b124d..7431140f41 100644
--- a/docs/features/index.mdx
+++ b/docs/features/index.mdx
@@ -1,5 +1,5 @@
---
-sidebar_position: 400
+sidebar_position: 200
title: "⭐ Features"
---
diff --git a/docs/features/interface/_category_.json b/docs/features/interface/_category_.json
index 1e34d5f7db..5898d63644 100644
--- a/docs/features/interface/_category_.json
+++ b/docs/features/interface/_category_.json
@@ -1,4 +1,7 @@
{
- "label": "🔌 Interface",
- "position": 4
+ "label": "Interface",
+ "position": 900,
+ "link": {
+ "type": "generated-index"
+ }
}
diff --git a/docs/features/interface/banners.md b/docs/features/interface/banners.md
index 2d23a19459..41f67397ba 100644
--- a/docs/features/interface/banners.md
+++ b/docs/features/interface/banners.md
@@ -1,6 +1,6 @@
---
sidebar_position: 13
-title: "🔰 Customizable Banners"
+title: "Customizable Banners"
---
## Overview
diff --git a/docs/features/interface/webhooks.md b/docs/features/interface/webhooks.md
index 3546202c70..e6bc640d95 100644
--- a/docs/features/interface/webhooks.md
+++ b/docs/features/interface/webhooks.md
@@ -1,6 +1,6 @@
---
sidebar_position: 17
-title: "🪝 Webhook Integrations"
+title: "Webhook Integrations"
---
## Overview
diff --git a/docs/features/mcp.mdx b/docs/features/mcp.mdx
index 02d737d5dc..9c1ffad1af 100644
--- a/docs/features/mcp.mdx
+++ b/docs/features/mcp.mdx
@@ -1,5 +1,6 @@
---
-title: 🔌 Model Context Protocol (MCP)
+title: Model Context Protocol (MCP)
+sidebar_position: 1200
---
Open WebUI natively supports **MCP (Model Context Protocol)** starting in **v0.6.31**. This page shows how to enable it quickly, harden it for production, and troubleshoot common snags.
diff --git a/docs/features/pipelines/_category_.json b/docs/features/pipelines/_category_.json
index 512ed53bec..f05e70eb04 100644
--- a/docs/features/pipelines/_category_.json
+++ b/docs/features/pipelines/_category_.json
@@ -1,4 +1,4 @@
{
- "label": "⚡ Pipelines",
- "position": 900
+ "label": "Pipelines",
+ "position": 999999
}
diff --git a/docs/features/pipelines/filters.md b/docs/features/pipelines/filters.md
index bd4d51c644..05c48a7e0d 100644
--- a/docs/features/pipelines/filters.md
+++ b/docs/features/pipelines/filters.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🚰 Filters"
+title: "Filters"
---
## Filters
diff --git a/docs/features/pipelines/index.mdx b/docs/features/pipelines/index.mdx
index 4595e1830d..7e55807f8e 100644
--- a/docs/features/pipelines/index.mdx
+++ b/docs/features/pipelines/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1000
-title: "⚡ Pipelines"
+title: "Pipelines"
---
import { TopBanners } from "@site/src/components/TopBanners";
diff --git a/docs/features/pipelines/pipes.md b/docs/features/pipelines/pipes.md
index 3790980cd4..4e66e73357 100644
--- a/docs/features/pipelines/pipes.md
+++ b/docs/features/pipelines/pipes.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🔧 Pipes"
+title: "Pipes"
---
## Pipes
diff --git a/docs/features/pipelines/tutorials.md b/docs/features/pipelines/tutorials.md
index 5875116713..bbaed36d38 100644
--- a/docs/features/pipelines/tutorials.md
+++ b/docs/features/pipelines/tutorials.md
@@ -1,6 +1,6 @@
---
sidebar_position: 7
-title: "📖 Pipeline Tutorials"
+title: "Tutorials"
---
## Pipeline Tutorials
diff --git a/docs/features/pipelines/valves.md b/docs/features/pipelines/valves.md
index 2d5e78ab3f..f99aee731e 100644
--- a/docs/features/pipelines/valves.md
+++ b/docs/features/pipelines/valves.md
@@ -1,11 +1,11 @@
---
sidebar_position: 3
-title: "⚙️ Valves"
+title: "Valves"
---
## Valves
-`Valves` (see the dedicated [Valves & UserValves](/features/plugin/valves) page) can also be set for `Pipeline`. In short, `Valves` are input variables that are set per pipeline.
+`Valves` (see the dedicated [Valves & UserValves](/features/plugin/development/valves) page) can also be set for `Pipeline`. In short, `Valves` are input variables that are set per pipeline.
`Valves` are set as a subclass of the `Pipeline` class, and initialized as part of the `__init__` method of the `Pipeline` class.
diff --git a/docs/tutorials/web-search/_category_.json b/docs/features/plugin/development/_category_.json
similarity index 50%
rename from docs/tutorials/web-search/_category_.json
rename to docs/features/plugin/development/_category_.json
index e76e19c54b..404bd0f2b7 100644
--- a/docs/tutorials/web-search/_category_.json
+++ b/docs/features/plugin/development/_category_.json
@@ -1,6 +1,6 @@
{
- "label": "🌐 Web Search",
- "position": 6,
+ "label": "Development",
+ "position": 800,
"link": {
"type": "generated-index"
}
diff --git a/docs/features/plugin/events/index.mdx b/docs/features/plugin/development/events.mdx
similarity index 98%
rename from docs/features/plugin/events/index.mdx
rename to docs/features/plugin/development/events.mdx
index d0e7bc8a15..09f66ab11c 100644
--- a/docs/features/plugin/events/index.mdx
+++ b/docs/features/plugin/development/events.mdx
@@ -1,9 +1,9 @@
---
sidebar_position: 3
-title: "⛑️ Events"
+title: "Events"
---
-# ⛑️ Events: Using `__event_emitter__` and `__event_call__` in Open WebUI
+# 🔔 Events: Using `__event_emitter__` and `__event_call__` in Open WebUI
Open WebUI's plugin architecture is not just about processing input and producing output—**it's about real-time, interactive communication with the UI and users**. To make your Tools, Functions, and Pipes more dynamic, Open WebUI provides a built-in event system via the `__event_emitter__` and `__event_call__` helpers.
@@ -92,7 +92,7 @@ Below is a comprehensive table of **all supported `type` values** for events, al
| `chat:message:files`,
`files` | Set or overwrite message files (for uploads, output) | `{files: [...]}` |
| `chat:title` | Set (or update) the chat conversation title | Topic string OR `{title: ...}` |
| `chat:tags` | Update the set of tags for a chat | Tag array or object |
-| `source`,
`citation` | Add a source/citation, or code execution result | For code: See [below.](/docs/features/plugin/events/index.mdx#source-or-citation-and-code-execution) |
+| `source`,
`citation` | Add a source/citation, or code execution result | For code: See [below.](/features/plugin/development/events#source-or-citation-and-code-execution) |
| `notification` | Show a notification ("toast") in the UI | `{type: "info" or "success" or "error" or "warning", content: "..."}` |
| `confirmation`
(needs `__event_call__`) | Ask for confirmation (OK/Cancel dialog) | `{title: "...", message: "..."}` |
| `input`
(needs `__event_call__`) | Request simple user input ("input box" dialog) | `{title: "...", message: "...", placeholder: "...", value: ...}` |
diff --git a/docs/tutorials/tips/special_arguments.mdx b/docs/features/plugin/development/reserved-args.mdx
similarity index 99%
rename from docs/tutorials/tips/special_arguments.mdx
rename to docs/features/plugin/development/reserved-args.mdx
index 4508b1b9ba..1f389e7fa6 100644
--- a/docs/tutorials/tips/special_arguments.mdx
+++ b/docs/features/plugin/development/reserved-args.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 20
-title: "💡 Special Arguments"
+sidebar_position: 999
+title: "Reserved Arguments"
---
:::warning
@@ -9,7 +9,7 @@ This tutorial is a community contribution and is not supported by the Open WebUI
:::
-# 💡 Special Arguments
+# 🪄 Special Arguments
When developping your own `Tools`, `Functions` (`Filters`, `Pipes` or `Actions`), `Pipelines` etc, you can use special arguments explore the full spectrum of what Open-WebUI has to offer.
diff --git a/docs/features/plugin/valves/index.mdx b/docs/features/plugin/development/valves.mdx
similarity index 99%
rename from docs/features/plugin/valves/index.mdx
rename to docs/features/plugin/development/valves.mdx
index 22326c0ec9..b91ff07016 100644
--- a/docs/features/plugin/valves/index.mdx
+++ b/docs/features/plugin/development/valves.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🔄 Valves"
+title: "Valves"
---
## Valves
diff --git a/docs/features/plugin/functions/action.mdx b/docs/features/plugin/functions/action.mdx
index 17cbddf1f7..80577608e3 100644
--- a/docs/features/plugin/functions/action.mdx
+++ b/docs/features/plugin/functions/action.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🎬 Action Function"
+title: "Action Function"
---
Action functions allow you to write custom buttons that appear in the message toolbar for end users to interact with. This feature enables more interactive messaging, allowing users to grant permission before a task is performed, generate visualizations of structured data, download an audio snippet of chats, and many other use cases.
diff --git a/docs/features/plugin/functions/filter.mdx b/docs/features/plugin/functions/filter.mdx
index 7002d994a6..95e68e5a74 100644
--- a/docs/features/plugin/functions/filter.mdx
+++ b/docs/features/plugin/functions/filter.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🪄 Filter Function"
+title: "Filter Function"
---
# 🪄 Filter Function: Modify Inputs and Outputs
diff --git a/docs/features/plugin/functions/index.mdx b/docs/features/plugin/functions/index.mdx
index 70cc2cda30..558bcfe7d9 100644
--- a/docs/features/plugin/functions/index.mdx
+++ b/docs/features/plugin/functions/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🧰 Functions"
+title: "Functions"
---
## 🚀 What Are Functions?
diff --git a/docs/features/plugin/functions/pipe.mdx b/docs/features/plugin/functions/pipe.mdx
index 01361cc783..1919745ccc 100644
--- a/docs/features/plugin/functions/pipe.mdx
+++ b/docs/features/plugin/functions/pipe.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🚰 Pipe Function"
+title: "Pipe Function"
---
# 🚰 Pipe Function: Create Custom "Agents/Models"
diff --git a/docs/features/plugin/index.mdx b/docs/features/plugin/index.mdx
index 71ee65daec..f8e6dd4341 100644
--- a/docs/features/plugin/index.mdx
+++ b/docs/features/plugin/index.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 0
-title: "🛠️ Tools & Functions (Plugins)"
+sidebar_position: 300
+title: "Tools & Functions (Plugins)"
---
# 🛠️ Tools & Functions
diff --git a/docs/features/plugin/migration/index.mdx b/docs/features/plugin/migration/index.mdx
index c88d49638a..8602a54fdf 100644
--- a/docs/features/plugin/migration/index.mdx
+++ b/docs/features/plugin/migration/index.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 4
-title: "🚚 Migrating Tools & Functions: 0.4 to 0.5"
+sidebar_position: 9999
+title: "Migrating Tools & Functions: 0.4 to 0.5"
---
# 🚚 Migration Guide: Open WebUI 0.4 to 0.5
diff --git a/docs/features/plugin/tools/development.mdx b/docs/features/plugin/tools/development.mdx
index 92f98d7585..5bfe2a2ad0 100644
--- a/docs/features/plugin/tools/development.mdx
+++ b/docs/features/plugin/tools/development.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🛠️ Development"
+title: "Development"
---
## Writing A Custom Toolkit
@@ -52,7 +52,7 @@ Each tool must have type hints for arguments. The types may also be nested, such
### Valves and UserValves - (optional, but HIGHLY encouraged)
-Valves and UserValves are used for specifying customizable settings of the Tool, you can read more on the dedicated [Valves & UserValves page](/features/plugin/valves/index.mdx).
+Valves and UserValves are used for specifying customizable settings of the Tool, you can read more on the dedicated [Valves & UserValves page](/features/plugin/development/valves).
### Optional Arguments
Below is a list of optional arguments your tools can depend on:
diff --git a/docs/features/plugin/tools/index.mdx b/docs/features/plugin/tools/index.mdx
index 1bebe40909..a7db57ebe9 100644
--- a/docs/features/plugin/tools/index.mdx
+++ b/docs/features/plugin/tools/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "⚙️ Tools"
+title: "Tools"
---
# ⚙️ What are Tools?
diff --git a/docs/openapi-servers/faq.mdx b/docs/features/plugin/tools/openapi-servers/faq.mdx
similarity index 99%
rename from docs/openapi-servers/faq.mdx
rename to docs/features/plugin/tools/openapi-servers/faq.mdx
index b34ef65f64..85e77a9f50 100644
--- a/docs/openapi-servers/faq.mdx
+++ b/docs/features/plugin/tools/openapi-servers/faq.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 10
-title: "❓ FAQ"
+title: "FAQ"
---
#### 🌐 Q: Why isn't my local OpenAPI tool server accessible from the WebUI interface?
diff --git a/docs/openapi-servers/index.mdx b/docs/features/plugin/tools/openapi-servers/index.mdx
similarity index 98%
rename from docs/openapi-servers/index.mdx
rename to docs/features/plugin/tools/openapi-servers/index.mdx
index 4841404aaf..fbe3aee8c4 100644
--- a/docs/openapi-servers/index.mdx
+++ b/docs/features/plugin/tools/openapi-servers/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 400
-title: "🔨 OpenAPI Tool Servers"
+title: "OpenAPI Tool Servers"
---
import { TopBanners } from "@site/src/components/TopBanners";
diff --git a/docs/openapi-servers/mcp.mdx b/docs/features/plugin/tools/openapi-servers/mcp.mdx
similarity index 99%
rename from docs/openapi-servers/mcp.mdx
rename to docs/features/plugin/tools/openapi-servers/mcp.mdx
index e7bd1f58c7..20dbea48a1 100644
--- a/docs/openapi-servers/mcp.mdx
+++ b/docs/features/plugin/tools/openapi-servers/mcp.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🛰️ MCP Support"
+title: "MCP Support"
---
This documentation explains how to easily set up and deploy the [**MCP (Model Context Protocol)-to-OpenAPI proxy server** (mcpo)](https://github.com/open-webui/mcpo) provided by Open WebUI. Learn how you can effortlessly expose MCP-based tool servers using standard, familiar OpenAPI endpoints suitable for end-users and developers.
diff --git a/docs/openapi-servers/open-webui.mdx b/docs/features/plugin/tools/openapi-servers/open-webui.mdx
similarity index 99%
rename from docs/openapi-servers/open-webui.mdx
rename to docs/features/plugin/tools/openapi-servers/open-webui.mdx
index c6f8f24d63..cc9861c18d 100644
--- a/docs/openapi-servers/open-webui.mdx
+++ b/docs/features/plugin/tools/openapi-servers/open-webui.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🔗 Open WebUI Integration"
+title: "Open WebUI Integration"
---
## Overview
diff --git a/docs/features/rag/document-extraction/apachetika.md b/docs/features/rag/document-extraction/apachetika.md
index e67e69790d..af00ab6bab 100644
--- a/docs/features/rag/document-extraction/apachetika.md
+++ b/docs/features/rag/document-extraction/apachetika.md
@@ -1,6 +1,6 @@
---
sidebar_position: 4000
-title: "🪶 Apache Tika Extraction"
+title: "Apache Tika Extraction"
---
:::warning
diff --git a/docs/features/rag/document-extraction/docling.md b/docs/features/rag/document-extraction/docling.md
index 2be419feae..7184db371b 100644
--- a/docs/features/rag/document-extraction/docling.md
+++ b/docs/features/rag/document-extraction/docling.md
@@ -1,6 +1,6 @@
---
sidebar_position: 4000
-title: "🐤 Docling Document Extraction"
+title: "Docling Document Extraction"
---
:::warning
diff --git a/docs/features/rag/document-extraction/index.md b/docs/features/rag/document-extraction/index.md
index 7b83c48836..d9f3473d63 100644
--- a/docs/features/rag/document-extraction/index.md
+++ b/docs/features/rag/document-extraction/index.md
@@ -1,6 +1,6 @@
---
sidebar_position: 6
-title: "📄 Document Extraction"
+title: "Document Extraction"
---
## Document Extraction in Open WebUI
diff --git a/docs/features/rag/document-extraction/mistral-ocr.md b/docs/features/rag/document-extraction/mistral-ocr.md
index 188f9e7b2a..e5b6643964 100644
--- a/docs/features/rag/document-extraction/mistral-ocr.md
+++ b/docs/features/rag/document-extraction/mistral-ocr.md
@@ -1,6 +1,6 @@
---
sidebar_position: 4000
-title: "👁️ Mistral OCR"
+title: "Mistral OCR"
---
:::warning
diff --git a/docs/features/rag/index.md b/docs/features/rag/index.md
index 703461be59..9895c9def7 100644
--- a/docs/features/rag/index.md
+++ b/docs/features/rag/index.md
@@ -1,17 +1,12 @@
---
-sidebar_position: 1
-title: "🔎 Retrieval Augmented Generation (RAG)"
+sidebar_position: 200
+title: "Retrieval Augmented Generation (RAG)"
---
:::warning
If you're using **Ollama**, note that it **defaults to a 2048-token context length**. This severely limits **Retrieval-Augmented Generation (RAG) performance**, especially for web search, because retrieved data may **not be used at all** or only partially processed.
-**Why This Is Critical for Web Search:**
-Web pages typically contain 4,000-8,000+ tokens even after content extraction, including main content, navigation elements, headers, footers, and metadata. With only 2048 tokens available, you're getting less than half the page content, often missing the most relevant information. Even 4096 tokens is frequently insufficient for comprehensive web content analysis.
-
-**To Fix This:** Navigate to **Admin Panel > Models > Settings** (of your Ollama model) > **Advanced Parameters** and **increase the context length to 8192+ (or rather, more than 16000) tokens**. This setting specifically applies to Ollama models. For OpenAI and other integrated models, ensure you're using a model with sufficient built-in context length (e.g., GPT-4 Turbo with 128k tokens).
-
:::
Retrieval Augmented Generation (RAG) is a cutting-edge technology that enhances the conversational capabilities of chatbots by incorporating context from diverse sources. It works by retrieving relevant information from a wide range of sources such as local and remote documents, web content, and even multimedia sources like YouTube videos. The retrieved text is then combined with a predefined RAG template and prefixed to the user's prompt, providing a more informed and contextually relevant response.
@@ -26,6 +21,12 @@ You can also load documents into the workspace area with their access by startin
## Web Search for RAG
+:::warning
+**Context Length Warning for Ollama Users:** Web pages typically contain 4,000-8,000+ tokens even after content extraction, including main content, navigation elements, headers, footers, and metadata. With only 2048 tokens available, you're getting less than half the page content, often missing the most relevant information. Even 4096 tokens is frequently insufficient for comprehensive web content analysis.
+
+**To Fix This:** Navigate to **Admin Panel > Models > Settings** (of your Ollama model) > **Advanced Parameters** and **increase the context length to at least 8192 tokens (ideally more than 16,000)**. This setting specifically applies to Ollama models. For OpenAI and other integrated models, ensure you're using a model with sufficient built-in context length (e.g., GPT-4 Turbo with 128k tokens).
+:::
+
For web content integration, start a query in a chat with `#`, followed by the target URL. Click on the formatted URL in the box that appears above the chat box. Once selected, a document icon appears above `Send a message`, indicating successful retrieval. Open WebUI fetches and parses information from the URL if it can.
:::tip
diff --git a/docs/features/rbac/groups.md b/docs/features/rbac/groups.md
index 111b14a63d..ffbbcc2c4a 100644
--- a/docs/features/rbac/groups.md
+++ b/docs/features/rbac/groups.md
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🔐 Groups"
+title: "Groups"
---
Groups allow administrators to
diff --git a/docs/features/rbac/index.mdx b/docs/features/rbac/index.mdx
index 0829ea4c21..656d018bf7 100644
--- a/docs/features/rbac/index.mdx
+++ b/docs/features/rbac/index.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 1
-title: "🪪 Role-Based Access Control (RBAC)"
+sidebar_position: 100
+title: "Role-Based Access Control (RBAC)"
---
## Access Control
diff --git a/docs/features/rbac/permissions.md b/docs/features/rbac/permissions.md
index 39c426b6a9..03acdbe10a 100644
--- a/docs/features/rbac/permissions.md
+++ b/docs/features/rbac/permissions.md
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🔒 Permissions"
+title: "Permissions"
---
The `Permissions` section of the `Workspace` within Open WebUI allows administrators to configure access controls and feature availability for users. This powerful system enables fine-grained control over what users can access and modify within the application.
diff --git a/docs/features/rbac/roles.md b/docs/features/rbac/roles.md
index 47ae67185f..25b2e29adc 100644
--- a/docs/features/rbac/roles.md
+++ b/docs/features/rbac/roles.md
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🔑 Roles"
+title: "Roles"
---
Open WebUI implements a structured role-based access control system with three primary user roles:
diff --git a/docs/features/web-search/_category_.json b/docs/features/web-search/_category_.json
new file mode 100644
index 0000000000..aea5c0ddde
--- /dev/null
+++ b/docs/features/web-search/_category_.json
@@ -0,0 +1,7 @@
+{
+ "label": "Web Search",
+ "position": 600,
+ "link": {
+ "type": "generated-index"
+ }
+}
diff --git a/docs/tutorials/web-search/bing.md b/docs/features/web-search/bing.md
similarity index 100%
rename from docs/tutorials/web-search/bing.md
rename to docs/features/web-search/bing.md
diff --git a/docs/tutorials/web-search/brave.md b/docs/features/web-search/brave.md
similarity index 100%
rename from docs/tutorials/web-search/brave.md
rename to docs/features/web-search/brave.md
diff --git a/docs/tutorials/web-search/ddgs.mdx b/docs/features/web-search/ddgs.mdx
similarity index 100%
rename from docs/tutorials/web-search/ddgs.mdx
rename to docs/features/web-search/ddgs.mdx
diff --git a/docs/tutorials/web-search/exa.md b/docs/features/web-search/exa.md
similarity index 100%
rename from docs/tutorials/web-search/exa.md
rename to docs/features/web-search/exa.md
diff --git a/docs/tutorials/web-search/external.md b/docs/features/web-search/external.md
similarity index 100%
rename from docs/tutorials/web-search/external.md
rename to docs/features/web-search/external.md
diff --git a/docs/tutorials/web-search/google-pse.md b/docs/features/web-search/google-pse.md
similarity index 100%
rename from docs/tutorials/web-search/google-pse.md
rename to docs/features/web-search/google-pse.md
diff --git a/docs/tutorials/web-search/jina.md b/docs/features/web-search/jina.md
similarity index 100%
rename from docs/tutorials/web-search/jina.md
rename to docs/features/web-search/jina.md
diff --git a/docs/tutorials/web-search/kagi.md b/docs/features/web-search/kagi.md
similarity index 100%
rename from docs/tutorials/web-search/kagi.md
rename to docs/features/web-search/kagi.md
diff --git a/docs/tutorials/web-search/mojeek.md b/docs/features/web-search/mojeek.md
similarity index 100%
rename from docs/tutorials/web-search/mojeek.md
rename to docs/features/web-search/mojeek.md
diff --git a/docs/tutorials/web-search/ollama-cloud.mdx b/docs/features/web-search/ollama-cloud.mdx
similarity index 100%
rename from docs/tutorials/web-search/ollama-cloud.mdx
rename to docs/features/web-search/ollama-cloud.mdx
diff --git a/docs/tutorials/web-search/perplexity.mdx b/docs/features/web-search/perplexity.mdx
similarity index 100%
rename from docs/tutorials/web-search/perplexity.mdx
rename to docs/features/web-search/perplexity.mdx
diff --git a/docs/tutorials/web-search/perplexity_search.mdx b/docs/features/web-search/perplexity_search.mdx
similarity index 100%
rename from docs/tutorials/web-search/perplexity_search.mdx
rename to docs/features/web-search/perplexity_search.mdx
diff --git a/docs/tutorials/web-search/searchapi.md b/docs/features/web-search/searchapi.md
similarity index 100%
rename from docs/tutorials/web-search/searchapi.md
rename to docs/features/web-search/searchapi.md
diff --git a/docs/tutorials/web-search/searxng.md b/docs/features/web-search/searxng.md
similarity index 100%
rename from docs/tutorials/web-search/searxng.md
rename to docs/features/web-search/searxng.md
diff --git a/docs/tutorials/web-search/serpapi.md b/docs/features/web-search/serpapi.md
similarity index 100%
rename from docs/tutorials/web-search/serpapi.md
rename to docs/features/web-search/serpapi.md
diff --git a/docs/tutorials/web-search/serper.md b/docs/features/web-search/serper.md
similarity index 100%
rename from docs/tutorials/web-search/serper.md
rename to docs/features/web-search/serper.md
diff --git a/docs/tutorials/web-search/serply.md b/docs/features/web-search/serply.md
similarity index 100%
rename from docs/tutorials/web-search/serply.md
rename to docs/features/web-search/serply.md
diff --git a/docs/tutorials/web-search/serpstack.md b/docs/features/web-search/serpstack.md
similarity index 100%
rename from docs/tutorials/web-search/serpstack.md
rename to docs/features/web-search/serpstack.md
diff --git a/docs/tutorials/web-search/tavily.md b/docs/features/web-search/tavily.md
similarity index 100%
rename from docs/tutorials/web-search/tavily.md
rename to docs/features/web-search/tavily.md
diff --git a/docs/tutorials/web-search/yacy.md b/docs/features/web-search/yacy.md
similarity index 100%
rename from docs/tutorials/web-search/yacy.md
rename to docs/features/web-search/yacy.md
diff --git a/docs/features/workspace/index.mdx b/docs/features/workspace/index.mdx
index 8a4db2b6e5..c6ded82273 100644
--- a/docs/features/workspace/index.mdx
+++ b/docs/features/workspace/index.mdx
@@ -1,6 +1,6 @@
---
-sidebar_position: 0
-title: "🖥️ Workspace"
+sidebar_position: 700
+title: "Workspace"
---
The Workspace in Open WebUI provides a comprehensive environment for managing your AI interactions and configurations. It consists of several key components:
diff --git a/docs/features/workspace/knowledge.md b/docs/features/workspace/knowledge.md
index 40cd51c980..9de1e3a857 100644
--- a/docs/features/workspace/knowledge.md
+++ b/docs/features/workspace/knowledge.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🧠 Knowledge"
+title: "Knowledge"
---
Knowledge part of Open WebUI is like a memory bank that makes your interactions even more powerful and context-aware. Let's break down what "Knowledge" really means in Open WebUI, how it works, and why it’s incredibly helpful for enhancing your experience.
diff --git a/docs/features/workspace/models.md b/docs/features/workspace/models.md
index 6f1804ca1b..f14d49e4d3 100644
--- a/docs/features/workspace/models.md
+++ b/docs/features/workspace/models.md
@@ -1,6 +1,6 @@
---
sidebar_position: 0
-title: "🤖 Models"
+title: "Models"
---
The `Models` section of the `Workspace` within Open WebUI is a powerful tool that allows you to create and manage custom models tailored to specific purposes. This section serves as a central hub for all your modelfiles, providing a range of features to edit, clone, share, export, and hide your models.
diff --git a/docs/features/workspace/prompts.md b/docs/features/workspace/prompts.md
index ffdd2cd254..99f8549180 100644
--- a/docs/features/workspace/prompts.md
+++ b/docs/features/workspace/prompts.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "📚 Prompts"
+title: "Prompts"
---
The `Prompts` section of the `Workspace` within Open WebUI enables users to create, manage, and share custom prompts. This feature streamlines interactions with AI models by allowing users to save frequently used prompts and easily access them through slash commands.
diff --git a/docs/getting-started/advanced-topics/development.md b/docs/getting-started/advanced-topics/development.md
index ced43eda82..158e3b5ed4 100644
--- a/docs/getting-started/advanced-topics/development.md
+++ b/docs/getting-started/advanced-topics/development.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🛠️ Local Development Guide"
+title: "Local Development Guide"
---
# Ready to Contribute to Open WebUI? Let's Get Started! 🚀
diff --git a/docs/getting-started/advanced-topics/https-encryption.md b/docs/getting-started/advanced-topics/https-encryption.md
index 1e2f1920ea..8931174187 100644
--- a/docs/getting-started/advanced-topics/https-encryption.md
+++ b/docs/getting-started/advanced-topics/https-encryption.md
@@ -1,6 +1,6 @@
---
sidebar_position: 6
-title: "🔒 Enabling HTTPS Encryption"
+title: "Enabling HTTPS Encryption"
---
# Secure Your Open WebUI with HTTPS 🔒
diff --git a/docs/getting-started/advanced-topics/index.mdx b/docs/getting-started/advanced-topics/index.mdx
index 6d68dbd537..99164250ea 100644
--- a/docs/getting-started/advanced-topics/index.mdx
+++ b/docs/getting-started/advanced-topics/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 4
-title: "📚 Advanced Topics"
+title: "Advanced Topics"
---
# 📚 Advanced Topics
diff --git a/docs/getting-started/advanced-topics/logging.md b/docs/getting-started/advanced-topics/logging.md
index 0c33977a00..f85c43686f 100644
--- a/docs/getting-started/advanced-topics/logging.md
+++ b/docs/getting-started/advanced-topics/logging.md
@@ -1,6 +1,6 @@
---
sidebar_position: 5
-title: "📜 Logging in Open WebUI"
+title: "Logging in Open WebUI"
---
# Understanding Open WebUI Logging 🪵
diff --git a/docs/getting-started/advanced-topics/monitoring/index.md b/docs/getting-started/advanced-topics/monitoring/index.md
index 4cc2c3eb58..45956b03cf 100644
--- a/docs/getting-started/advanced-topics/monitoring/index.md
+++ b/docs/getting-started/advanced-topics/monitoring/index.md
@@ -1,6 +1,6 @@
---
sidebar_position: 6
-title: "📊 Monitoring Your Open WebUI"
+title: "Monitoring Your Open WebUI"
---
# Keep Your Open WebUI Healthy with Monitoring 🩺
diff --git a/docs/getting-started/advanced-topics/monitoring/otel.md b/docs/getting-started/advanced-topics/monitoring/otel.md
index 8d523bb5b2..ece076aedb 100644
--- a/docs/getting-started/advanced-topics/monitoring/otel.md
+++ b/docs/getting-started/advanced-topics/monitoring/otel.md
@@ -1,6 +1,6 @@
---
sidebar_position: 7
-title: "🔭 OpenTelemetry"
+title: "OpenTelemetry"
---
Open WebUI supports **distributed tracing and metrics** export via the OpenTelemetry (OTel) protocol (OTLP). This enables integration with modern observability stacks such as **Grafana LGTM (Loki, Grafana, Tempo, Mimir)**, as well as **Jaeger**, **Tempo**, and **Prometheus** to monitor requests, database/Redis queries, response times, and more in real-time.
diff --git a/docs/getting-started/advanced-topics/network-diagrams.mdx b/docs/getting-started/advanced-topics/network-diagrams.mdx
index fb8472e6e3..d09a30f37b 100644
--- a/docs/getting-started/advanced-topics/network-diagrams.mdx
+++ b/docs/getting-started/advanced-topics/network-diagrams.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🕸️ Network Diagrams"
+title: "Network Diagrams"
---
Here, we provide clear and structured diagrams to help you understand how various components of the network interact within different setups. This documentation is designed to assist both macOS/Windows and Linux users. Each scenario is illustrated using Mermaid diagrams to show how the interactions are set up depending on the different system configurations and deployment strategies.
diff --git a/docs/getting-started/api-endpoints.md b/docs/getting-started/api-endpoints.md
index 400e285130..5a56a3a98c 100644
--- a/docs/getting-started/api-endpoints.md
+++ b/docs/getting-started/api-endpoints.md
@@ -1,6 +1,6 @@
---
sidebar_position: 400
-title: "🔗 API Endpoints"
+title: "API Endpoints"
---
This guide provides essential information on how to interact with the API endpoints effectively to achieve seamless integration and automation using our models. Please note that this is an experimental setup and may undergo future updates for enhancement.
diff --git a/docs/getting-started/env-configuration.mdx b/docs/getting-started/env-configuration.mdx
index bdcd9a0e41..5769164a85 100644
--- a/docs/getting-started/env-configuration.mdx
+++ b/docs/getting-started/env-configuration.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 4
-title: "🌍 Environment Variable Configuration"
+title: "Environment Variable Configuration"
---
## Overview
@@ -1101,7 +1101,7 @@ If `OFFLINE_MODE` is enabled, this `ENABLE_VERSION_UPDATE_CHECK` flag is always
- OAuth authentication providers
- Web search and RAG with external APIs
-Read more about `offline mode` in the [offline mode guide](/docs/tutorials/offline-mode.md).
+Read more about `offline mode` in the [offline mode guide](/tutorials/offline-mode).
:::
@@ -1121,7 +1121,26 @@ Read more about `offline mode` in the [offline mode guide](/docs/tutorials/offli
- Type: `str`
- Default: `*`
-- Description: Sets the allowed origins for Cross-Origin Resource Sharing (CORS).
+- Description: Sets the allowed origins for Cross-Origin Resource Sharing (CORS) as a semicolon-separated (`;`) list of origins.
+
+:::warning
+
+**This variable must be set explicitly**; otherwise you may experience WebSocket issues, empty "\{\}" responses, or errors like "Unexpected token 'd', "data: \{"id"... is not valid JSON".
+
+:::
+
+:::info
+
+If you experience WebSocket issues, check the Open WebUI logs.
+If you see lines such as `engineio.base_server:_log_error_once:354 - https://yourdomain.com is not an accepted origin.`, you need to configure `CORS_ALLOW_ORIGIN` more broadly.
+
+Example:
+CORS_ALLOW_ORIGIN: "https://yourdomain.com;http://yourdomain.com;https://yourhostname;http://youripaddress;http://localhost:3000"
+
+Add every IP address, domain, and hostname that might be used to reach your Open WebUI instance to the variable.
+Once you have, the WebSocket issues and console warnings should no longer occur.
+
+:::
#### `CORS_ALLOW_CUSTOM_SCHEME`
@@ -1303,23 +1322,26 @@ If you want to use Milvus, be careful when upgrading Open WebUI (crate backups a
:::
-#### `MILVUS_URI`
+#### `MILVUS_URI` **(Required)**
- Type: `str`
- Default: `${DATA_DIR}/vector_db/milvus.db`
+- Example (Remote): `http://your-server-ip:19530`
- Description: Specifies the URI for connecting to the Milvus vector database. This can point to a local or remote Milvus server based on the deployment configuration.
#### `MILVUS_DB`
- Type: `str`
- Default: `default`
+- Example: `default`
- Description: Specifies the database to connect to within a Milvus instance.
-#### `MILVUS_TOKEN`
+#### `MILVUS_TOKEN` **(Required for remote connections with authentication)**
- Type: `str`
- Default: `None`
-- Description: Specifies an optional connection token for Milvus.
+- Example: `root:password` (format: `username:password`)
+- Description: Specifies an optional connection token for Milvus. Required when connecting to a remote Milvus server with authentication enabled. Format is `username:password`.
#### `MILVUS_INDEX_TYPE`
@@ -2026,7 +2048,7 @@ When configuring `RAG_FILE_MAX_SIZE` and `RAG_FILE_MAX_COUNT`, ensure that the v
- Type: `int`
- Default: `1`
-- Description: Sets the batch size for embedding in RAG (Retrieval-Augmented Generator) models.
+- Description: Controls how many text chunks are embedded in a single API request when using external embedding providers (Ollama, OpenAI, or Azure OpenAI). Higher values (20-100+; max 16000) process documents faster by batching more chunks into each request, but may exceed API rate or payload limits, while lower values (1-10) are more stable but slower. The default of 1 is the safest option if you are rate-limit constrained, but also the slowest. This setting only applies to external embedding engines, not the default SentenceTransformers engine.
- Persistence: This environment variable is a `PersistentConfig` variable.
#### `RAG_EMBEDDING_CONTENT_PREFIX`
diff --git a/docs/getting-started/index.md b/docs/getting-started/index.md
index a1e0e490e9..2b1c4b1a25 100644
--- a/docs/getting-started/index.md
+++ b/docs/getting-started/index.md
@@ -1,5 +1,5 @@
---
-sidebar_position: 200
+sidebar_position: 100
title: "🚀 Getting Started"
---
diff --git a/docs/getting-started/quick-start/index.mdx b/docs/getting-started/quick-start/index.mdx
index 4a6eb8f183..1cf8dbe40e 100644
--- a/docs/getting-started/quick-start/index.mdx
+++ b/docs/getting-started/quick-start/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "⏱️ Quick Start"
+title: "Quick Start"
---
import Tabs from '@theme/Tabs';
diff --git a/docs/getting-started/quick-start/starting-with-functions.mdx b/docs/getting-started/quick-start/starting-with-functions.mdx
index 2e6d902005..63c44447b1 100644
--- a/docs/getting-started/quick-start/starting-with-functions.mdx
+++ b/docs/getting-started/quick-start/starting-with-functions.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 5
-title: "🔌 Getting Started with Functions"
+title: "Getting Started with Functions"
---
## Overview
diff --git a/docs/getting-started/quick-start/starting-with-llama-cpp.mdx b/docs/getting-started/quick-start/starting-with-llama-cpp.mdx
index 4051c57190..8e19312c71 100644
--- a/docs/getting-started/quick-start/starting-with-llama-cpp.mdx
+++ b/docs/getting-started/quick-start/starting-with-llama-cpp.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🦙Starting with Llama.cpp"
+title: "Starting with Llama.cpp"
---
## Overview
diff --git a/docs/getting-started/quick-start/starting-with-ollama.mdx b/docs/getting-started/quick-start/starting-with-ollama.mdx
index fa2a29ec4d..1f2ff78f59 100644
--- a/docs/getting-started/quick-start/starting-with-ollama.mdx
+++ b/docs/getting-started/quick-start/starting-with-ollama.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "👉 Starting With Ollama"
+title: "Starting With Ollama"
---
## Overview
diff --git a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
index 01cd5befce..c255bcb45d 100644
--- a/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
+++ b/docs/getting-started/quick-start/starting-with-openai-compatible.mdx
@@ -1,7 +1,7 @@
---
sidebar_position: 4
-title: "🌐 Starting with OpenAI-Compatible Servers"
+title: "Starting with OpenAI-Compatible Servers"
---
diff --git a/docs/getting-started/quick-start/starting-with-openai.mdx b/docs/getting-started/quick-start/starting-with-openai.mdx
index 1d12f22a5c..b07c301745 100644
--- a/docs/getting-started/quick-start/starting-with-openai.mdx
+++ b/docs/getting-started/quick-start/starting-with-openai.mdx
@@ -1,7 +1,7 @@
---
sidebar_position: 2
-title: "🤖 Starting With OpenAI"
+title: "Starting With OpenAI"
---
diff --git a/docs/getting-started/quick-start/tab-docker/DockerCompose.md b/docs/getting-started/quick-start/tab-docker/DockerCompose.md
index 466461b24f..da1dbdc250 100644
--- a/docs/getting-started/quick-start/tab-docker/DockerCompose.md
+++ b/docs/getting-started/quick-start/tab-docker/DockerCompose.md
@@ -2,8 +2,6 @@
Using Docker Compose simplifies the management of multi-container Docker applications.
-If you don't have Docker installed, check out our [Docker installation tutorial](docs/tutorials/docker-install.md).
-
Docker Compose requires an additional package, `docker-compose-v2`.
:::warning
diff --git a/docs/getting-started/updating.mdx b/docs/getting-started/updating.mdx
index c813ca3e27..ed070c4593 100644
--- a/docs/getting-started/updating.mdx
+++ b/docs/getting-started/updating.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 300
-title: "🔄 Updating Open WebUI"
+title: "Updating Open WebUI"
---
## Why isn't my Open WebUI updating?
diff --git a/docs/security.mdx b/docs/security.mdx
index e7b09fd08e..bb444d5937 100644
--- a/docs/security.mdx
+++ b/docs/security.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1500
-title: "🔒 Security Policy"
+title: "🛡️ Security Policy"
---
import { TopBanners } from "@site/src/components/TopBanners";
diff --git a/docs/sponsorships.mdx b/docs/sponsorships.mdx
index 1ed453e298..ebd832f49c 100644
--- a/docs/sponsorships.mdx
+++ b/docs/sponsorships.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1800
-title: "🌐 Sponsorships"
+title: "💖 Sponsorships"
---
import { TopBanners } from "@site/src/components/TopBanners";
diff --git a/docs/troubleshooting/compatibility.mdx b/docs/troubleshooting/compatibility.mdx
index 9484299ea5..f95da1f0cb 100644
--- a/docs/troubleshooting/compatibility.mdx
+++ b/docs/troubleshooting/compatibility.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 0
-title: "🌐 Browser Compatibility"
+title: "Browser Compatibility"
---
Open WebUI is designed for and tested on modern browsers. To ensure the best experience, we recommend using the following browser versions or later:
diff --git a/docs/troubleshooting/connection-error.mdx b/docs/troubleshooting/connection-error.mdx
index b171160626..cc8f0f06c8 100644
--- a/docs/troubleshooting/connection-error.mdx
+++ b/docs/troubleshooting/connection-error.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 0
-title: "🚧 Server Connectivity Issues"
+title: "Server Connectivity Issues"
---
We're here to help you get everything set up and running smoothly. Below, you'll find step-by-step instructions tailored for different scenarios to solve common connection issues with Ollama and external servers like Hugging Face.
diff --git a/docs/troubleshooting/index.mdx b/docs/troubleshooting/index.mdx
index f9b973f7ed..0e0f218d30 100644
--- a/docs/troubleshooting/index.mdx
+++ b/docs/troubleshooting/index.mdx
@@ -1,5 +1,5 @@
---
-sidebar_position: 600
+sidebar_position: 300
title: "🛠️ Troubleshooting"
---
import { TopBanners } from "@site/src/components/TopBanners";
diff --git a/docs/troubleshooting/microphone-error.mdx b/docs/troubleshooting/microphone-error.mdx
index 3641d231a3..59446d57f5 100644
--- a/docs/troubleshooting/microphone-error.mdx
+++ b/docs/troubleshooting/microphone-error.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🎙️ Troubleshooting Microphone Access"
+title: "Troubleshooting Microphone Access"
---
Ensuring your application has the proper microphone access is crucial for functionality that depends on audio input. This guide covers how to manage and troubleshoot microphone permissions, particularly under secure contexts.
diff --git a/docs/troubleshooting/password-reset.mdx b/docs/troubleshooting/password-reset.mdx
index 0b635d79a4..e9b419bc3d 100644
--- a/docs/troubleshooting/password-reset.mdx
+++ b/docs/troubleshooting/password-reset.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🔑 Reset Admin Password"
+title: "Reset Admin Password"
---
# Resetting Your Admin Password 🗝️
diff --git a/docs/troubleshooting/rag.mdx b/docs/troubleshooting/rag.mdx
index 13db09f0b3..09d17888e0 100644
--- a/docs/troubleshooting/rag.mdx
+++ b/docs/troubleshooting/rag.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🧠 Troubleshooting RAG (Retrieval-Augmented Generation)"
+title: "Troubleshooting RAG (Retrieval-Augmented Generation)"
---
Retrieval-Augmented Generation (RAG) enables language models to reason over external content—documents, knowledge bases, and more—by retrieving relevant info and feeding it into the model. But when things don't work as expected (e.g., the model "hallucinates" or misses relevant info), it's often not the model's fault—it's a context issue.
diff --git a/docs/troubleshooting/sso.mdx b/docs/troubleshooting/sso.mdx
index 1554497290..c100790753 100644
--- a/docs/troubleshooting/sso.mdx
+++ b/docs/troubleshooting/sso.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 4
-title: "🔐 Troubleshooting OAUTH / SSO Issues"
+title: "Troubleshooting OAUTH / SSO Issues"
---
OAUTH or Single Sign-On (SSO) lets you secure Open WebUI with modern authentication, but when users encounter login problems, the solution is often simple—if you know where to look. Most of the time, one of these key issues below is the culprit. Here's how to hunt them down and fix SSO headaches fast! 🚦
diff --git a/docs/tutorials/_category_.json b/docs/tutorials/_category_.json
index c881f1d4e7..c0b3c0e9f3 100644
--- a/docs/tutorials/_category_.json
+++ b/docs/tutorials/_category_.json
@@ -1,5 +1,5 @@
{
- "label": "📝 Tutorials",
+ "label": "🎓 Tutorials",
"position": 800,
"link": {
"type": "generated-index"
diff --git a/docs/tutorials/deployment/index.mdx b/docs/tutorials/deployment/index.mdx
index 7f767bcdc7..147ac7d152 100644
--- a/docs/tutorials/deployment/index.mdx
+++ b/docs/tutorials/deployment/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1000
-title: "☁️ Deployment"
+title: "Deployment"
---
import { TopBanners } from "@site/src/components/TopBanners";
diff --git a/docs/tutorials/docker-install.md b/docs/tutorials/docker-install.md
deleted file mode 100644
index bbdba19e94..0000000000
--- a/docs/tutorials/docker-install.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-sidebar_position: 4
-title: 🐳 Installing Docker
----
-
-:::info
-
-**Looking to install Open WebUI?**
-
-This page covers Docker installation only. For **Open WebUI installation instructions via Docker**, please visit our [Quick Start Guide](https://docs.openwebui.com/getting-started/quick-start/) which provides comprehensive setup instructions.
-
-:::
-
-:::warning
-
-This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
-
-:::
-
-## Installing Docker
-
-## For Windows and Mac Users
-
-- Download Docker Desktop from [Docker's official website](https://www.docker.com/products/docker-desktop).
-- Follow the installation instructions on the website.
-- After installation, **open Docker Desktop** to ensure it's running properly.
-
----
-
-## For Ubuntu Users
-
-1. **Open your terminal.**
-
-2. **Set up Docker's apt repository:**
-
- ```bash
- sudo apt-get update
- sudo apt-get install ca-certificates curl
- sudo install -m 0755 -d /etc/apt/keyrings
- sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
- sudo chmod a+r /etc/apt/keyrings/docker.asc
- echo \
- "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
- $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
- sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
- ```
-
-:::note
-
-If using an **Ubuntu derivative** (e.g., Linux Mint), use `UBUNTU_CODENAME` instead of `VERSION_CODENAME`.
-
-:::
-
-3. **Install Docker Engine:**
-
- ```bash
- sudo apt-get update
- sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
- ```
-
-4. **Verify Docker Installation:**
-
- ```bash
- sudo docker run hello-world
- ```
-
----
-
-## For Other Linux Distributions
-
-For other Linux distributions, refer to the [official Docker documentation](https://docs.docker.com/engine/install/).
-
----
-
-## Install and Verify Ollama
-
-1. **Download Ollama** from [https://ollama.com/](https://ollama.com/).
-
-2. **Verify Ollama Installation:**
- - Open a browser and navigate to:
- [http://127.0.0.1:11434/](http://127.0.0.1:11434/).
- - Note: The port may vary based on your installation.
diff --git a/docs/tutorials/https/_category_.json b/docs/tutorials/https/_category_.json
index 7a03c9ef70..254b95d438 100644
--- a/docs/tutorials/https/_category_.json
+++ b/docs/tutorials/https/_category_.json
@@ -1,5 +1,5 @@
{
- "label": "🔒 HTTPS",
+ "label": "HTTPS",
"position": 200,
"link": {
"type": "generated-index"
diff --git a/docs/tutorials/https/caddy.md b/docs/tutorials/https/caddy.md
index 80044bed6d..b1431a15c5 100644
--- a/docs/tutorials/https/caddy.md
+++ b/docs/tutorials/https/caddy.md
@@ -1,6 +1,6 @@
---
sidebar_position: 202
-title: "🔒 HTTPS using Caddy"
+title: "HTTPS using Caddy"
---
:::warning
diff --git a/docs/tutorials/https/haproxy.md b/docs/tutorials/https/haproxy.md
index c18a546531..8463374d59 100644
--- a/docs/tutorials/https/haproxy.md
+++ b/docs/tutorials/https/haproxy.md
@@ -1,6 +1,6 @@
---
sidebar_position: 201
-title: "🔒 HTTPS using HAProxy"
+title: "HTTPS using HAProxy"
---
:::warning
diff --git a/docs/tutorials/https/nginx.md b/docs/tutorials/https/nginx.md
index ab4936311d..4ee0cb16c9 100644
--- a/docs/tutorials/https/nginx.md
+++ b/docs/tutorials/https/nginx.md
@@ -1,6 +1,6 @@
---
sidebar_position: 200
-title: "🔒 HTTPS using Nginx"
+title: "HTTPS using Nginx"
---
:::warning
diff --git a/docs/tutorials/images.md b/docs/tutorials/images.md
deleted file mode 100644
index c366b7e5d0..0000000000
--- a/docs/tutorials/images.md
+++ /dev/null
@@ -1,280 +0,0 @@
----
-sidebar_position: 6
-title: "🎨 Image Generation"
----
-
-:::warning
-
-This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
-
-:::
-
-# 🎨 Image Generation
-
-Open WebUI supports image generation through three backends: **AUTOMATIC1111**, **ComfyUI**, and **OpenAI DALL·E**. This guide will help you set up and use either of these options.
-
-## AUTOMATIC1111
-
-Open WebUI supports image generation through the **AUTOMATIC1111** [API](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API). Here are the steps to get started:
-
-### Initial Setup
-
-1. Ensure that you have [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) installed.
-2. Launch AUTOMATIC1111 with additional flags to enable API access:
-
- ```python
- ./webui.sh --api --listen
- ```
-
-3. For Docker installation of WebUI with the environment variables preset, use the following command:
-
- ```docker
- docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e AUTOMATIC1111_BASE_URL=http://host.docker.internal:7860/ -e ENABLE_IMAGE_GENERATION=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- ```
-
-### Setting Up Open WebUI with AUTOMATIC1111
-
-1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
-2. Set the `Image Generation Engine` field to `Default (Automatic1111)`.
-3. In the API URL field, enter the address where AUTOMATIC1111's API is accessible:
-
- ```txt
- http://:7860/
- ```
-
- If you're running a Docker installation of Open WebUI and AUTOMATIC1111 on the same host, use `http://host.docker.internal:7860/` as your address.
-
-## ComfyUI
-
-ComfyUI provides an alternative interface for managing and interacting with image generation models. Learn more or download it from its [GitHub page](https://github.com/comfyanonymous/ComfyUI). Below are the setup instructions to get ComfyUI running alongside your other tools.
-
-### Initial Setup
-
-1. Download and extract the ComfyUI software package from [GitHub](https://github.com/comfyanonymous/ComfyUI) to your desired directory.
-2. To start ComfyUI, run the following command:
-
- ```python
- python main.py
- ```
-
- For systems with low VRAM, launch ComfyUI with additional flags to reduce memory usage:
-
- ```python
- python main.py --lowvram
- ```
-
-3. For Docker installation of WebUI with the environment variables preset, use the following command:
-
- ```docker
- docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e COMFYUI_BASE_URL=http://host.docker.internal:7860/ -e ENABLE_IMAGE_GENERATION=True -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
- ```
-
-### Setting Up Open WebUI with ComfyUI
-
-#### Setting Up FLUX.1 Models
-
-1. **Model Checkpoints**:
-
-- Download either the `FLUX.1-schnell` or `FLUX.1-dev` model from the [black-forest-labs HuggingFace page](https://huggingface.co/black-forest-labs).
-- Place the model checkpoint(s) in both the `models/checkpoints` and `models/unet` directories of ComfyUI. Alternatively, you can create a symbolic link between `models/checkpoints` and `models/unet` to ensure both directories contain the same model checkpoints.
-
-2. **VAE Model**:
-
-- Download `ae.safetensors` VAE from [here](https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors).
-- Place it in the `models/vae` ComfyUI directory.
-
-3. **CLIP Model**:
-
-- Download `clip_l.safetensors` from [here](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main).
-- Place it in the `models/clip` ComfyUI directory.
-
-4. **T5XXL Model**:
-
-- Download either the `t5xxl_fp16.safetensors` or `t5xxl_fp8_e4m3fn.safetensors` model from [here](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main).
-- Place it in the `models/clip` ComfyUI directory.
-
-To integrate ComfyUI into Open WebUI, follow these steps:
-
-#### Step 1: Configure Open WebUI Settings
-
-1. Navigate to the **Admin Panel** in Open WebUI.
-2. Click on **Settings** and then select the **Images** tab.
-3. In the `Image Generation Engine` field, choose `ComfyUI`.
-4. In the **API URL** field, enter the address where ComfyUI's API is accessible, following this format: `http://:8188/`.
- - Set the environment variable `COMFYUI_BASE_URL` to this address to ensure it persists within the WebUI.
-
-#### Step 2: Verify the Connection and Enable Image Generation
-
-1. Ensure ComfyUI is running and that you've successfully verified the connection to Open WebUI. You won't be able to proceed without a successful connection.
-2. Once the connection is verified, toggle on **Image Generation (Experimental)**. More options will be presented to you.
-3. Continue to step 3 for the final configuration steps.
-
-#### Step 3: Configure ComfyUI Settings and Import Workflow
-
-1. Enable developer mode within ComfyUI. To do this, look for the gear icon above the **Queue Prompt** button within ComfyUI and enable the `Dev Mode` toggle.
-2. Export the desired workflow from ComfyUI in `API format` using the `Save (API Format)` button. The file will be downloaded as `workflow_api.json` if done correctly.
-3. Return to Open WebUI and click the **Click here to upload a workflow.json file** button.
-4. Select the `workflow_api.json` file to import the exported workflow from ComfyUI into Open WebUI.
-5. After importing the workflow, you must map the `ComfyUI Workflow Nodes` according to the imported workflow node IDs.
-6. Set `Set Default Model` to the name of the model file being used, such as `flux1-dev.safetensors`.
-
-:::info
-
-You may need to adjust an `Input Key` or two within Open WebUI's `ComfyUI Workflow Nodes` section to match a node within your workflow.
-For example, `seed` may need to be renamed to `noise_seed` to match the input key used by the corresponding node in your imported workflow.
-
-:::
-
-:::tip
-
-Some workflows, such as those that use any of the Flux models, may require multiple node IDs for a single node entry field within Open WebUI. If a node entry field requires multiple IDs, separate them with commas (e.g., `1` or `1, 2`).
-
-:::
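-
-If you are unsure which node IDs and input keys your exported workflow actually uses, you can inspect `workflow_api.json` before filling in the fields. A minimal sketch, assuming `jq` is installed:
-
-```bash
-# Lists each node ID together with its class type and the input keys it accepts,
-# which is the information the ComfyUI Workflow Nodes section asks you to map.
-jq 'to_entries[] | {node_id: .key, class: .value.class_type, inputs: (.value.inputs | keys)}' workflow_api.json
-```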
-
-7. Click `Save` to apply the settings and enjoy image generation with ComfyUI integrated into Open WebUI!
-
-After completing these steps, your ComfyUI setup should be integrated with Open WebUI, and you can use the Flux.1 models for image generation.
-
-### Configuring with SwarmUI
-
-SwarmUI uses ComfyUI as its backend. To get Open WebUI working with SwarmUI, append `ComfyBackendDirect` to the `ComfyUI Base URL`. You will also want to set up SwarmUI with LAN access. After these adjustments, configuring Open WebUI for SwarmUI is the same as [Step 1: Configure Open WebUI Settings](#step-1-configure-open-webui-settings) above.
-
-
-#### SwarmUI API URL
-
-The address you will input as the ComfyUI Base URL will look like: `http://<your_swarmui_address>:7801/ComfyBackendDirect`
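-
-To confirm the passthrough works before configuring Open WebUI, you can query ComfyUI's API through SwarmUI. This is only a sketch: it assumes SwarmUI's default port `7801` and that the `ComfyBackendDirect` route forwards ComfyUI's `/system_stats` endpoint.
-
-```bash
-# A JSON response here indicates that Open WebUI will be able to reach ComfyUI via SwarmUI.
-curl http://<your_swarmui_address>:7801/ComfyBackendDirect/system_stats
-```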
-
-## OpenAI
-
-Open WebUI also supports image generation through the **OpenAI APIs**. This option includes a selector for choosing between DALL·E 2, DALL·E 3, and GPT-Image-1, each supporting different image sizes.
-
-### Initial Setup
-
-1. Obtain an [API key](https://platform.openai.com/api-keys) from OpenAI.
-
-### Configuring Open WebUI
-
-1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
-2. Set the `Image Generation Engine` field to `Open AI`.
-3. Enter your OpenAI API key.
-4. Choose the model you wish to use. Note that image size options will depend on the selected model:
- - **DALL·E 2**: Supports `256x256`, `512x512`, or `1024x1024` images.
- - **DALL·E 3**: Supports `1024x1024`, `1792x1024`, or `1024x1792` images.
- - **GPT-Image-1**: Supports `auto`, `1024x1024`, `1536x1024`, or `1024x1536` images.
-
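-If image generation fails after configuration, it can help to verify the API key and model outside Open WebUI first. A minimal sketch against OpenAI's image endpoint, assuming your key is exported as `OPENAI_API_KEY`:
-
-```bash
-# Generates a single test image with DALL·E 3; swap the model and size as needed.
-curl https://api.openai.com/v1/images/generations \
-  -H "Content-Type: application/json" \
-  -H "Authorization: Bearer $OPENAI_API_KEY" \
-  -d '{"model": "dall-e-3", "prompt": "a white siamese cat", "size": "1024x1024"}'
-```
-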
-### Azure OpenAI
-
-Image generation with Azure OpenAI DALL·E or GPT-Image-1 is supported with Open WebUI. Configure image generation as follows:
-
-1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
-2. Set the `Image Generation Engine` field to `Open AI` (Azure OpenAI uses the same syntax as OpenAI).
-3. Change the API endpoint URL to `https://<instance>.cognitiveservices.azure.com/openai/deployments/<deployment-name>/`. Use the instance and deployment name exactly as they appear in your Azure AI Foundry settings.
-4. Set the API version to the value shown in your Azure AI Foundry settings.
-5. Enter your Azure OpenAI API key.
-
-:::tip
-
-An alternative API endpoint URL format is `https://<endpoint-name>.openai.azure.com/openai/deployments/<deployment-name>/` - you can find your endpoint name at https://ai.azure.com/resource/overview and your deployment (model) name at https://ai.azure.com/resource/deployments.
-You can also copy the Target URI from your deployment's detail page, but remember to remove everything after the model name.
-For example, if your Target URI is `https://test.openai.azure.com/openai/deployments/gpt-image-1/images/generations?api-version=2025-04-01-preview`, the API endpoint URL in Open WebUI should be `https://test.openai.azure.com/openai/deployments/gpt-image-1/`.
-
-:::
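-
-As with the standard OpenAI setup, you can sanity-check the Azure deployment outside Open WebUI. The sketch below assumes a DALL·E 3 deployment, an example API version of `2024-02-01`, and a key exported as `AZURE_OPENAI_API_KEY`; substitute the instance, deployment name, and API version from your Azure AI Foundry settings.
-
-```bash
-# A successful response returns JSON containing the generated image URL or data.
-curl "https://<instance>.openai.azure.com/openai/deployments/<deployment-name>/images/generations?api-version=2024-02-01" \
-  -H "Content-Type: application/json" \
-  -H "api-key: $AZURE_OPENAI_API_KEY" \
-  -d '{"prompt": "a white siamese cat", "size": "1024x1024"}'
-```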
-
-### LiteLLM Proxy with OpenAI Endpoints
-
-Image generation with a LiteLLM proxy using OpenAI endpoints is supported with Open WebUI. Configure image generation as follows:
-
-1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
-2. Set the `Image Generation Engine` field to `Open AI`.
-3. Change the API endpoint URL to `https://<litellm_host>:<port>/v1`.
-4. Enter your LiteLLM API key.
-5. The API version can be left blank.
-6. Enter the image model name as it appears in your LiteLLM configuration.
-7. Set the image size to one of the available sizes for the selected model.
-
-:::tip
-
-To find your LiteLLM connection information, navigate to the **Admin Panel** > **Settings** > **Connections** menu.
-Your connection information will be listed under the OpenAI API connection.
-
-:::
-
-## Image Router
-
-Open WebUI also supports image generation through the **Image Router APIs**. Image Router is an [open source](https://github.com/DaWe35/image-router) image generation proxy that unifies the most popular models behind a single API.
-
-### Initial Setup
-
-1. Obtain an [API key](https://imagerouter.io/api-keys) from Image Router.
-
-### Configuring Open WebUI
-
-1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
-2. Set the `Image Generation Engine` field to `Open AI` (Image Router uses the same syntax as OpenAI).
-3. Change the API endpoint URL to `https://api.imagerouter.io/v1/openai`.
-4. Enter your Image Router API key.
-5. Enter the model you wish to use. Do not use the dropdown to select a model; enter the model name manually instead. For more information, [see all models](https://imagerouter.io/models).
-
-## Gemini
-
-Open WebUI also supports image generation through the **Google AI Studio (Gemini) API**.
-
-### Initial Setup
-
-1. Obtain an [API key](https://aistudio.google.com/api-keys) from Google AI Studio.
-2. You may need to create a project and enable the `Generative Language API` in addition to adding billing information.
-
-### Configuring Open WebUI
-
-1. In Open WebUI, navigate to the **Admin Panel** > **Settings** > **Images** menu.
-2. Set the `Image Generation Engine` field to `Gemini`.
-3. Set the `API Base URL` to `https://generativelanguage.googleapis.com/v1beta`.
-4. Enter your Google AI Studio [API key](https://aistudio.google.com/api-keys).
-5. Enter the model you wish to use from these [available models](https://ai.google.dev/gemini-api/docs/imagen#model-versions).
-6. Set the image size to one of the available [image sizes](https://ai.google.dev/gemini-api/docs/image-generation#aspect_ratios).
-
-:::info
-
-This feature appears to work only with models served by this endpoint: `https://generativelanguage.googleapis.com/v1beta/models/<model-name>:predict`.
-
-Google Imagen models use this endpoint, while Gemini models use a different endpoint ending with `:generateContent`.
-
-Imagen model endpoint example:
- - `https://generativelanguage.googleapis.com/v1beta/models/imagen-4.0-generate-001:predict`.
- - [Documentation for Imagen models](https://ai.google.dev/gemini-api/docs/imagen)
-
-Gemini model endpoint example:
- - `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent`.
- - [Documentation for Gemini models](https://ai.google.dev/gemini-api/docs/image-generation)
-
-Trying to call a Gemini model such as gemini-2.5-flash-image (aka *Nano Banana*) will result in an error due to the difference in supported endpoints:
-
-`400: [ERROR: models/gemini-2.5-flash-image is not found for API version v1beta, or is not supported for predict. Call ListModels to see the list of available models and their supported methods.]`
-
-:::
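-
-To see the endpoint difference in practice, here is a minimal sketch calling an Imagen model through the `:predict` endpoint directly. It assumes your key is exported as `GEMINI_API_KEY` and that `imagen-4.0-generate-001` is available to your account.
-
-```bash
-# Imagen models expect an "instances"/"parameters" payload on the :predict endpoint;
-# the same request against a Gemini model fails with the 400 error shown above.
-curl "https://generativelanguage.googleapis.com/v1beta/models/imagen-4.0-generate-001:predict" \
-  -H "Content-Type: application/json" \
-  -H "x-goog-api-key: $GEMINI_API_KEY" \
-  -d '{"instances": [{"prompt": "a white siamese cat"}], "parameters": {"sampleCount": 1}}'
-```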
-
-## Using Image Generation
-
-### Method 1
-
-1. Toggle the `Image Generation` switch to on.
-2. Enter your image generation prompt.
-3. Click `Send`.
-
-
-
-### Method 2
-
-
-
-1. First, use a text generation model to write a prompt for image generation.
-2. After the response has finished, you can click the Picture icon to generate an image.
-3. After the image has finished generating, it will be returned automatically in chat.
-
-:::tip
-
-You can also edit the LLM's response and enter your image generation prompt as the message to send off for image generation, instead of using the actual response provided by the LLM.
-
-:::
diff --git a/docs/tutorials/integrations/_category_.json b/docs/tutorials/integrations/_category_.json
index b2ba55f386..36497a8d43 100644
--- a/docs/tutorials/integrations/_category_.json
+++ b/docs/tutorials/integrations/_category_.json
@@ -1,5 +1,5 @@
{
- "label": "🔗 Integrations",
+ "label": "Integrations",
"position": 2,
"link": {
"type": "generated-index"
diff --git a/docs/tutorials/integrations/amazon-bedrock.md b/docs/tutorials/integrations/amazon-bedrock.md
index d0040b01d3..9ff014462c 100644
--- a/docs/tutorials/integrations/amazon-bedrock.md
+++ b/docs/tutorials/integrations/amazon-bedrock.md
@@ -1,6 +1,6 @@
---
sidebar_position: 31
-title: "🛌 Integrate with Amazon Bedrock"
+title: "Integrate with Amazon Bedrock"
---
:::warning
diff --git a/docs/tutorials/integrations/azure-openai/index.mdx b/docs/tutorials/integrations/azure-openai/index.mdx
index 61782fb304..e4b2b0bc79 100644
--- a/docs/tutorials/integrations/azure-openai/index.mdx
+++ b/docs/tutorials/integrations/azure-openai/index.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "✨ Azure OpenAI with EntraID"
+title: "Azure OpenAI with EntraID"
---
:::warning
diff --git a/docs/tutorials/integrations/backend-controlled-ui-compatible-flow.md b/docs/tutorials/integrations/backend-controlled-ui-compatible-flow.md
index 16feca2dbd..fef44d2b61 100644
--- a/docs/tutorials/integrations/backend-controlled-ui-compatible-flow.md
+++ b/docs/tutorials/integrations/backend-controlled-ui-compatible-flow.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🔄 Backend-Controlled, UI-Compatible API Flow"
+title: "Backend-Controlled, UI-Compatible API Flow"
---
:::warning
diff --git a/docs/tutorials/integrations/browser-search-engine.md b/docs/tutorials/integrations/browser-search-engine.md
index 3849bde1b9..173a6519d7 100644
--- a/docs/tutorials/integrations/browser-search-engine.md
+++ b/docs/tutorials/integrations/browser-search-engine.md
@@ -1,6 +1,6 @@
---
sidebar_position: 16
-title: "🌐 Browser Search Engine"
+title: "Browser Search Engine"
---
:::warning
diff --git a/docs/tutorials/integrations/continue-dev.md b/docs/tutorials/integrations/continue-dev.md
index c8aa902e78..bf52149a51 100644
--- a/docs/tutorials/integrations/continue-dev.md
+++ b/docs/tutorials/integrations/continue-dev.md
@@ -1,6 +1,6 @@
---
sidebar_position: 13
-title: "⚛️ Continue.dev VS Code Extension with Open WebUI"
+title: "Continue.dev VS Code Extension with Open WebUI"
---
:::warning
diff --git a/docs/tutorials/integrations/custom-ca.md b/docs/tutorials/integrations/custom-ca.md
index a3a12dbdd4..bb66d20226 100644
--- a/docs/tutorials/integrations/custom-ca.md
+++ b/docs/tutorials/integrations/custom-ca.md
@@ -1,6 +1,6 @@
---
sidebar_position: 14
-title: "🛃 Setting up with Custom CA Store"
+title: "Setting up with Custom CA Store"
---
:::warning
diff --git a/docs/tutorials/integrations/deepseekr1-dynamic.md b/docs/tutorials/integrations/deepseekr1-dynamic.md
index 9872f01ad4..8666a327e1 100644
--- a/docs/tutorials/integrations/deepseekr1-dynamic.md
+++ b/docs/tutorials/integrations/deepseekr1-dynamic.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1
-title: "🐋 Run DeepSeek R1 Dynamic 1.58-bit with Llama.cpp"
+title: "Run DeepSeek R1 Dynamic 1.58-bit with Llama.cpp"
---
A huge shoutout to **UnslothAI** for their incredible efforts! Thanks to their hard work, we can now run the **full DeepSeek-R1** 671B parameter model in its dynamic 1.58-bit quantized form (compressed to just 131GB) on **Llama.cpp**! And the best part? You no longer have to despair about needing massive enterprise-class GPUs or servers — it’s possible to run this model on your personal machine (albeit slowly for most consumer hardware).
diff --git a/docs/tutorials/integrations/firefox-sidebar.md b/docs/tutorials/integrations/firefox-sidebar.md
index 8942aafbf5..a222d6d0c8 100644
--- a/docs/tutorials/integrations/firefox-sidebar.md
+++ b/docs/tutorials/integrations/firefox-sidebar.md
@@ -1,6 +1,6 @@
---
sidebar_position: 4100
-title: "🦊 Firefox AI Chatbot Sidebar"
+title: "Firefox AI Chatbot Sidebar"
---
:::warning
diff --git a/docs/tutorials/integrations/helicone.md b/docs/tutorials/integrations/helicone.md
index e27cf67284..3f3125fa82 100644
--- a/docs/tutorials/integrations/helicone.md
+++ b/docs/tutorials/integrations/helicone.md
@@ -1,5 +1,5 @@
---
-title: "🕵🏻♀️ Monitor your LLM requests with Helicone"
+title: "Monitor your LLM requests with Helicone"
sidebar_position: 19
---
diff --git a/docs/tutorials/integrations/ipex_llm.md b/docs/tutorials/integrations/ipex_llm.md
index 554d1ec9fc..36ce47dead 100644
--- a/docs/tutorials/integrations/ipex_llm.md
+++ b/docs/tutorials/integrations/ipex_llm.md
@@ -1,6 +1,6 @@
---
sidebar_position: 11
-title: "🖥️ Local LLM Setup with IPEX-LLM on Intel GPU"
+title: "Local LLM Setup with IPEX-LLM on Intel GPU"
---
:::warning
diff --git a/docs/tutorials/integrations/iterm2.md b/docs/tutorials/integrations/iterm2.md
index efde46d8c0..b78cc41d8d 100644
--- a/docs/tutorials/integrations/iterm2.md
+++ b/docs/tutorials/integrations/iterm2.md
@@ -1,5 +1,5 @@
---
-title: "💻 Iterm2 AI Integration"
+title: "Iterm2 AI Integration"
---
:::warning
diff --git a/docs/tutorials/jupyter.md b/docs/tutorials/integrations/jupyter.md
similarity index 99%
rename from docs/tutorials/jupyter.md
rename to docs/tutorials/integrations/jupyter.md
index ee20831c7c..da0310bc65 100644
--- a/docs/tutorials/jupyter.md
+++ b/docs/tutorials/integrations/jupyter.md
@@ -1,6 +1,6 @@
---
sidebar_position: 321
-title: "🐍 Jupyter Notebook Integration"
+title: "Jupyter Notebook Integration"
---
:::warning
diff --git a/docs/tutorials/integrations/langfuse.md b/docs/tutorials/integrations/langfuse.md
index aa54381408..9754716510 100644
--- a/docs/tutorials/integrations/langfuse.md
+++ b/docs/tutorials/integrations/langfuse.md
@@ -1,6 +1,6 @@
---
sidebar_position: 20
-title: "🪢 Monitoring and Debugging with Langfuse"
+title: "Monitoring and Debugging with Langfuse"
---
## Langfuse Integration with Open WebUI
diff --git a/docs/tutorials/integrations/libre-translate.md b/docs/tutorials/integrations/libre-translate.md
index c08a670541..4b1f44a816 100644
--- a/docs/tutorials/integrations/libre-translate.md
+++ b/docs/tutorials/integrations/libre-translate.md
@@ -1,6 +1,6 @@
---
sidebar_position: 25
-title: "🔠 LibreTranslate Integration"
+title: "LibreTranslate Integration"
---
:::warning
diff --git a/docs/tutorials/integrations/okta-oidc-sso.md b/docs/tutorials/integrations/okta-oidc-sso.md
index bdad5614df..f5f2e3758a 100644
--- a/docs/tutorials/integrations/okta-oidc-sso.md
+++ b/docs/tutorials/integrations/okta-oidc-sso.md
@@ -1,6 +1,6 @@
---
sidebar_position: 40
-title: "🔗 Okta OIDC SSO Integration"
+title: "Okta OIDC SSO Integration"
---
:::warning
diff --git a/docs/tutorials/integrations/onedrive-sharepoint.mdx b/docs/tutorials/integrations/onedrive-sharepoint.mdx
index 8bee5bc3aa..9a20bdbcf8 100644
--- a/docs/tutorials/integrations/onedrive-sharepoint.mdx
+++ b/docs/tutorials/integrations/onedrive-sharepoint.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 32
-title: "📁 Integrate with OneDrive & SharePoint"
+title: "Integrate with OneDrive & SharePoint"
---
:::info
diff --git a/docs/tutorials/integrations/redis.md b/docs/tutorials/integrations/redis.md
index 6b7e153b3c..60bc7fef6d 100644
--- a/docs/tutorials/integrations/redis.md
+++ b/docs/tutorials/integrations/redis.md
@@ -1,6 +1,6 @@
---
sidebar_position: 30
-title: "🔗 Redis Websocket Support"
+title: "Redis Websocket Support"
---
:::warning
diff --git a/docs/tutorials/maintenance/_category_.json b/docs/tutorials/maintenance/_category_.json
index 0f1c0c85c6..53e8c37ec6 100644
--- a/docs/tutorials/maintenance/_category_.json
+++ b/docs/tutorials/maintenance/_category_.json
@@ -1,5 +1,5 @@
{
- "label": "🛠️ Maintenance",
+ "label": "Maintenance",
"position": 5,
"link": {
"type": "generated-index"
diff --git a/docs/tutorials/maintenance/backups.md b/docs/tutorials/maintenance/backups.md
index 1e04e9da37..e146af251e 100644
--- a/docs/tutorials/maintenance/backups.md
+++ b/docs/tutorials/maintenance/backups.md
@@ -1,6 +1,6 @@
---
sidebar_position: 1000
-title: "💾 Backups"
+title: "Backups"
---
:::warning
diff --git a/docs/tutorials/database.mdx b/docs/tutorials/maintenance/database.mdx
similarity index 91%
rename from docs/tutorials/database.mdx
rename to docs/tutorials/maintenance/database.mdx
index 4ca91866ce..68455aeb95 100644
--- a/docs/tutorials/database.mdx
+++ b/docs/tutorials/maintenance/database.mdx
@@ -1,8 +1,15 @@
---
sidebar_position: 310
-title: "📦 Exporting and Importing Database"
+title: "Exporting and Importing Database"
---
+:::warning
+
+This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the contributing tutorial.
+
+:::
+
+
If you need to migrate your **Open WebUI** data (e.g., chat histories, configurations, etc.) from one server to another or back it up for later use, you can export and import the database. This guide assumes you're running Open WebUI using the internal SQLite database (not PostgreSQL).
Follow the steps below to export and import the `webui.db` file, which contains your database.
diff --git a/docs/tutorials/s3-storage.md b/docs/tutorials/maintenance/s3-storage.md
similarity index 99%
rename from docs/tutorials/s3-storage.md
rename to docs/tutorials/maintenance/s3-storage.md
index ab95a37b61..ad5844ec6f 100644
--- a/docs/tutorials/s3-storage.md
+++ b/docs/tutorials/maintenance/s3-storage.md
@@ -1,6 +1,6 @@
---
sidebar_position: 320
-title: "🪣 Switching to S3 Storage"
+title: "Switching to S3 Storage"
---
:::warning
diff --git a/docs/tutorials/offline-mode.md b/docs/tutorials/offline-mode.mdx
similarity index 93%
rename from docs/tutorials/offline-mode.md
rename to docs/tutorials/offline-mode.mdx
index 5505bac168..bf7cfc7b5d 100644
--- a/docs/tutorials/offline-mode.md
+++ b/docs/tutorials/offline-mode.mdx
@@ -1,8 +1,12 @@
---
-sidebar_position: 24
-title: "🔌 Offline Mode"
+sidebar_position: 300
+title: "Offline Mode"
---
+import { TopBanners } from "@site/src/components/TopBanners";
+
+
+
:::warning
This tutorial is a community contribution and is not supported by the Open WebUI team. It serves only as a demonstration on how to customize Open WebUI for your specific use case. Want to contribute? Check out the [contributing tutorial](../contributing.mdx).
@@ -49,7 +53,7 @@ Consider if you need to start the application offline from the beginning of your
### I: Speech-To-Text
-The local `whisper` installation does not include the model by default. In this regard, you can follow the [guide](/docs/tutorials/speech-to-text/stt-config.md) only partially if you want to use an external model/provider. To use the local `whisper` application, you must first download the model of your choice (e.g. [Huggingface - Systran](https://huggingface.co/Systran)).
+The local `whisper` installation does not include the model by default. In this regard, you can follow the [guide](/features/audio/speech-to-text/stt-config.md) only partially if you want to use an external model/provider. To use the local `whisper` application, you must first download the model of your choice (e.g. [Huggingface - Systran](https://huggingface.co/Systran)).
```python
from faster_whisper import WhisperModel
@@ -88,14 +92,6 @@ The contents of the download directory must be copied to `/app/backend/data/cach
This is the easiest approach to achieving the offline setup with almost all features available in the online version. Apply only the features you want to use for your deployment.
-### II: Speech-To-Text
-
-Follow the [guide](./speech-to-text/stt-config.md).
-
-### II: Text-To-Speech
-
-Follow one of the [guides](https://docs.openwebui.com/category/%EF%B8%8F-text-to-speech).
-
### II: Embedding Model
In your Open WebUI installation, navigate to `Admin Settings` > `Settings` > `Documents` and select the embedding model you would like to use (e.g. [sentence-transformer/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)). After the selection, click the download button next to it.
diff --git a/docs/tutorials/tab-nginx/_category_.json b/docs/tutorials/tab-nginx/_category_.json
new file mode 100644
index 0000000000..131b921d35
--- /dev/null
+++ b/docs/tutorials/tab-nginx/_category_.json
@@ -0,0 +1,7 @@
+{
+ "label": "HTTPS",
+ "position": 8,
+ "link": {
+ "type": "generated-index"
+ }
+}
diff --git a/docs/tutorials/tips/_category_.json b/docs/tutorials/tips/_category_.json
index 10bd6bd5d1..9189b71228 100644
--- a/docs/tutorials/tips/_category_.json
+++ b/docs/tutorials/tips/_category_.json
@@ -1,6 +1,6 @@
{
- "label": "💡 Tips & Tricks",
- "position": 900,
+ "label": "Tips & Tricks",
+ "position": 0,
"link": {
"type": "generated-index"
}
diff --git a/docs/tutorials/tips/contributing-tutorial.md b/docs/tutorials/tips/contributing-tutorial.md
index ba39c1be7b..2611325f83 100644
--- a/docs/tutorials/tips/contributing-tutorial.md
+++ b/docs/tutorials/tips/contributing-tutorial.md
@@ -1,6 +1,6 @@
---
sidebar_position: 2
-title: "🤝 Contributing Tutorials"
+title: "Contributing Tutorials"
---
:::warning
@@ -15,7 +15,7 @@ We appreciate your interest in contributing tutorials to the Open WebUI document
## Contributing Steps
-1. **Fork the `openwebui/docs` GitHub Repository**
+1. **Fork the `open-webui/docs` GitHub Repository**
- Navigate to the [Open WebUI Docs Repository](https://github.com/open-webui/docs) on GitHub.
- Click the **Fork** button at the top-right corner to create a copy under your GitHub account.
diff --git a/docs/tutorials/tips/improve-performance-local.md b/docs/tutorials/tips/improve-performance-local.md
index 607666d57f..4bb8a75966 100644
--- a/docs/tutorials/tips/improve-performance-local.md
+++ b/docs/tutorials/tips/improve-performance-local.md
@@ -1,6 +1,6 @@
---
sidebar_position: 12
-title: "⚡ Improve Local LLM Performance with Dedicated Task Models"
+title: "Improve Local LLM Performance with Dedicated Task Models"
---
## Improve Performance with Dedicated Task Models
@@ -11,21 +11,23 @@ This guide explains how to optimize your setup by configuring a dedicated, light
---
-> [!TIP]
->
->## Why Does Open-WebUI Feel Slow?
->
->By default, Open-WebUI has several background tasks that can make it feel like magic but can also place a heavy load on local resources:
->
->- **Title Generation**
->- **Tag Generation**
->- **Autocomplete Generation** (this function triggers on every keystroke)
->- **Search Query Generation**
->
->Each of these features makes asynchronous requests to your model. For example, continuous calls from the autocomplete feature can significantly delay responses on devices with limited memory >or processing power, such as a Mac with 32GB of RAM running a 32B quantized model.
->
->Optimizing the task model can help isolate these background tasks from your main chat application, improving overall responsiveness.
->
+:::tip
+
+## Why Does Open-WebUI Feel Slow?
+
+By default, Open-WebUI has several background tasks that can make it feel like magic but can also place a heavy load on local resources:
+
+- **Title Generation**
+- **Tag Generation**
+- **Autocomplete Generation** (this function triggers on every keystroke)
+- **Search Query Generation**
+
+Each of these features makes asynchronous requests to your model. For example, continuous calls from the autocomplete feature can significantly delay responses on devices with limited memory or processing power, such as a Mac with 32GB of RAM running a 32B quantized model.
+
+Optimizing the task model can help isolate these background tasks from your main chat application, improving overall responsiveness.
+
+:::
+
---
## ⚡ How to Optimize Task Model Performance
diff --git a/docs/tutorials/tips/one-click-ollama-launcher.mdx b/docs/tutorials/tips/one-click-ollama-launcher.mdx
index 207b1b7fc2..9dfccde9fd 100644
--- a/docs/tutorials/tips/one-click-ollama-launcher.mdx
+++ b/docs/tutorials/tips/one-click-ollama-launcher.mdx
@@ -1,6 +1,6 @@
---
sidebar_position: 21
-title: "🚀 One-Click Ollama + Open WebUI Launcher"
+title: "One-Click Ollama + Open WebUI Launcher"
---
:::warning
diff --git a/docs/tutorials/tips/rag-tutorial.md b/docs/tutorials/tips/rag-tutorial.md
index 9005342783..e20f356c99 100644
--- a/docs/tutorials/tips/rag-tutorial.md
+++ b/docs/tutorials/tips/rag-tutorial.md
@@ -1,6 +1,6 @@
---
sidebar_position: 3
-title: "🔎 Open WebUI RAG Tutorial"
+title: "Open WebUI RAG Tutorial"
---
:::warning
diff --git a/docs/tutorials/tips/reduce-ram-usage.md b/docs/tutorials/tips/reduce-ram-usage.md
index 95f50198bc..01a6cd5644 100644
--- a/docs/tutorials/tips/reduce-ram-usage.md
+++ b/docs/tutorials/tips/reduce-ram-usage.md
@@ -1,6 +1,6 @@
---
sidebar_position: 10
-title: "✂️ Reduce RAM Usage"
+title: "Reduce RAM Usage"
---
## Reduce RAM Usage
diff --git a/docs/tutorials/tips/sqlite-database.md b/docs/tutorials/tips/sqlite-database.md
index d9c6f3d34f..6b3cb466d4 100644
--- a/docs/tutorials/tips/sqlite-database.md
+++ b/docs/tutorials/tips/sqlite-database.md
@@ -1,6 +1,6 @@
---
sidebar_position: 11
-title: "💠 SQLite Database Overview"
+title: "SQLite Database Overview"
---
:::warning
diff --git a/static/images/enterprise/customers/samsung-semiconductor/hero.png b/static/images/enterprise/customers/samsung-semiconductor/hero.png
new file mode 100644
index 0000000000..5e87d6d3c2
Binary files /dev/null and b/static/images/enterprise/customers/samsung-semiconductor/hero.png differ
diff --git a/static/images/enterprise/customers/samsung-semiconductor/open-webui.png b/static/images/enterprise/customers/samsung-semiconductor/open-webui.png
new file mode 100644
index 0000000000..1aa8abedbc
Binary files /dev/null and b/static/images/enterprise/customers/samsung-semiconductor/open-webui.png differ
diff --git a/static/images/enterprise/hero.png b/static/images/enterprise/hero.png
new file mode 100644
index 0000000000..8307755915
Binary files /dev/null and b/static/images/enterprise/hero.png differ
diff --git a/static/images/image-generation-and-editing/automatic1111-settings.png b/static/images/image-generation-and-editing/automatic1111-settings.png
new file mode 100644
index 0000000000..2b1a1bb38d
Binary files /dev/null and b/static/images/image-generation-and-editing/automatic1111-settings.png differ
diff --git a/static/images/image-generation-and-editing/azure-openai-settings.png b/static/images/image-generation-and-editing/azure-openai-settings.png
new file mode 100644
index 0000000000..9f5f7a2083
Binary files /dev/null and b/static/images/image-generation-and-editing/azure-openai-settings.png differ
diff --git a/static/images/image-generation-and-editing/comfyui-editing-settings.png b/static/images/image-generation-and-editing/comfyui-editing-settings.png
new file mode 100644
index 0000000000..79464042db
Binary files /dev/null and b/static/images/image-generation-and-editing/comfyui-editing-settings.png differ
diff --git a/static/images/image-generation-and-editing/comfyui-generation-settings.png b/static/images/image-generation-and-editing/comfyui-generation-settings.png
new file mode 100644
index 0000000000..e6a081551d
Binary files /dev/null and b/static/images/image-generation-and-editing/comfyui-generation-settings.png differ
diff --git a/static/images/image-generation-and-editing/comfyui-node-mapping.png b/static/images/image-generation-and-editing/comfyui-node-mapping.png
new file mode 100644
index 0000000000..8f9987f8f9
Binary files /dev/null and b/static/images/image-generation-and-editing/comfyui-node-mapping.png differ
diff --git a/static/images/image-generation-and-editing/comfyui-unet-name-node.png b/static/images/image-generation-and-editing/comfyui-unet-name-node.png
new file mode 100644
index 0000000000..e6a081551d
Binary files /dev/null and b/static/images/image-generation-and-editing/comfyui-unet-name-node.png differ
diff --git a/static/images/image-generation-and-editing/comfyui-workflow-upload.png b/static/images/image-generation-and-editing/comfyui-workflow-upload.png
new file mode 100644
index 0000000000..e6a081551d
Binary files /dev/null and b/static/images/image-generation-and-editing/comfyui-workflow-upload.png differ
diff --git a/static/images/image-generation-and-editing/gemini-settings.png b/static/images/image-generation-and-editing/gemini-settings.png
new file mode 100644
index 0000000000..f1e48215f8
Binary files /dev/null and b/static/images/image-generation-and-editing/gemini-settings.png differ
diff --git a/static/images/image-generation-and-editing/image-router-settings.png b/static/images/image-generation-and-editing/image-router-settings.png
new file mode 100644
index 0000000000..9f5f7a2083
Binary files /dev/null and b/static/images/image-generation-and-editing/image-router-settings.png differ
diff --git a/static/images/image-generation-and-editing/openai-settings.png b/static/images/image-generation-and-editing/openai-settings.png
new file mode 100644
index 0000000000..9f5f7a2083
Binary files /dev/null and b/static/images/image-generation-and-editing/openai-settings.png differ