From 5a4b48def5927d17b2f0d320585123d5b79c7d75 Mon Sep 17 00:00:00 2001 From: silentoplayz Date: Mon, 6 Oct 2025 01:23:32 -0400 Subject: [PATCH] chore: standardize list style --- docs/features/banners.md | 28 ++--- docs/features/chat-features/chatshare.md | 26 ++-- docs/features/code-execution/artifacts.md | 62 +++++----- docs/features/code-execution/mermaid.md | 8 +- docs/features/code-execution/python.md | 28 ++--- .../document-extraction/apachetika.md | 40 +++--- docs/features/document-extraction/docling.md | 40 +++--- .../document-extraction/mistral-ocr.md | 28 ++--- docs/features/webhooks.md | 12 +- docs/features/workspace/groups.md | 30 ++--- docs/features/workspace/models.md | 44 +++---- docs/features/workspace/permissions.md | 52 ++++---- docs/features/workspace/prompts.md | 116 +++++++++--------- docs/features/workspace/roles.md | 34 ++--- .../advanced-topics/https-encryption.md | 44 +++---- .../advanced-topics/monitoring/index.md | 82 ++++++------- .../quick-start/tab-docker/DockerSwarm.md | 2 +- .../quick-start/tab-docker/DockerUpdating.md | 2 +- docs/tutorials/images.md | 24 ++-- .../tutorials/integrations/firefox-sidebar.md | 56 ++++----- docs/tutorials/integrations/langfuse.md | 8 +- .../tutorials/integrations/libre-translate.md | 20 +-- docs/tutorials/integrations/okta-oidc-sso.md | 50 ++++---- docs/tutorials/integrations/redis.md | 16 +-- .../openai-edge-tts-integration.md | 2 +- .../openedai-speech-integration.md | 62 +++++----- docs/tutorials/tips/reduce-ram-usage.md | 6 +- docs/tutorials/web-search/external.md | 28 ++--- docs/tutorials/web-search/searxng.md | 8 +- docs/tutorials/web-search/yacy.md | 10 +- 30 files changed, 484 insertions(+), 484 deletions(-) diff --git a/docs/features/banners.md b/docs/features/banners.md index a5f56a54e7..11d98f1aa2 100644 --- a/docs/features/banners.md +++ b/docs/features/banners.md @@ -36,29 +36,29 @@ For more information on configuring environment variables in Open WebUI, see [En Environment Variable 
Description --------------------------------- -* `WEBUI_BANNERS`: - * Type: list of dict - * Default: `[]` - * Description: List of banners to show to users. +- `WEBUI_BANNERS`: + - Type: list of dict + - Default: `[]` + - Description: List of banners to show to users. Banner Options ---------------- -* `id`: Unique identifier for the banner. -* `type`: Background color of the banner (info, success, warning, error). -* `title`: Title of the banner. -* `content`: Content of the banner. -* `dismissible`: Whether the banner is dismissible or not. -* `timestamp`: Timestamp for the banner (optional). +- `id`: Unique identifier for the banner. +- `type`: Background color of the banner (info, success, warning, error). +- `title`: Title of the banner. +- `content`: Content of the banner. +- `dismissible`: Whether the banner is dismissible or not. +- `timestamp`: Timestamp for the banner (optional). FAQ ---- -* Q: Can I configure banners through the admin panel? +- Q: Can I configure banners through the admin panel? A: Yes, you can configure banners through the admin panel by navigating to `Admin Panel` -> `Settings` -> `Interface` and clicking on the `+` icon to add a new banner. -* Q: Can I configure banners through environment variables? +- Q: Can I configure banners through environment variables? A: Yes, you can configure banners through environment variables by setting the `WEBUI_BANNERS` environment variable with a list of dictionaries. -* Q: What is the format for the `WEBUI_BANNERS` environment variable? +- Q: What is the format for the `WEBUI_BANNERS` environment variable? A: The format for the `WEBUI_BANNERS` environment variable is a list of dictionaries with the following keys: `id`, `type`, `title`, `content`, `dismissible`, and `timestamp`. -* Q: Can I make banners dismissible? +- Q: Can I make banners dismissible? 
A: Yes, you can make banners dismissible by setting the `dismissible` key to `True` in the banner configuration or by toggling dismissibility for a banner within the UI. diff --git a/docs/features/chat-features/chatshare.md b/docs/features/chat-features/chatshare.md index ef3405d3be..14bf63d244 100644 --- a/docs/features/chat-features/chatshare.md +++ b/docs/features/chat-features/chatshare.md @@ -30,11 +30,11 @@ To share a chat: When you select `Share to Open WebUI Community`: -* A new tab will open, allowing you to upload your chat as a snapshot to the Open WebUI community website (https://openwebui.com/chats/upload). -* You can control who can view your uploaded chat by choosing from the following access settings: - * **Private**: Only you can access this chat. - * **Public**: Anyone on the internet can view the messages displayed in the chat snapshot. - * **Public, Full History**: Anyone on the internet can view the full regeneration history of your chat. +- A new tab will open, allowing you to upload your chat as a snapshot to the Open WebUI community website (https://openwebui.com/chats/upload). +- You can control who can view your uploaded chat by choosing from the following access settings: + - **Private**: Only you can access this chat. + - **Public**: Anyone on the internet can view the messages displayed in the chat snapshot. + - **Public, Full History**: Anyone on the internet can view the full regeneration history of your chat. :::note Note: You can change the permission level of your shared chats on the community platform at any time from your openwebui.com account. @@ -50,12 +50,12 @@ When you select `Copy Link`, a unique share link is generated that can be shared **Important Considerations:** -* The shared chat will only include messages that existed at the time the link was created. Any new messages sent within the chat after the link is generated will not be included, unless the link is deleted and updated with a new link. 
-* The generated share link acts as a static snapshot of the chat at the time the link was generated. -* To view the shared chat, users must: +- The shared chat will only include messages that existed at the time the link was created. Any new messages sent within the chat after the link is generated will not be included, unless the link is deleted and updated with a new link. +- The generated share link acts as a static snapshot of the chat at the time the link was generated. +- To view the shared chat, users must: 1. Have an account on the Open WebUI instance where the link was generated. 2. Be signed in to their account on that instance. -* If a user tries to access the shared link without being signed in, they will be redirected to the login page to log in before they can view the shared chat. +- If a user tries to access the shared link without being signed in, they will be redirected to the login page to log in before they can view the shared chat. ### Viewing Shared Chats @@ -77,10 +77,10 @@ To update a shared chat: The **Share Chat** Modal includes the following options: -* **before**: Opens a new tab to view the previously shared chat from the share link. -* **delete this link**: Deletes the shared link of the chat and presents the initial share chat modal. -* **Share to Open WebUI Community**: Opens a new tab for https://openwebui.com/chats/upload with the chat ready to be shared as a snapshot. -* **Update and Copy Link**: Updates the snapshot of the chat of the previously shared chat link and copies it to your device's clipboard. +- **before**: Opens a new tab to view the previously shared chat from the share link. +- **delete this link**: Deletes the shared link of the chat and presents the initial share chat modal. +- **Share to Open WebUI Community**: Opens a new tab for https://openwebui.com/chats/upload with the chat ready to be shared as a snapshot. 
+- **Update and Copy Link**: Updates the snapshot of the chat of the previously shared chat link and copies it to your device's clipboard. ### Deleting Shared Chats diff --git a/docs/features/code-execution/artifacts.md b/docs/features/code-execution/artifacts.md index 917d0797fb..9c3020a7aa 100644 --- a/docs/features/code-execution/artifacts.md +++ b/docs/features/code-execution/artifacts.md @@ -14,10 +14,10 @@ Open WebUI creates an Artifact when the generated content meets specific criteri 1. **Renderable**: To be displayed as an Artifact, the content must be in a format that Open WebUI supports for rendering. This includes: -* Single-page HTML websites -* Scalable Vector Graphics (SVG) images -* Complete webpages, which include HTML, Javascript, and CSS all in the same Artifact. Do note that HTML is required if generating a complete webpage. -* ThreeJS Visualizations and other JavaScript visualization libraries such as D3.js. +- Single-page HTML websites +- Scalable Vector Graphics (SVG) images +- Complete webpages, which include HTML, Javascript, and CSS all in the same Artifact. Do note that HTML is required if generating a complete webpage. +- ThreeJS Visualizations and other JavaScript visualization libraries such as D3.js. Other content types like Documents (Markdown or Plain Text), Code snippets, and React components are not rendered as Artifacts by Open WebUI. @@ -29,39 +29,39 @@ To use artifacts in Open WebUI, a model must provide content that triggers the r When Open WebUI creates an Artifact, you'll see the content displayed in a dedicated Artifacts window to the right side of the main chat. Here's how to interact with Artifacts: -* **Editing and iterating**: Ask an LLM within the chat to edit or iterate on the content, and these updates will be displayed directly in the Artifact window. You can switch between versions using the version selector at the bottom left of the Artifact. 
Each edit creates a new version, allowing you to track changes using the version selector. -* **Updates**: Open WebUI may update an existing Artifact based on your messages. The Artifact window will display the latest content. -* **Actions**: Access additional actions for the Artifact, such as copying the content or opening the artifact in full screen, located in the lower right corner of the Artifact. +- **Editing and iterating**: Ask an LLM within the chat to edit or iterate on the content, and these updates will be displayed directly in the Artifact window. You can switch between versions using the version selector at the bottom left of the Artifact. Each edit creates a new version, allowing you to track changes using the version selector. +- **Updates**: Open WebUI may update an existing Artifact based on your messages. The Artifact window will display the latest content. +- **Actions**: Access additional actions for the Artifact, such as copying the content or opening the artifact in full screen, located in the lower right corner of the Artifact. ## Editing Artifacts 1. **Targeted Updates**: Describe what you want changed and where. For example: -* "Change the color of the bar in the chart from blue to red." -* "Update the title of the SVG image to 'New Title'." +- "Change the color of the bar in the chart from blue to red." +- "Update the title of the SVG image to 'New Title'." 2. **Full Rewrites**: Request major changes affecting most of the content for substantial restructuring or multiple section updates. For example: -* "Rewrite this single-page HTML website to be a landing page instead." -* "Redesign this SVG so that it's animated using ThreeJS." +- "Rewrite this single-page HTML website to be a landing page instead." +- "Redesign this SVG so that it's animated using ThreeJS." **Best Practices**: -* Be specific about which part of the Artifact you want to change. -* Reference unique identifying text around your desired change for targeted updates. 
-* Consider whether a small update or full rewrite is more appropriate for your needs. +- Be specific about which part of the Artifact you want to change. +- Reference unique identifying text around your desired change for targeted updates. +- Consider whether a small update or full rewrite is more appropriate for your needs. ## Use Cases Artifacts in Open WebUI enable various teams to create high-quality work products quickly and efficiently. Here are some examples tailored to our platform: -* **Designers**: - * Create interactive SVG graphics for UI/UX design. - * Design single-page HTML websites or landing pages. -* **Developers**: Create simple HTML prototypes or generate SVG icons for projects. -* **Marketers**: - * Design campaign landing pages with performance metrics. - * Create SVG graphics for ad creatives or social media posts. +- **Designers**: + - Create interactive SVG graphics for UI/UX design. + - Design single-page HTML websites or landing pages. +- **Developers**: Create simple HTML prototypes or generate SVG icons for projects. +- **Marketers**: + - Design campaign landing pages with performance metrics. + - Create SVG graphics for ad creatives or social media posts. ## Examples of Projects you can create with Open WebUI's Artifacts @@ -69,35 +69,35 @@ Artifacts in Open WebUI enable various teams and individuals to create high-qual 1. **Interactive Visualizations** -* Components used: ThreeJS, D3.js, and SVG -* Benefits: Create immersive data stories with interactive visualizations. Open WebUI's Artifacts enable you to switch between versions, making it easier to test different visualization approaches and track changes. +- Components used: ThreeJS, D3.js, and SVG +- Benefits: Create immersive data stories with interactive visualizations. Open WebUI's Artifacts enable you to switch between versions, making it easier to test different visualization approaches and track changes. 
Example Project: Build an interactive line chart using ThreeJS to visualize stock prices over time. Update the chart's colors and scales in separate versions to compare different visualization strategies. 2. **Single-Page Web Applications** -* Components used: HTML, CSS, and JavaScript -* Benefits: Develop single-page web applications directly within Open WebUI. Edit and iterate on the content using targeted updates and full rewrites. +- Components used: HTML, CSS, and JavaScript +- Benefits: Develop single-page web applications directly within Open WebUI. Edit and iterate on the content using targeted updates and full rewrites. Example Project: Design a to-do list app with a user interface built using HTML and CSS. Use JavaScript to add interactive functionality. Update the app's layout and UI/UX using targeted updates and full rewrites. 3. **Animated SVG Graphics** -* Components used: SVG and ThreeJS -* Benefits: Create engaging animated SVG graphics for marketing campaigns, social media, or web design. Open WebUI's Artifacts enable you to edit and iterate on the graphics in a single window. +- Components used: SVG and ThreeJS +- Benefits: Create engaging animated SVG graphics for marketing campaigns, social media, or web design. Open WebUI's Artifacts enable you to edit and iterate on the graphics in a single window. Example Project: Design an animated SVG logo for a company brand. Use ThreeJS to add animation effects and Open WebUI's targeted updates to refine the logo's colors and design. 4. **Webpage Prototypes** -* Components used: HTML, CSS, and JavaScript -* Benefits: Build and test webpage prototypes directly within Open WebUI. Switch between versions to compare different design approaches and refine the prototype. +- Components used: HTML, CSS, and JavaScript +- Benefits: Build and test webpage prototypes directly within Open WebUI. Switch between versions to compare different design approaches and refine the prototype. 
Example Project: Develop a prototype for a new e-commerce website using HTML, CSS, and JavaScript. Use Open WebUI's targeted updates to refine the navigation, layout, and UI/UX.

5. **Interactive Storytelling**

-* Components used: HTML, CSS, and JavaScript
-* Benefits: Create interactive stories with scrolling effects, animations, and other interactive elements. Open WebUI's Artifacts enable you to refine the story and test different versions.
+- Components used: HTML, CSS, and JavaScript
+- Benefits: Create interactive stories with scrolling effects, animations, and other interactive elements. Open WebUI's Artifacts enable you to refine the story and test different versions.

Example Project: Build an interactive story about a company's history, using scrolling effects and animations to engage the reader. Use targeted updates to refine the story's narrative and Open WebUI's version selector to test different versions.

diff --git a/docs/features/code-execution/mermaid.md b/docs/features/code-execution/mermaid.md
index b54410856c..c57ef5b1c3 100644
--- a/docs/features/code-execution/mermaid.md
+++ b/docs/features/code-execution/mermaid.md
@@ -13,8 +13,8 @@ Open WebUI supports rendering of visually appealing MermaidJS diagrams, flowchar

To generate a MermaidJS diagram, simply ask an LLM within any chat to create a diagram or chart using MermaidJS. For example, you can ask the LLM to:

-* "Create a flowchart for a simple decision-making process for me using Mermaid. Explain how the flowchart works."
-* "Use Mermaid to visualize a decision tree to determine whether it's suitable to go for a walk outside."
+- "Create a flowchart for a simple decision-making process for me using Mermaid. Explain how the flowchart works."
+- "Use Mermaid to visualize a decision tree to determine whether it's suitable to go for a walk outside."

Note that for the LLM's response to be rendered correctly, it must begin with the word `mermaid` followed by the MermaidJS code.
You can reference the [MermaidJS documentation](https://mermaid.js.org/intro/) to ensure the syntax is correct and provide structured prompts to the LLM to guide it towards generating better MermaidJS syntax.

@@ -28,8 +28,8 @@ If the model generates MermaidJS syntax, but the visualization does not render,

Once your visualization is displayed, you can:

-* Zoom in and out to examine it more closely.
-* Copy the original MermaidJS code used to generate the visualization by clicking the copy button at the top-right corner of the display area.
+- Zoom in and out to examine it more closely.
+- Copy the original MermaidJS code used to generate the visualization by clicking the copy button at the top-right corner of the display area.

### Example

diff --git a/docs/features/code-execution/python.md b/docs/features/code-execution/python.md
index 12d8574b64..cdbecd2c90 100644
--- a/docs/features/code-execution/python.md
+++ b/docs/features/code-execution/python.md
@@ -17,16 +17,16 @@ The Open WebUI frontend includes a self-contained WASM (WebAssembly) Python envi

Pyodide code execution is configured to load only packages configured in scripts/prepare-pyodide.js and then added to "CodeBlock.svelte". The following Pyodide packages are currently supported in Open WebUI:

-* micropip
-* packaging
-* requests
-* beautifulsoup4
-* numpy
-* pandas
-* matplotlib
-* scikit-learn
-* scipy
-* regex
+- micropip
+- packaging
+- requests
+- beautifulsoup4
+- numpy
+- pandas
+- matplotlib
+- scikit-learn
+- scipy
+- regex

These libraries can be used to perform various tasks, such as data manipulation, machine learning, and web scraping. If the package you want to run is not compiled into the Pyodide build we ship with Open WebUI, the package cannot be used.
@@ -36,13 +36,13 @@ To execute Python code, ask an LLM within a chat to write a Python script for yo

## Tips for Using Python Code Execution

-* When writing Python code, keep in mind that the code would be running in a Pyodide environment when executed. You can inform the LLM of this by mentioning "Pyodide environment" when asking for code.
-* Research the Pyodide documentation to understand the capabilities and limitations of the environment.
-* Experiment with different libraries and scripts to explore the possibilities of Python code execution in Open WebUI.
+- When writing Python code, keep in mind that the code will run in a Pyodide environment when executed. You can inform the LLM of this by mentioning "Pyodide environment" when asking for code.
+- Research the Pyodide documentation to understand the capabilities and limitations of the environment.
+- Experiment with different libraries and scripts to explore the possibilities of Python code execution in Open WebUI.

## Pyodide Documentation

-* [Pyodide Documentation](https://pyodide.org/en/stable/)
+- [Pyodide Documentation](https://pyodide.org/en/stable/)

## Code Example

diff --git a/docs/features/document-extraction/apachetika.md b/docs/features/document-extraction/apachetika.md
index c5428e34b2..c1bf5b2553 100644
--- a/docs/features/document-extraction/apachetika.md
+++ b/docs/features/document-extraction/apachetika.md
@@ -14,9 +14,9 @@ This documentation provides a step-by-step guide to integrating Apache Tika with

Prerequisites
------------

-* Open WebUI instance
-* Docker installed on your system
-* Docker network set up for Open WebUI
+- Open WebUI instance
+- Docker installed on your system
+- Docker network set up for Open WebUI

Integration Steps
----------------

@@ -62,13 +62,13 @@ Note that if you choose to use the Docker run command, you'll need to specify th

To use Apache Tika as the context extraction engine in Open WebUI, follow these steps:

-* Log in to your Open WebUI instance.
-* Navigate to the `Admin Panel` settings menu. -* Click on `Settings`. -* Click on the `Documents` tab. -* Change the `Default` content extraction engine dropdown to `Tika`. -* Update the context extraction engine URL to `http://tika:9998`. -* Save the changes. +- Log in to your Open WebUI instance. +- Navigate to the `Admin Panel` settings menu. +- Click on `Settings`. +- Click on the `Documents` tab. +- Change the `Default` content extraction engine dropdown to `Tika`. +- Update the context extraction engine URL to `http://tika:9998`. +- Save the changes. Verifying Apache Tika in Docker ===================================== @@ -145,10 +145,10 @@ Instructions to run the script: ### Prerequisites -* Python 3.x must be installed on your system -* `requests` library must be installed (you can install it using pip: `pip install requests`) -* Apache Tika Docker container must be running (use `docker run -p 9998:9998 apache/tika` command) -* Replace `"test.txt"` with the path to the file you want to send to Apache Tika +- Python 3.x must be installed on your system +- `requests` library must be installed (you can install it using pip: `pip install requests`) +- Apache Tika Docker container must be running (use `docker run -p 9998:9998 apache/tika` command) +- Replace `"test.txt"` with the path to the file you want to send to Apache Tika ### Running the Script @@ -167,18 +167,18 @@ By following these steps, you can verify that Apache Tika is working correctly i Troubleshooting -------------- -* Make sure the Apache Tika service is running and accessible from the Open WebUI instance. -* Check the Docker logs for any errors or issues related to the Apache Tika service. -* Verify that the context extraction engine URL is correctly configured in Open WebUI. +- Make sure the Apache Tika service is running and accessible from the Open WebUI instance. +- Check the Docker logs for any errors or issues related to the Apache Tika service. 
+- Verify that the context extraction engine URL is correctly configured in Open WebUI. Benefits of Integration ---------------------- Integrating Apache Tika with Open WebUI provides several benefits, including: -* **Improved Metadata Extraction**: Apache Tika's advanced metadata extraction capabilities can help you extract accurate and relevant data from your files. -* **Support for Multiple File Formats**: Apache Tika supports a wide range of file formats, making it an ideal solution for organizations that work with diverse file types. -* **Enhanced Content Analysis**: Apache Tika's advanced content analysis capabilities can help you extract valuable insights from your files. +- **Improved Metadata Extraction**: Apache Tika's advanced metadata extraction capabilities can help you extract accurate and relevant data from your files. +- **Support for Multiple File Formats**: Apache Tika supports a wide range of file formats, making it an ideal solution for organizations that work with diverse file types. +- **Enhanced Content Analysis**: Apache Tika's advanced content analysis capabilities can help you extract valuable insights from your files. Conclusion ---------- diff --git a/docs/features/document-extraction/docling.md b/docs/features/document-extraction/docling.md index 7d8dc84913..662a1888df 100644 --- a/docs/features/document-extraction/docling.md +++ b/docs/features/document-extraction/docling.md @@ -14,9 +14,9 @@ This documentation provides a step-by-step guide to integrating Docling with Ope Prerequisites ------------ -* Open WebUI instance -* Docker installed on your system -* Docker network set up for Open WebUI +- Open WebUI instance +- Docker installed on your system +- Docker network set up for Open WebUI Integration Steps ---------------- @@ -35,23 +35,23 @@ docker run --gpus all -p 5001:5001 -e DOCLING_SERVE_ENABLE_UI=true quay.io/docli ### Step 2: Configure Open WebUI to use Docling -* Log in to your Open WebUI instance. 
-* Navigate to the `Admin Panel` settings menu.
-* Click on `Settings`.
-* Click on the `Documents` tab.
-* Change the `Default` content extraction engine dropdown to `Docling`.
-* Update the context extraction engine URL to `http://host.docker.internal:5001`.
-* Save the changes.
+- Log in to your Open WebUI instance.
+- Navigate to the `Admin Panel` settings menu.
+- Click on `Settings`.
+- Click on the `Documents` tab.
+- Change the `Default` content extraction engine dropdown to `Docling`.
+- Update the context extraction engine URL to `http://host.docker.internal:5001`.
+- Save the changes.

### (optional) Step 3: Configure Docling's picture description features

-* on the `Documents` tab:
-* Activate `Describe Pictures in Documents` button.
-* Below, choose a description mode: `local` or `API`
- * `local`: vision model will run in the same context as Docling itself
- * `API`: Docling will make a call to an external service/container (i.e. Ollama)
-* fill in an **object value** as described at https://github.com/docling-project/docling-serve/blob/main/docs/usage.md#picture-description-options
-* Save the changes.
+- On the `Documents` tab:
+- Activate the `Describe Pictures in Documents` button.
+- Below, choose a description mode: `local` or `API`
+ - `local`: vision model will run in the same context as Docling itself
+ - `API`: Docling will make a call to an external service/container (e.g., Ollama)
+- Fill in an **object value** as described at https://github.com/docling-project/docling-serve/blob/main/docs/usage.md#picture-description-options
+- Save the changes.

#### Make sure the object value is a valid JSON! Working examples below

@@ -98,12 +98,12 @@ This command starts the Docling container and maps port 5001 from the container

### 2. Verify the Server is Running

-* Go to `http://127.0.0.1:5001/ui/`
-* The URL should lead to a UI to use Docling
+- Go to `http://127.0.0.1:5001/ui/`
+- The URL should lead to a UI to use Docling

### 3.
Verify the Integration

-* You can try uploading some files via the UI and it should return output in MD format or your desired format
+- You can try uploading some files via the UI and it should return output in MD format or your desired format

### Conclusion

diff --git a/docs/features/document-extraction/mistral-ocr.md b/docs/features/document-extraction/mistral-ocr.md
index 493b1057fd..17a7bfbe58 100644
--- a/docs/features/document-extraction/mistral-ocr.md
+++ b/docs/features/document-extraction/mistral-ocr.md
@@ -14,32 +14,32 @@ This documentation provides a step-by-step guide to integrating Mistral OCR with

Prerequisites
------------

-* Open WebUI instance
-* Mistral AI account
+- Open WebUI instance
+- Mistral AI account

Integration Steps
----------------

### Step 1: Sign Up or Login to Mistral AI console

-* Go to `https://console.mistral.ai`
-* Follow the instructions as instructed on the process
-* After successful authorization, you should be welcomed to the Console Home
+- Go to `https://console.mistral.ai`
+- Follow the on-screen instructions to sign up or log in
+- After successful authorization, you should be welcomed to the Console Home

### Step 2: Generate an API key

-* Go to `API Keys` or `https://console.mistral.ai/api-keys`
-* Create a new key and make sure to copy it
+- Go to `API Keys` or `https://console.mistral.ai/api-keys`
+- Create a new key and make sure to copy it

### Step 3: Configure Open WebUI to use Mistral OCR

-* Log in to your Open WebUI instance.
-* Navigate to the `Admin Panel` settings menu.
-* Click on `Settings`.
-* Click on the `Documents` tab.
-* Change the `Default` content extraction engine dropdown to `Mistral OCR`.
-* Paste the API Key on the field
-* Save the Admin Panel.
+- Log in to your Open WebUI instance.
+- Navigate to the `Admin Panel` settings menu.
+- Click on `Settings`.
+- Click on the `Documents` tab.
+- Change the `Default` content extraction engine dropdown to `Mistral OCR`.
+- Paste the API key into the field
+- Save the changes.

Verifying Mistral OCR
=====================================

diff --git a/docs/features/webhooks.md b/docs/features/webhooks.md
index da7b1ee3a3..8eaed4850b 100644
--- a/docs/features/webhooks.md
+++ b/docs/features/webhooks.md
@@ -50,11 +50,11 @@ New user signed up: Tim

Troubleshooting
--------------

-* Make sure the webhook URL is correct and properly formatted.
-* Verify that the webhook service is enabled and configured correctly.
-* Check the Open WebUI logs for any errors related to the webhook.
-* Verify the connection hasn't been interrupted or blocked by a firewall or proxy.
-* The webhook server could be temporarily unavailable or experiencing high latency.
-* If provided through the webhook service, verify if the Webhook API key is invalid, expired, or revoked.
+- Make sure the webhook URL is correct and properly formatted.
+- Verify that the webhook service is enabled and configured correctly.
+- Check the Open WebUI logs for any errors related to the webhook.
+- Verify the connection hasn't been interrupted or blocked by a firewall or proxy.
+- The webhook server could be temporarily unavailable or experiencing high latency.
+- If provided through the webhook service, verify if the Webhook API key is invalid, expired, or revoked.

Note: The webhook feature in Open WebUI is still evolving, and we plan to add more features and event types in the future.
diff --git a/docs/features/workspace/groups.md b/docs/features/workspace/groups.md index 836fc49c8e..1e40f2e641 100644 --- a/docs/features/workspace/groups.md +++ b/docs/features/workspace/groups.md @@ -5,9 +5,9 @@ title: "🔐 Groups" Groups allow administrators to -* assign permissions to multiple users at once, simplifying access management -* limit access to specific resources (Models, Tools, etc) by setting their access to "private" then opening access to specific groups -* Specify access to a resource for a group to either "read" or "write" (write access implies read) +- assign permissions to multiple users at once, simplifying access management +- limit access to specific resources (Models, Tools, etc) by setting their access to "private" then opening access to specific groups +- Specify access to a resource for a group to either "read" or "write" (write access implies read) :::info Note that the permissions model is permissive. If a user is a member of two groups that define different permissions for a resource, the most permissive permission is applied. @@ -17,29 +17,29 @@ Note that the permissions model is permissive. 
If a user is a member of two groups that define different permissions for a resource, the most permissive permission is applied.
:::

Each group in Open WebUI contains:

-* A unique identifier
-* Name and description
-* Owner/creator reference
-* List of member user IDs
-* Permission configuration
-* Additional metadata
+- A unique identifier
+- Name and description
+- Owner/creator reference
+- List of member user IDs
+- Permission configuration
+- Additional metadata

### Group Management

Groups can be:

-* **Created manually** by administrators through the user interface
-* **Synced automatically** from OAuth providers when `ENABLE_OAUTH_GROUP_MANAGEMENT` is enabled
-* **Created automatically** from OAuth claims when both `ENABLE_OAUTH_GROUP_MANAGEMENT` and`ENABLE_OAUTH_GROUP_CREATION`
+- **Created manually** by administrators through the user interface
+- **Synced automatically** from OAuth providers when `ENABLE_OAUTH_GROUP_MANAGEMENT` is enabled
+- **Created automatically** from OAuth claims when both `ENABLE_OAUTH_GROUP_MANAGEMENT` and `ENABLE_OAUTH_GROUP_CREATION`
are enabled

### OAuth Group Integration

When OAuth group management is enabled, user group memberships are synchronized with groups received in OAuth claims:

-* Users are added to Open WebUI groups that match their OAuth claims
-* Users are removed from groups not present in their OAuth claims
-* With `ENABLE_OAUTH_GROUP_CREATION` enabled, groups from OAuth claims that don't exist in Open WebUI are automatically
+- Users are added to Open WebUI groups that match their OAuth claims
+- Users are removed from groups not present in their OAuth claims
+- With `ENABLE_OAUTH_GROUP_CREATION` enabled, groups from OAuth claims that don't exist in Open WebUI are automatically
created

## Group Permissions

diff --git a/docs/features/workspace/models.md b/docs/features/workspace/models.md
index f92295bacc..6f1804ca1b 100644
--- a/docs/features/workspace/models.md
+++ b/docs/features/workspace/models.md
@@ -9,32 +9,32 @@ The `Models` section of the `Workspace` within Open WebUI is a powerful tool tha

From the
`Models` section, you can perform the following actions on your modelfiles: -* **Edit**: Dive into the details of your modelfile and make changes to its character and more. -* **Clone**: Create a copy of a modelfile, which will be appended with `-clone` to the cloned `Model ID`. Note that you cannot clone a base model; you must create a model first before cloning it. -* **Share**: Share your modelfile with the Open WebUI community by clicking the `Share` button, which will redirect you to [https://openwebui.com/models/create](https://openwebui.com/models/create). -* **Export**: Download the modelfile's `.json` export to your PC. -* **Hide**: Hide the modelfile from the model selector dropdown within chats. +- **Edit**: Dive into the details of your modelfile and make changes to its character and more. +- **Clone**: Create a copy of a modelfile, which will be appended with `-clone` to the cloned `Model ID`. Note that you cannot clone a base model; you must create a model first before cloning it. +- **Share**: Share your modelfile with the Open WebUI community by clicking the `Share` button, which will redirect you to [https://openwebui.com/models/create](https://openwebui.com/models/create). +- **Export**: Download the modelfile's `.json` export to your PC. +- **Hide**: Hide the modelfile from the model selector dropdown within chats. ### Modelfile Editing When editing a modelfile, you can customize the following settings: -* **Avatar Photo**: Upload an avatar photo to represent your modelfile. -* **Model Name**: Change the name of your modelfile. -* **System Prompt**: Provide an optional system prompt for your modelfile. -* **Model Parameters**: Adjust the parameters of your modelfile. -* **Prompt Suggestions**: Add prompts that will be displayed on a fresh new chat page. -* **Documents**: Add documents to the modelfile (always RAG [Retrieval Augmented Generation]). 
-* **Tools, Filters, and Actions**: Select the tools, filters, and actions that will be available to the modelfile. -* **Vision**: Toggle to enable `Vision` for multi-modals. -* **Tags**: Add tags to the modelfile that will be displayed beside the model name in the model selector dropdown. +- **Avatar Photo**: Upload an avatar photo to represent your modelfile. +- **Model Name**: Change the name of your modelfile. +- **System Prompt**: Provide an optional system prompt for your modelfile. +- **Model Parameters**: Adjust the parameters of your modelfile. +- **Prompt Suggestions**: Add prompts that will be displayed on a fresh new chat page. +- **Documents**: Add documents to the modelfile (always RAG [Retrieval Augmented Generation]). +- **Tools, Filters, and Actions**: Select the tools, filters, and actions that will be available to the modelfile. +- **Vision**: Toggle to enable `Vision` for multi-modal models. +- **Tags**: Add tags to the modelfile that will be displayed beside the model name in the model selector dropdown. ### Model Discovery and Import/Export The `Models` section also includes features for importing and exporting models: -* **Import Models**: Use this button to import models from a .json file or other sources. -* **Export Models**: Use this button to export all your modelfiles in a single .json file. +- **Import Models**: Use this button to import models from a .json file or other sources. +- **Export Models**: Use this button to export all your modelfiles in a single .json file. To download models, navigate to the **Ollama Settings** in the Connections tab. Alternatively, you can also download models directly by typing a command like `ollama run hf.co/[model creator]/[model name]` in the model selection dropdown. @@ -44,12 +44,12 @@ This action will create a button labeled "Pull [Model Name]" for downloading.
**Example**: Switching between **Mistral**, **LLaVA**, and **GPT-3.5** in a Multi-Stage Task -* **Scenario**: A multi-stage conversation involves different task types, such as starting with a simple FAQ, interpreting an image, and then generating a creative response. -* **Reason for Switching**: The user can leverage each model's specific strengths for each stage: - * **Mistral** for general questions to reduce computation time and costs. - * **LLaVA** for visual tasks to gain insights from image-based data. - * **GPT-3.5** for generating more sophisticated and nuanced language output. -* **Process**: The user switches between models, depending on the task type, to maximize efficiency and response quality. +- **Scenario**: A multi-stage conversation involves different task types, such as starting with a simple FAQ, interpreting an image, and then generating a creative response. +- **Reason for Switching**: The user can leverage each model's specific strengths for each stage: + - **Mistral** for general questions to reduce computation time and costs. + - **LLaVA** for visual tasks to gain insights from image-based data. + - **GPT-3.5** for generating more sophisticated and nuanced language output. +- **Process**: The user switches between models, depending on the task type, to maximize efficiency and response quality. **How To**: 1. **Select the Model**: Within the chat interface, select the desired models from the model switcher dropdown. You can select up to two models simultaneously, and both responses will be generated. You can then navigate between them by using the back and forth arrows. 
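The `Models` section above describes the `Clone` action as appending `-clone` to the cloned `Model ID`, and `Export`/`Import` as working with `.json` files. The sketch below mimics that clone-and-export behavior offline; the field names (`id`, `name`, `params`) are illustrative assumptions, since the actual export schema is not shown in this patch.

```python
import json

# Hypothetical modelfile export: the real schema produced by the
# "Export Models" button is not shown in this patch, so these field
# names are illustrative only.
export = [
    {"id": "mistral-faq", "name": "Mistral FAQ Helper", "params": {"temperature": 0.7}},
    {"id": "llava-vision", "name": "LLaVA Vision", "params": {"temperature": 0.2}},
]

def clone_model(models, model_id):
    """Mimic the 'Clone' action: copy a modelfile and append '-clone' to its ID."""
    source = next(m for m in models if m["id"] == model_id)
    copy = json.loads(json.dumps(source))  # deep copy via JSON round-trip
    copy["id"] = f"{model_id}-clone"
    return copy

cloned = clone_model(export, "mistral-faq")
print(cloned["id"])  # mistral-faq-clone
```

Because the copy is made through a JSON round-trip, the clone shares no state with the original entry, matching the documented behavior that cloning produces an independent modelfile.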
diff --git a/docs/features/workspace/permissions.md b/docs/features/workspace/permissions.md index a35abc9ce3..39c426b6a9 100644 --- a/docs/features/workspace/permissions.md +++ b/docs/features/workspace/permissions.md @@ -15,29 +15,29 @@ Administrators can manage permissions in the following ways: Workspace permissions control access to core components of the Open WebUI platform: -* **Models Access**: Toggle to allow users to access and manage custom models. (Environment variable: `USER_PERMISSIONS_WORKSPACE_MODELS_ACCESS`) -* **Knowledge Access**: Toggle to allow users to access and manage knowledge bases. (Environment variable: `USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS`) -* **Prompts Access**: Toggle to allow users to access and manage saved prompts. (Environment variable: `USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS`) -* **Tools Access**: Toggle to allow users to access and manage tools. (Environment variable: `USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS`) *Note: Enabling this gives users the ability to upload arbitrary code to the server.* +- **Models Access**: Toggle to allow users to access and manage custom models. (Environment variable: `USER_PERMISSIONS_WORKSPACE_MODELS_ACCESS`) +- **Knowledge Access**: Toggle to allow users to access and manage knowledge bases. (Environment variable: `USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS`) +- **Prompts Access**: Toggle to allow users to access and manage saved prompts. (Environment variable: `USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS`) +- **Tools Access**: Toggle to allow users to access and manage tools. (Environment variable: `USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS`) *Note: Enabling this gives users the ability to upload arbitrary code to the server.* ## Chat Permissions Chat permissions determine what actions users can perform within chat conversations: -* **Allow Chat Controls**: Toggle to enable access to chat control options. 
-* **Allow File Upload**: Toggle to permit users to upload files during chat sessions. (Environment variable: `USER_PERMISSIONS_CHAT_FILE_UPLOAD`) -* **Allow Chat Delete**: Toggle to permit users to delete chat conversations. (Environment variable: `USER_PERMISSIONS_CHAT_DELETE`) -* **Allow Chat Edit**: Toggle to permit users to edit messages in chat conversations. (Environment variable: `USER_PERMISSIONS_CHAT_EDIT`) -* **Allow Temporary Chat**: Toggle to permit users to create temporary chat sessions. (Environment variable: `USER_PERMISSIONS_CHAT_TEMPORARY`) +- **Allow Chat Controls**: Toggle to enable access to chat control options. +- **Allow File Upload**: Toggle to permit users to upload files during chat sessions. (Environment variable: `USER_PERMISSIONS_CHAT_FILE_UPLOAD`) +- **Allow Chat Delete**: Toggle to permit users to delete chat conversations. (Environment variable: `USER_PERMISSIONS_CHAT_DELETE`) +- **Allow Chat Edit**: Toggle to permit users to edit messages in chat conversations. (Environment variable: `USER_PERMISSIONS_CHAT_EDIT`) +- **Allow Temporary Chat**: Toggle to permit users to create temporary chat sessions. (Environment variable: `USER_PERMISSIONS_CHAT_TEMPORARY`) ## Features Permissions Features permissions control access to specialized capabilities within Open WebUI: -* **Web Search**: Toggle to allow users to perform web searches during chat sessions. (Environment variable: `ENABLE_RAG_WEB_SEARCH`) -* **Image Generation**: Toggle to allow users to generate images. (Environment variable: `ENABLE_IMAGE_GENERATION`) -* **Code Interpreter**: Toggle to allow users to use the code interpreter feature. (Environment variable: `USER_PERMISSIONS_FEATURES_CODE_INTERPRETER`) -* **Direct Tool Servers**: Toggle to allow users to connect directly to tool servers. (Environment variable: `USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS`) +- **Web Search**: Toggle to allow users to perform web searches during chat sessions. 
(Environment variable: `ENABLE_RAG_WEB_SEARCH`) +- **Image Generation**: Toggle to allow users to generate images. (Environment variable: `ENABLE_IMAGE_GENERATION`) +- **Code Interpreter**: Toggle to allow users to use the code interpreter feature. (Environment variable: `USER_PERMISSIONS_FEATURES_CODE_INTERPRETER`) +- **Direct Tool Servers**: Toggle to allow users to connect directly to tool servers. (Environment variable: `USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS`) ## Default Permission Settings @@ -45,25 +45,25 @@ By default, Open WebUI applies the following permission settings: **Workspace Permissions**: -* Models Access: Disabled (`USER_PERMISSIONS_WORKSPACE_MODELS_ACCESS=False`) -* Knowledge Access: Disabled (`USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS=False`) -* Prompts Access: Disabled (`USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS=False`) -* Tools Access: Disabled (`USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS=False`) +- Models Access: Disabled (`USER_PERMISSIONS_WORKSPACE_MODELS_ACCESS=False`) +- Knowledge Access: Disabled (`USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS=False`) +- Prompts Access: Disabled (`USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS=False`) +- Tools Access: Disabled (`USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS=False`) **Chat Permissions**: -* Allow Chat Controls: Enabled -* Allow File Upload: Enabled (`USER_PERMISSIONS_CHAT_FILE_UPLOAD=True`) -* Allow Chat Delete: Enabled (`USER_PERMISSIONS_CHAT_DELETE=True`) -* Allow Chat Edit: Enabled (`USER_PERMISSIONS_CHAT_EDIT=True`) -* Allow Temporary Chat: Enabled (`USER_PERMISSIONS_CHAT_TEMPORARY=True`) +- Allow Chat Controls: Enabled +- Allow File Upload: Enabled (`USER_PERMISSIONS_CHAT_FILE_UPLOAD=True`) +- Allow Chat Delete: Enabled (`USER_PERMISSIONS_CHAT_DELETE=True`) +- Allow Chat Edit: Enabled (`USER_PERMISSIONS_CHAT_EDIT=True`) +- Allow Temporary Chat: Enabled (`USER_PERMISSIONS_CHAT_TEMPORARY=True`) **Features Permissions**: -* Web Search: Enabled (`ENABLE_RAG_WEB_SEARCH=True`) -* Image 
Generation: Enabled (`ENABLE_IMAGE_GENERATION=True`) -* Code Interpreter: Enabled (`USER_PERMISSIONS_FEATURES_CODE_INTERPRETER`) -* Direct Tool Servers: Disabled (`USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS=False`) +- Web Search: Enabled (`ENABLE_RAG_WEB_SEARCH=True`) +- Image Generation: Enabled (`ENABLE_IMAGE_GENERATION=True`) +- Code Interpreter: Enabled (`USER_PERMISSIONS_FEATURES_CODE_INTERPRETER=True`) +- Direct Tool Servers: Disabled (`USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS=False`) Administrators can change the default permissions in the user interface under "users" in the admin panel. diff --git a/docs/features/workspace/prompts.md b/docs/features/workspace/prompts.md index d21e8147ad..1aaa9e9b59 100644 --- a/docs/features/workspace/prompts.md +++ b/docs/features/workspace/prompts.md @@ -9,19 +9,19 @@ The `Prompts` section of the `Workspace` within Open WebUI enables users to crea The Prompts interface provides several key features for managing your custom prompts: -* **Create**: Design new prompts with customizable titles, access levels, and content. -* **Share**: Share prompts with other users based on configured access permissions. -* **Access Control**: Set visibility and usage permissions for each prompt (refer to [Permissions](./permissions.md) for more details). -* **Slash Commands**: Quickly access prompts using custom slash commands during chat sessions. +- **Create**: Design new prompts with customizable titles, access levels, and content. +- **Share**: Share prompts with other users based on configured access permissions. +- **Access Control**: Set visibility and usage permissions for each prompt (refer to [Permissions](./permissions.md) for more details). +- **Slash Commands**: Quickly access prompts using custom slash commands during chat sessions. ### Creating and Editing Prompts When creating or editing a prompt, you can configure the following settings: -* **Title**: Give your prompt a descriptive name for easy identification.
-* **Access**: Set the access level to control who can view and use the prompt. -* **Command**: Define a slash command that will trigger the prompt (e.g., `/summarize`). -* **Prompt Content**: Write the actual prompt text that will be sent to the model. +- **Title**: Give your prompt a descriptive name for easy identification. +- **Access**: Set the access level to control who can view and use the prompt. +- **Command**: Define a slash command that will trigger the prompt (e.g., `/summarize`). +- **Prompt Content**: Write the actual prompt text that will be sent to the model. ### Prompt Variables @@ -29,17 +29,17 @@ Open WebUI supports two kinds of variables to make your prompts more dynamic and **System Variables** are automatically replaced with their corresponding value when the prompt is used. They are useful for inserting dynamic information like the current date or user details. -* **Clipboard Content**: Use `{{CLIPBOARD}}` to insert content from your clipboard. -* **Date and Time**: - * `{{CURRENT_DATE}}`: Current date - * `{{CURRENT_DATETIME}}`: Current date and time - * `{{CURRENT_TIME}}`: Current time - * `{{CURRENT_TIMEZONE}}`: Current timezone - * `{{CURRENT_WEEKDAY}}`: Current day of the week -* **User Information**: - * `{{USER_NAME}}`: Current user's name - * `{{USER_LANGUAGE}}`: User's selected language - * `{{USER_LOCATION}}`: User's location (requires HTTPS and Settings > Interface toggle) +- **Clipboard Content**: Use `{{CLIPBOARD}}` to insert content from your clipboard. 
+- **Date and Time**: + - `{{CURRENT_DATE}}`: Current date + - `{{CURRENT_DATETIME}}`: Current date and time + - `{{CURRENT_TIME}}`: Current time + - `{{CURRENT_TIMEZONE}}`: Current timezone + - `{{CURRENT_WEEKDAY}}`: Current day of the week +- **User Information**: + - `{{USER_NAME}}`: Current user's name + - `{{USER_LANGUAGE}}`: User's selected language + - `{{USER_LOCATION}}`: User's location (requires HTTPS and Settings > Interface toggle) **Custom Input Variables** transform your prompts into interactive templates. When you use a prompt containing these variables, a modal window will automatically appear, allowing you to fill in your values. This is extremely powerful for creating complex, reusable prompts that function like forms. See the guidelines below for a full explanation. @@ -47,13 +47,13 @@ By leveraging custom input variables, you can move beyond static text and build ### Variable Usage Guidelines -* Enclose all variables with double curly braces: `{{variable}}` -* **All custom input variables are optional by default** - users can leave fields blank when filling out the form -* Use the `:required` flag to make specific variables mandatory when necessary -* The `{{USER_LOCATION}}` system variable requires: - * A secure HTTPS connection - * Enabling the feature in `Settings` > `Interface` -* The `{{CLIPBOARD}}` system variable requires clipboard access permission from your device +- Enclose all variables with double curly braces: `{{variable}}` +- **All custom input variables are optional by default** - users can leave fields blank when filling out the form +- Use the `:required` flag to make specific variables mandatory when necessary +- The `{{USER_LOCATION}}` system variable requires: + - A secure HTTPS connection + - Enabling the feature in `Settings` > `Interface` +- The `{{CLIPBOARD}}` system variable requires clipboard access permission from your device --- @@ -72,12 +72,12 @@ By leveraging custom input variables, you can move beyond static text 
and build There are two ways to define a custom variable: 1. **Simple Input**: `{{variable_name}}` - * This creates a standard, single-line `text` type input field in the popup window. - * The field will be optional by default. + - This creates a standard, single-line `text` type input field in the popup window. + - The field will be optional by default. 2. **Typed Input**: `{{variable_name | [type][:property="value"]}}` - * This allows you to specify the type of input field (e.g., a dropdown, a date picker) and configure its properties. - * The field will be optional by default unless you add the `:required` flag. + - This allows you to specify the type of input field (e.g., a dropdown, a date picker) and configure its properties. + - The field will be optional by default unless you add the `:required` flag. **Required vs Optional Variables** @@ -92,9 +92,9 @@ To make a variable **required** (mandatory), add the `:required` flag: When a field is marked as required: -* The form will display a visual indicator (asterisk) next to the field label -* Users cannot submit the form without providing a value -* Browser validation will prevent form submission if required fields are empty +- The form will display a visual indicator (asterisk) next to the field label +- Users cannot submit the form without providing a value +- Browser validation will prevent form submission if required fields are empty **Input Types Overview** @@ -124,8 +124,8 @@ You can specify different input types to build rich, user-friendly forms. Here i Create a reusable prompt where the article content is required but additional parameters are optional. -* **Command:** `/summarize_article` -* **Prompt Content:** +- **Command:** `/summarize_article` +- **Prompt Content:** ``` Please summarize the following article. 
{{article_text | textarea:placeholder="Paste the full text of the article here...":required}} @@ -143,8 +143,8 @@ Create a reusable prompt where the article content is required but additional pa This prompt ensures critical information is captured while allowing optional details. -* **Command:** `/bug_report` -* **Prompt Content:** +- **Command:** `/bug_report` +- **Prompt Content:** ``` Generate a bug report with the following details: @@ -166,8 +166,8 @@ This prompt ensures critical information is captured while allowing optional det This prompt generates tailored content with required core information and optional customizations. -* **Command:** `/social_post` -* **Prompt Content:** +- **Command:** `/social_post` +- **Prompt Content:** ``` Generate a social media post for {{platform | select:options=["LinkedIn","Twitter","Facebook","Instagram"]:required}}. @@ -186,8 +186,8 @@ This prompt generates tailored content with required core information and option Generate structured meeting minutes with required basics and optional details. -* **Command:** `/meeting_minutes` -* **Prompt Content:** +- **Command:** `/meeting_minutes` +- **Prompt Content:** ``` # Meeting Minutes @@ -217,8 +217,8 @@ Generate structured meeting minutes with required basics and optional details. A flexible template for reviewing various types of content. -* **Command:** `/content_review` -* **Prompt Content:** +- **Command:** `/content_review` +- **Prompt Content:** ``` Please review the following {{content_type | select:options=["Blog Post","Marketing Copy","Documentation","Email","Presentation"]:required}}: @@ -238,26 +238,26 @@ A flexible template for reviewing various types of content. Prompt management is controlled by the following permission settings: -* **Prompts Access**: Users need the `USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS` permission to create and manage prompts. 
-* For detailed information about configuring permissions, refer to the [Permissions documentation](./permissions.md). +- **Prompts Access**: Users need the `USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS` permission to create and manage prompts. +- For detailed information about configuring permissions, refer to the [Permissions documentation](./permissions.md). ### Best Practices -* Use clear, descriptive titles for your prompts -* Create intuitive slash commands that reflect the prompt's purpose -* **Design with flexibility in mind**: Mark only truly essential fields as required, leaving optional fields for enhanced customization -* For custom variables, use clear names (e.g., `{{your_name}}` instead of `{{var1}}`) and descriptive `placeholder` text to make templates easy to understand -* **Provide sensible defaults** for optional fields where appropriate to speed up form completion -* **Use the required flag strategically** - only require information that is absolutely necessary for the prompt to function properly -* Document any specific requirements or expected inputs in the prompt description -* Test prompts with different variable combinations, including scenarios where optional fields are left blank -* Consider access levels carefully when sharing prompts with other users - public sharing means that it will appear automatically for all users when they hit `/` in a chat, so you want to avoid creating too many -* **Consider user workflows**: Think about which information users will always have versus what they might want to customize occasionally +- Use clear, descriptive titles for your prompts +- Create intuitive slash commands that reflect the prompt's purpose +- **Design with flexibility in mind**: Mark only truly essential fields as required, leaving optional fields for enhanced customization +- For custom variables, use clear names (e.g., `{{your_name}}` instead of `{{var1}}`) and descriptive `placeholder` text to make templates easy to understand +- 
**Provide sensible defaults** for optional fields where appropriate to speed up form completion +- **Use the required flag strategically** - only require information that is absolutely necessary for the prompt to function properly +- Document any specific requirements or expected inputs in the prompt description +- Test prompts with different variable combinations, including scenarios where optional fields are left blank +- Consider access levels carefully when sharing prompts with other users - public sharing means that it will appear automatically for all users when they hit `/` in a chat, so you want to avoid creating too many +- **Consider user workflows**: Think about which information users will always have versus what they might want to customize occasionally ### Migration Notes If you have existing prompts created before this update, they will continue to work as before. However, note that: -* All existing variables are now treated as optional by default -* If you want to maintain required behavior for critical fields, edit your prompts to add the `:required` flag to those variables -* This change provides better user experience by allowing flexible usage of prompt templates +- All existing variables are now treated as optional by default +- If you want to maintain required behavior for critical fields, edit your prompts to add the `:required` flag to those variables +- This change provides better user experience by allowing flexible usage of prompt templates diff --git a/docs/features/workspace/roles.md b/docs/features/workspace/roles.md index bea8c1262c..47ae67185f 100644 --- a/docs/features/workspace/roles.md +++ b/docs/features/workspace/roles.md @@ -13,9 +13,9 @@ Open WebUI implements a structured role-based access control system with three p ### Role Assignment -* **First User:** The first account created on a new Open WebUI instance automatically receives Administrator +- **First User:** The first account created on a new Open WebUI instance 
automatically receives Administrator privileges. -* **Subsequent Users:** New user registrations are assigned a default role based on the `DEFAULT_USER_ROLE` +- **Subsequent Users:** New user registrations are assigned a default role based on the `DEFAULT_USER_ROLE` configuration. The default role for new registrations can be configured using the `DEFAULT_USER_ROLE` environment variable: @@ -30,36 +30,36 @@ When set to "pending", new users must be manually approved by an administrator b Groups allow administrators to -* assign permissions to multiple users at once, simplifying access management -* limit access to specific resources (Models, Tools, etc) by setting their access to "private" then opening access to +- assign permissions to multiple users at once, simplifying access management +- limit access to specific resources (Models, Tools, etc) by setting their access to "private" then opening access to specific groups -* Group access to a resource can be set as "read" or "write" +- Group access to a resource can be set as "read" or "write" ### Group Structure Each group in Open WebUI contains: -* A unique identifier -* Name and description -* Owner/creator reference -* List of member user IDs -* Permission configuration -* Additional metadata +- A unique identifier +- Name and description +- Owner/creator reference +- List of member user IDs +- Permission configuration +- Additional metadata ### Group Management Groups can be: -* **Created manually** by administrators through the user interface -* **Synced automatically** from OAuth providers when `ENABLE_OAUTH_GROUP_MANAGEMENT` is enabled -* **Created automatically** from OAuth claims when both `ENABLE_OAUTH_GROUP_MANAGEMENT` and`ENABLE_OAUTH_GROUP_CREATION` +- **Created manually** by administrators through the user interface +- **Synced automatically** from OAuth providers when `ENABLE_OAUTH_GROUP_MANAGEMENT` is enabled +- **Created automatically** from OAuth claims when both `ENABLE_OAUTH_GROUP_MANAGEMENT` 
and `ENABLE_OAUTH_GROUP_CREATION` are enabled ### OAuth Group Integration When OAuth group management is enabled, user group memberships are synchronized with groups received in OAuth claims: -* Users are added to Open WebUI groups that match their OAuth claims -* Users are removed from groups not present in their OAuth claims -* With `ENABLE_OAUTH_GROUP_CREATION` enabled, groups from OAuth claims that don't exist in Open WebUI are automatically +- Users are added to Open WebUI groups that match their OAuth claims +- Users are removed from groups not present in their OAuth claims +- With `ENABLE_OAUTH_GROUP_CREATION` enabled, groups from OAuth claims that don't exist in Open WebUI are automatically created diff --git a/docs/getting-started/advanced-topics/https-encryption.md b/docs/getting-started/advanced-topics/https-encryption.md index 5045fda12d..908cae7b63 100644 --- a/docs/getting-started/advanced-topics/https-encryption.md +++ b/docs/getting-started/advanced-topics/https-encryption.md @@ -11,39 +11,39 @@ This guide explains how to enable HTTPS encryption for your Open WebUI instance. HTTPS (Hypertext Transfer Protocol Secure) encrypts communication between your web browser and the Open WebUI server. This encryption provides several key benefits: -* **Privacy and Security:** Protects sensitive data like usernames, passwords, and chat content from eavesdropping and interception, especially on public networks. -* **Integrity:** Ensures that data transmitted between the browser and server is not tampered with during transit. -* **Feature Compatibility:** **Crucially, modern browsers block access to certain "secure context" features, such as microphone access for Voice Calls, unless the website is served over HTTPS.** -* **Trust and User Confidence:** HTTPS is indicated by a padlock icon in the browser address bar, building user trust and confidence in your Open WebUI deployment.
+- **Privacy and Security:** Protects sensitive data like usernames, passwords, and chat content from eavesdropping and interception, especially on public networks. +- **Integrity:** Ensures that data transmitted between the browser and server is not tampered with during transit. +- **Feature Compatibility:** **Crucially, modern browsers block access to certain "secure context" features, such as microphone access for Voice Calls, unless the website is served over HTTPS.** +- **Trust and User Confidence:** HTTPS is indicated by a padlock icon in the browser address bar, building user trust and confidence in your Open WebUI deployment. **When is HTTPS Especially Important?** -* **Internet-Facing Deployments:** If your Open WebUI instance is accessible from the public internet, HTTPS is strongly recommended to protect against security risks. -* **Voice Call Feature:** If you plan to use the Voice Call feature in Open WebUI, HTTPS is **mandatory**. -* **Sensitive Data Handling:** If you are concerned about the privacy of user data, enabling HTTPS is a crucial security measure. +- **Internet-Facing Deployments:** If your Open WebUI instance is accessible from the public internet, HTTPS is strongly recommended to protect against security risks. +- **Voice Call Feature:** If you plan to use the Voice Call feature in Open WebUI, HTTPS is **mandatory**. +- **Sensitive Data Handling:** If you are concerned about the privacy of user data, enabling HTTPS is a crucial security measure. ## Choosing the Right HTTPS Solution for You 🛠️ The best HTTPS solution depends on your existing infrastructure and technical expertise. Here are some common and effective options: -* **Cloud Providers (e.g., AWS, Google Cloud, Azure):** - * **Load Balancers:** Cloud providers typically offer managed load balancers (like AWS Elastic Load Balancer) that can handle HTTPS termination (encryption/decryption) for you. This is often the most straightforward and scalable approach in cloud environments. 
-* **Docker Container Environments:** - * **Reverse Proxies (Nginx, Traefik, Caddy):** Popular reverse proxies like Nginx, Traefik, and Caddy are excellent choices for managing HTTPS in Dockerized deployments. They can automatically obtain and renew SSL/TLS certificates (e.g., using Let's Encrypt) and handle HTTPS termination. - * **Nginx:** Highly configurable and widely used. - * **Traefik:** Designed for modern microservices and container environments, with automatic configuration and Let's Encrypt integration. - * **Caddy:** Focuses on ease of use and automatic HTTPS configuration. -* **Cloudflare:** - * **Simplified HTTPS:** Cloudflare provides a CDN (Content Delivery Network) and security services, including very easy HTTPS setup. It often requires minimal server-side configuration changes and is suitable for a wide range of deployments. -* **Ngrok:** - * **Local Development HTTPS:** Ngrok is a convenient tool for quickly exposing your local development server over HTTPS. It's particularly useful for testing features that require HTTPS (like Voice Calls) during development and for demos. **Not recommended for production deployments.** +- **Cloud Providers (e.g., AWS, Google Cloud, Azure):** + - **Load Balancers:** Cloud providers typically offer managed load balancers (like AWS Elastic Load Balancer) that can handle HTTPS termination (encryption/decryption) for you. This is often the most straightforward and scalable approach in cloud environments. +- **Docker Container Environments:** + - **Reverse Proxies (Nginx, Traefik, Caddy):** Popular reverse proxies like Nginx, Traefik, and Caddy are excellent choices for managing HTTPS in Dockerized deployments. They can automatically obtain and renew SSL/TLS certificates (e.g., using Let's Encrypt) and handle HTTPS termination. + - **Nginx:** Highly configurable and widely used. + - **Traefik:** Designed for modern microservices and container environments, with automatic configuration and Let's Encrypt integration. 
+ - **Caddy:** Focuses on ease of use and automatic HTTPS configuration. +- **Cloudflare:** + - **Simplified HTTPS:** Cloudflare provides a CDN (Content Delivery Network) and security services, including very easy HTTPS setup. It often requires minimal server-side configuration changes and is suitable for a wide range of deployments. +- **Ngrok:** + - **Local Development HTTPS:** Ngrok is a convenient tool for quickly exposing your local development server over HTTPS. It's particularly useful for testing features that require HTTPS (like Voice Calls) during development and for demos. **Not recommended for production deployments.** **Key Considerations When Choosing:** -* **Complexity:** Some solutions (like Cloudflare or Caddy) are simpler to set up than others (like manually configuring Nginx). -* **Automation:** Solutions like Traefik and Caddy offer automatic certificate management, which simplifies ongoing maintenance. -* **Scalability and Performance:** Consider the performance and scalability needs of your Open WebUI instance when choosing a solution, especially for high-traffic deployments. -* **Cost:** Some solutions (like cloud load balancers or Cloudflare's paid plans) may have associated costs. Let's Encrypt and many reverse proxies are free and open-source. +- **Complexity:** Some solutions (like Cloudflare or Caddy) are simpler to set up than others (like manually configuring Nginx). +- **Automation:** Solutions like Traefik and Caddy offer automatic certificate management, which simplifies ongoing maintenance. +- **Scalability and Performance:** Consider the performance and scalability needs of your Open WebUI instance when choosing a solution, especially for high-traffic deployments. +- **Cost:** Some solutions (like cloud load balancers or Cloudflare's paid plans) may have associated costs. Let's Encrypt and many reverse proxies are free and open-source. 
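For quick local testing (the same niche Ngrok serves), a self-signed certificate is often enough. A minimal `openssl` sketch, not suitable for production, where the managed options above should be used instead:

```shell
# Generate a throwaway self-signed certificate for local HTTPS testing only.
# CN=localhost is an assumption -- adjust for your hostname.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=localhost"

# Inspect the certificate subject to confirm it was created correctly
openssl x509 -in cert.pem -noout -subject
```

The resulting `key.pem`/`cert.pem` pair can then be handed to whichever reverse proxy terminates TLS in front of Open WebUI; browsers will still warn about the untrusted issuer.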
## 📚 Explore Deployment Tutorials for Step-by-Step Guides diff --git a/docs/getting-started/advanced-topics/monitoring/index.md b/docs/getting-started/advanced-topics/monitoring/index.md index 40fc11836b..30c46d3c4a 100644 --- a/docs/getting-started/advanced-topics/monitoring/index.md +++ b/docs/getting-started/advanced-topics/monitoring/index.md @@ -9,10 +9,10 @@ Monitoring your Open WebUI instance is crucial for ensuring it runs reliably, pe **Why Monitor?** -* **Ensure Uptime:** Proactively detect outages and service disruptions. -* **Performance Insights:** Track response times and identify potential bottlenecks. -* **Early Issue Detection:** Catch problems before they impact users significantly. -* **Peace of Mind:** Gain confidence that your Open WebUI instance is running smoothly. +- **Ensure Uptime:** Proactively detect outages and service disruptions. +- **Performance Insights:** Track response times and identify potential bottlenecks. +- **Early Issue Detection:** Catch problems before they impact users significantly. +- **Peace of Mind:** Gain confidence that your Open WebUI instance is running smoothly. ## 🚦 Levels of Monitoring @@ -45,17 +45,17 @@ You can use `curl` or any HTTP client to check this endpoint: 1. **Add a New Monitor:** In your Uptime Kuma dashboard, click "Add New Monitor". 2. **Configure Monitor Settings:** - * **Monitor Type:** Select "HTTP(s)". - * **Name:** Give your monitor a descriptive name, e.g., "Open WebUI Health Check". - * **URL:** Enter the health check endpoint URL: `http://your-open-webui-instance:8080/health` (Replace `your-open-webui-instance:8080` with your actual Open WebUI address and port). - * **Monitoring Interval:** Set the frequency of checks (e.g., `60 seconds` for every minute). - * **Retry Count:** Set the number of retries before considering the service down (e.g., `3` retries). + - **Monitor Type:** Select "HTTP(s)". + - **Name:** Give your monitor a descriptive name, e.g., "Open WebUI Health Check". 
+ - **URL:** Enter the health check endpoint URL: `http://your-open-webui-instance:8080/health` (Replace `your-open-webui-instance:8080` with your actual Open WebUI address and port). + - **Monitoring Interval:** Set the frequency of checks (e.g., `60 seconds` for every minute). + - **Retry Count:** Set the number of retries before considering the service down (e.g., `3` retries). **What This Check Verifies:** -* **Web Server Availability:** Ensures the web server (e.g., Nginx, Uvicorn) is responding to requests. -* **Application Running:** Confirms that the Open WebUI application itself is running and initialized. -* **Basic Database Connectivity:** Typically includes a basic check to ensure the application can connect to the database. +- **Web Server Availability:** Ensures the web server (e.g., Nginx, Uvicorn) is responding to requests. +- **Application Running:** Confirms that the Open WebUI application itself is running and initialized. +- **Basic Database Connectivity:** Typically includes a basic check to ensure the application can connect to the database. ## Level 2: Open WebUI Model Connectivity 🔗 @@ -63,9 +63,9 @@ To go beyond basic availability, you can monitor the `/api/models` endpoint. Thi **Why Monitor Model Connectivity?** -* **Model Provider Issues:** Detect problems with your model provider services (e.g., API outages, authentication failures). -* **Configuration Errors:** Identify misconfigurations in your model provider settings within Open WebUI. -* **Ensure Model Availability:** Confirm that the models you expect to be available are actually accessible to Open WebUI. +- **Model Provider Issues:** Detect problems with your model provider services (e.g., API outages, authentication failures). +- **Configuration Errors:** Identify misconfigurations in your model provider settings within Open WebUI. +- **Ensure Model Availability:** Confirm that the models you expect to be available are actually accessible to Open WebUI. 
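Outside of Uptime Kuma, the Level 1 `/health` probe can also be scripted directly with `curl`; a minimal sketch (the URL is a placeholder for your instance):

```shell
# Probe the Open WebUI health endpoint and report a simple status.
check_health() {
  # -f: treat HTTP errors as failures; -sS: quiet but report errors; cap wait time
  if curl -fsS --max-time 5 "$1/health" > /dev/null 2>&1; then
    echo "UP"
  else
    echo "DOWN"
  fi
}

# Replace with your actual Open WebUI address and port
check_health "http://localhost:8080"
```

Wired into cron or a systemd timer, this gives a zero-dependency fallback when a full monitoring stack is more than you need.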
**API Endpoint Details:** @@ -89,17 +89,17 @@ You'll need an API key to access this endpoint. See the "Authentication Setup" Before you can monitor the `/api/models` endpoint, you need to enable API keys in Open WebUI and generate one: 1. **Enable API Keys (Admin Required):** - * Log in to Open WebUI as an administrator. - * Go to **Admin Settings** (usually in the top right menu) > **General**. - * Find the "Enable API Key" setting and **turn it ON**. - * Click **Save Changes**. + - Log in to Open WebUI as an administrator. + - Go to **Admin Settings** (usually in the top right menu) > **General**. + - Find the "Enable API Key" setting and **turn it ON**. + - Click **Save Changes**. 2. **Generate an API Key (User Settings):** - * Go to your **User Settings** (usually by clicking on your profile icon in the top right). - * Navigate to the **Account** section. - * Click **Generate New API Key**. - * Give the API key a descriptive name (e.g., "Monitoring API Key"). - * **Copy the generated API key** and store it securely. You'll need this for your monitoring setup. + - Go to your **User Settings** (usually by clicking on your profile icon in the top right). + - Navigate to the **Account** section. + - Click **Generate New API Key**. + - Give the API key a descriptive name (e.g., "Monitoring API Key"). + - **Copy the generated API key** and store it securely. You'll need this for your monitoring setup. *(Optional but Recommended):* For security best practices, consider creating a **non-administrator user account** specifically for monitoring and generate an API key for that user. This limits the potential impact if the monitoring API key is compromised. @@ -108,21 +108,21 @@ Before you can monitor the `/api/models` endpoint, you need to enable API keys i ### Using Uptime Kuma for Model Connectivity Monitoring 🐻 1. **Create a New Monitor in Uptime Kuma:** - * Monitor Type: "HTTP(s) - JSON Query". - * Name: "Open WebUI Model Connectivity Check". 
- * URL: `http://your-open-webui-instance:8080/api/models` (Replace with your URL). - * Method: "GET". - * Expected Status Code: `200`. + - Monitor Type: "HTTP(s) - JSON Query". + - Name: "Open WebUI Model Connectivity Check". + - URL: `http://your-open-webui-instance:8080/api/models` (Replace with your URL). + - Method: "GET". + - Expected Status Code: `200`. 2. **Configure JSON Query (Verify Model List):** - * **JSON Query:** `$count(data[*])>0` - * **Explanation:** This JSONata query checks if the `data` array in the API response (which contains the list of models) has a count greater than 0. In other words, it verifies that at least one model is returned. - * **Expected Value:** `true` (The query should return `true` if models are listed). + - **JSON Query:** `$count(data[*])>0` + - **Explanation:** This JSONata query checks if the `data` array in the API response (which contains the list of models) has a count greater than 0. In other words, it verifies that at least one model is returned. + - **Expected Value:** `true` (The query should return `true` if models are listed). 3. **Add Authentication Headers:** - * In the "Headers" section of the Uptime Kuma monitor configuration, click "Add Header". - * **Header Name:** `Authorization` - * **Header Value:** `Bearer YOUR_API_KEY` (Replace `YOUR_API_KEY` with the API key you generated). + - In the "Headers" section of the Uptime Kuma monitor configuration, click "Add Header". + - **Header Name:** `Authorization` + - **Header Value:** `Bearer YOUR_API_KEY` (Replace `YOUR_API_KEY` with the API key you generated). 4. **Set Monitoring Interval:** Recommended interval: `300 seconds` (5 minutes) or longer, as model lists don't typically change very frequently. @@ -130,9 +130,9 @@ Before you can monitor the `/api/models` endpoint, you need to enable API keys i You can use more specific JSONata queries to check for particular models or providers. 
Here are some examples: -* **Check for at least one Ollama model:** `$count(data[owned_by='ollama'])>0` -* **Check if a specific model exists (e.g., 'gpt-4o'):** `$exists(data[id='gpt-4o'])` -* **Check if multiple specific models exist (e.g., 'gpt-4o' and 'gpt-4o-mini'):** `$count(data[id in ['gpt-4o', 'gpt-4o-mini']]) = 2` +- **Check for at least one Ollama model:** `$count(data[owned_by='ollama'])>0` +- **Check if a specific model exists (e.g., 'gpt-4o'):** `$exists(data[id='gpt-4o'])` +- **Check if multiple specific models exist (e.g., 'gpt-4o' and 'gpt-4o-mini'):** `$count(data[id in ['gpt-4o', 'gpt-4o-mini']]) = 2` You can test and refine your JSONata queries at [jsonata.org](https://try.jsonata.org/) using a sample API response to ensure they work as expected. @@ -142,9 +142,9 @@ For the most comprehensive monitoring, you can test if models are actually capab **Why Test Model Responses?** -* **End-to-End Verification:** Confirms that the entire model pipeline is working, from API request to model response. -* **Model Loading Issues:** Detects problems with specific models failing to load or respond. -* **Backend Processing Errors:** Catches errors in the backend logic that might prevent models from generating completions. +- **End-to-End Verification:** Confirms that the entire model pipeline is working, from API request to model response. +- **Model Loading Issues:** Detects problems with specific models failing to load or respond. +- **Backend Processing Errors:** Catches errors in the backend logic that might prevent models from generating completions. 
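The Level 2 "at least one model" condition can be sanity-checked from a shell before wiring it into Uptime Kuma. A sketch (API key and URL are placeholders), with a `python3` one-liner standing in for the JSONata query `$count(data[*])>0`:

```shell
# Ask /api/models for the model list and report whether it is non-empty.
API_KEY="YOUR_API_KEY"            # the key generated in the steps above
BASE_URL="http://localhost:8080"  # your Open WebUI address and port

curl -fsS --max-time 5 -H "Authorization: Bearer $API_KEY" \
    "$BASE_URL/api/models" \
  | python3 -c 'import json,sys; d=json.load(sys.stdin); print(len(d.get("data", [])) > 0)' \
  || echo "request failed (is Open WebUI reachable and the API key valid?)"
```

A healthy instance prints `True`; `False` or the error message points at provider configuration or authentication problems.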
**How to Test with `curl` (Authenticated POST Request):**

diff --git a/docs/getting-started/quick-start/tab-docker/DockerSwarm.md b/docs/getting-started/quick-start/tab-docker/DockerSwarm.md
index 24fc24b033..787be8598d 100644
--- a/docs/getting-started/quick-start/tab-docker/DockerSwarm.md
+++ b/docs/getting-started/quick-start/tab-docker/DockerSwarm.md
@@ -112,7 +112,7 @@ Choose the appropriate command based on your hardware setup:

 1. Ensure CUDA is Enabled, follow your OS and GPU instructions for that.
 2. Enable Docker GPU support, see [Nvidia Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html " on Nvidia's site.")
 3. Follow the [Guide here on configuring Docker Swarm to work with your GPU](https://gist.github.com/tomlankhorst/33da3c4b9edbde5c83fc1244f010815c#configuring-docker-to-work-with-your-gpus)
-    - Ensure _GPU Resource_ is enabled in `/etc/nvidia-container-runtime/config.toml` and enable GPU resource advertising by uncommenting the `swarm-resource = "DOCKER_RESOURCE_GPU"`. The docker daemon must be restarted after updating these files on each node.
+    - Ensure *GPU Resource* is enabled in `/etc/nvidia-container-runtime/config.toml` and enable GPU resource advertising by uncommenting the `swarm-resource = "DOCKER_RESOURCE_GPU"` line. The Docker daemon must be restarted after updating these files on each node.
- **With CPU Support**: diff --git a/docs/getting-started/quick-start/tab-docker/DockerUpdating.md b/docs/getting-started/quick-start/tab-docker/DockerUpdating.md index 4abc59558a..e66ea749fb 100644 --- a/docs/getting-started/quick-start/tab-docker/DockerUpdating.md +++ b/docs/getting-started/quick-start/tab-docker/DockerUpdating.md @@ -10,7 +10,7 @@ With [Watchtower](https://containrrr.dev/watchtower/), you can automate the upda docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once open-webui ``` -_(Replace `open-webui` with your container's name if it's different.)_ +*(Replace `open-webui` with your container's name if it's different.)* ### Option 2: Manual Update diff --git a/docs/tutorials/images.md b/docs/tutorials/images.md index 0b34c66bd4..21bd2e57cd 100644 --- a/docs/tutorials/images.md +++ b/docs/tutorials/images.md @@ -73,23 +73,23 @@ ComfyUI provides an alternative interface for managing and interacting with imag 1. **Model Checkpoints**: -* Download either the `FLUX.1-schnell` or `FLUX.1-dev` model from the [black-forest-labs HuggingFace page](https://huggingface.co/black-forest-labs). -* Place the model checkpoint(s) in both the `models/checkpoints` and `models/unet` directories of ComfyUI. Alternatively, you can create a symbolic link between `models/checkpoints` and `models/unet` to ensure both directories contain the same model checkpoints. +- Download either the `FLUX.1-schnell` or `FLUX.1-dev` model from the [black-forest-labs HuggingFace page](https://huggingface.co/black-forest-labs). +- Place the model checkpoint(s) in both the `models/checkpoints` and `models/unet` directories of ComfyUI. Alternatively, you can create a symbolic link between `models/checkpoints` and `models/unet` to ensure both directories contain the same model checkpoints. 2. **VAE Model**: -* Download `ae.safetensors` VAE from [here](https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors). 
-* Place it in the `models/vae` ComfyUI directory.
+- Download `ae.safetensors` VAE from [here](https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors).
+- Place it in the `models/vae` ComfyUI directory.

 3. **CLIP Model**:

-* Download `clip_l.safetensors` from [here](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main).
-* Place it in the `models/clip` ComfyUI directory.
+- Download `clip_l.safetensors` from [here](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main).
+- Place it in the `models/clip` ComfyUI directory.

 4. **T5XXL Model**:

-* Download either the `t5xxl_fp16.safetensors` or `t5xxl_fp8_e4m3fn.safetensors` model from [here](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main).
-* Place it in the `models/clip` ComfyUI directory.
+- Download either the `t5xxl_fp16.safetensors` or `t5xxl_fp8_e4m3fn.safetensors` model from [here](https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main).
+- Place it in the `models/clip` ComfyUI directory.

 To integrate ComfyUI into Open WebUI, follow these steps:

@@ -99,7 +99,7 @@ To integrate ComfyUI into Open WebUI, follow these steps:

 2. Click on **Settings** and then select the **Images** tab.
 3. In the `Image Generation Engine` field, choose `ComfyUI`.
 4. In the **API URL** field, enter the address where ComfyUI's API is accessible, following this format: `http://<your-comfyui-address>:8188/`.
-   * Set the environment variable `COMFYUI_BASE_URL` to this address to ensure it persists within the WebUI.
+   - Set the environment variable `COMFYUI_BASE_URL` to this address to ensure it persists within the WebUI.

 #### Step 2: Verify the Connection and Enable Image Generation

@@ -151,9 +151,9 @@ Open WebUI also supports image generation through the **OpenAI APIs**. This opti

 2. Set the `Image Generation Engine` field to `Open AI`.
 3. Enter your OpenAI API key.
 4. Choose the model you wish to use.
Note that image size options will depend on the selected model:
-   * **DALL·E 2**: Supports `256x256`, `512x512`, or `1024x1024` images.
-   * **DALL·E 3**: Supports `1024x1024`, `1792x1024`, or `1024x1792` images.
-   * **GPT-Image-1**: Supports `auto`, `1024x1024`, `1536x1024`, or `1024x1536` images.
+   - **DALL·E 2**: Supports `256x256`, `512x512`, or `1024x1024` images.
+   - **DALL·E 3**: Supports `1024x1024`, `1792x1024`, or `1024x1792` images.
+   - **GPT-Image-1**: Supports `auto`, `1024x1024`, `1536x1024`, or `1024x1536` images.

 ### Azure OpenAI

diff --git a/docs/tutorials/integrations/firefox-sidebar.md b/docs/tutorials/integrations/firefox-sidebar.md
index 3d260a4288..5a8b73baa0 100644
--- a/docs/tutorials/integrations/firefox-sidebar.md
+++ b/docs/tutorials/integrations/firefox-sidebar.md
@@ -15,8 +15,8 @@ This tutorial is a community contribution and is not supported by the Open WebUI

 Before integrating Open WebUI as an AI chatbot browser assistant in Mozilla Firefox, ensure you have:

-* Open WebUI instance URL (local or domain)
-* Firefox browser installed
+- Open WebUI instance URL (local or domain)
+- Firefox browser installed

 ## Enabling AI Chatbot in Firefox

@@ -54,32 +54,32 @@ The following URL parameters can be used to customize your Open WebUI instance:

 ### Models and Model Selection

-* `models`: Specify multiple models (comma-separated list) for the chat session (e.g., `/?models=model1,model2`)
-* `model`: Specify a single model for the chat session (e.g., `/?model=model1`)
+- `models`: Specify multiple models (comma-separated list) for the chat session (e.g., `/?models=model1,model2`)
+- `model`: Specify a single model for the chat session (e.g., `/?model=model1`)

 ### YouTube Transcription

-* `youtube`: Provide a YouTube video ID to transcribe the video in the chat (e.g., `/?youtube=VIDEO_ID`)
+- `youtube`: Provide a YouTube video ID to transcribe the video in the chat (e.g., `/?youtube=VIDEO_ID`)

 ### Web Search

-* `web-search`: Enable web search functionality by setting this parameter to `true` (e.g., `/?web-search=true`)
+- `web-search`: Enable web search functionality by setting this parameter to `true` (e.g., `/?web-search=true`)

 ### Tool Selection

-* `tools` or `tool-ids`: Specify a comma-separated list of tool IDs to activate in the chat (e.g., `/?tools=tool1,tool2` or `/?tool-ids=tool1,tool2`)
+- `tools` or `tool-ids`: Specify a comma-separated list of tool IDs to activate in the chat (e.g., `/?tools=tool1,tool2` or `/?tool-ids=tool1,tool2`)

 ### Call Overlay

-* `call`: Enable a video or call overlay in the chat interface by setting this parameter to `true` (e.g., `/?call=true`)
+- `call`: Enable a video or call overlay in the chat interface by setting this parameter to `true` (e.g., `/?call=true`)

 ### Initial Query Prompt

-* `q`: Set an initial query or prompt for the chat (e.g., `/?q=Hello%20there`)
+- `q`: Set an initial query or prompt for the chat (e.g., `/?q=Hello%20there`)

 ### Temporary Chat Sessions

-* `temporary-chat`: Mark the chat as a temporary session by setting this parameter to `true` (e.g., `/?temporary-chat=true`)
+- `temporary-chat`: Mark the chat as a temporary session by setting this parameter to `true` (e.g., `/?temporary-chat=true`)

 See https://docs.openwebui.com/features/chat-features/url-params for more info on URL parameters and how to use them.
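Several of these parameters can be combined into a single sidebar URL; a sketch (the base URL and model id are placeholders):

```shell
# Compose a sidebar URL that pre-selects a model, enables web search,
# keeps the chat temporary, and seeds an initial prompt.
BASE_URL="http://localhost:3000"       # your Open WebUI instance
MODEL="llama3"                         # hypothetical model id
QUERY="Summarize%20this%20page"        # URL-encoded initial prompt

echo "${BASE_URL}/?model=${MODEL}&web-search=true&temporary-chat=true&q=${QUERY}"
# prints: http://localhost:3000/?model=llama3&web-search=true&temporary-chat=true&q=Summarize%20this%20page
```

Pasting the result into the sidebar opens a chat with those options already applied.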
@@ -87,26 +87,26 @@ See https://docs.openwebui.com/features/chat-features/url-params for more info o

 The following `about:config` settings can be adjusted for further customization:

-* `browser.ml.chat.shortcuts`: Enable custom shortcuts for the AI chatbot sidebar
-* `browser.ml.chat.shortcuts.custom`: Enable custom shortcut keys for the AI chatbot sidebar
-* `browser.ml.chat.shortcuts.longPress`: Set the long press delay for shortcut keys
-* `browser.ml.chat.sidebar`: Enable the AI chatbot sidebar
-* `browser.ml.checkForMemory`: Check for available memory before loading models
-* `browser.ml.defaultModelMemoryUsage`: Set the default memory usage for models
-* `browser.ml.enable`: Enable the machine learning features in Firefox
-* `browser.ml.logLevel`: Set the log level for machine learning features
-* `browser.ml.maximumMemoryPressure`: Set the maximum memory pressure threshold
-* `browser.ml.minimumPhysicalMemory`: Set the minimum physical memory required
-* `browser.ml.modelCacheMaxSize`: Set the maximum size of the model cache
-* `browser.ml.modelCacheTimeout`: Set the timeout for model cache
-* `browser.ml.modelHubRootUrl`: Set the root URL for the model hub
-* `browser.ml.modelHubUrlTemplate`: Set the URL template for the model hub
-* `browser.ml.queueWaitInterval`: Set the interval for queue wait
-* `browser.ml.queueWaitTimeout`: Set the timeout for queue wait
+- `browser.ml.chat.shortcuts`: Enable custom shortcuts for the AI chatbot sidebar
+- `browser.ml.chat.shortcuts.custom`: Enable custom shortcut keys for the AI chatbot sidebar
+- `browser.ml.chat.shortcuts.longPress`: Set the long press delay for shortcut keys
+- `browser.ml.chat.sidebar`: Enable the AI chatbot sidebar
+- `browser.ml.checkForMemory`: Check for available memory before loading models
+- `browser.ml.defaultModelMemoryUsage`: Set the default memory usage for models
+- `browser.ml.enable`: Enable the machine learning features in Firefox
+- `browser.ml.logLevel`: Set the log level for machine learning features
+- `browser.ml.maximumMemoryPressure`: Set the maximum memory pressure threshold
+- `browser.ml.minimumPhysicalMemory`: Set the minimum physical memory required
+- `browser.ml.modelCacheMaxSize`: Set the maximum size of the model cache
+- `browser.ml.modelCacheTimeout`: Set the timeout for model cache
+- `browser.ml.modelHubRootUrl`: Set the root URL for the model hub
+- `browser.ml.modelHubUrlTemplate`: Set the URL template for the model hub
+- `browser.ml.queueWaitInterval`: Set the interval for queue wait
+- `browser.ml.queueWaitTimeout`: Set the timeout for queue wait

 ## Accessing the AI Chatbot Sidebar

 To access the AI chatbot sidebar, use one of the following methods:

-* Press `CTRL+B` to open the bookmarks sidebar and switch to AI Chatbot
-* Press `CTRL+Alt+X` to open the AI chatbot sidebar directly
+- Press `CTRL+B` to open the bookmarks sidebar and switch to AI Chatbot
+- Press `CTRL+Alt+X` to open the AI chatbot sidebar directly

diff --git a/docs/tutorials/integrations/langfuse.md b/docs/tutorials/integrations/langfuse.md
index 45f80c0b73..2ccd51a0d0 100644
--- a/docs/tutorials/integrations/langfuse.md
+++ b/docs/tutorials/integrations/langfuse.md
@@ -16,7 +16,7 @@ title: "🪢 Monitoring and Debugging with Langfuse"

 ## How to integrate Langfuse with Open WebUI

 ![Langfuse Integration](https://langfuse.com/images/docs/openwebui-integration.gif)
-_Langfuse integration steps_
+*Langfuse integration steps*

 [Pipelines](https://github.com/open-webui/pipelines/) in Open WebUI is a UI-agnostic framework for OpenAI API plugins. It enables the injection of plugins that intercept, process, and forward user prompts to the final LLM, allowing for enhanced control and customization of prompt handling.
@@ -38,7 +38,7 @@ docker run -p 9099:9099 --add-host=host.docker.internal:host-gateway -v pipeline

 ### Step 3: Connecting Open WebUI with Pipelines

-In the _Admin Settings_, create and save a new connection of type OpenAI API with the following details:
+In the *Admin Settings*, create and save a new connection of type OpenAI API with the following details:

 - **URL:** http://host.docker.internal:9099 (this is where the previously launched Docker container is running).
 - **Password:** 0p3n-w3bu! (standard password)

@@ -47,7 +47,7 @@ In the _Admin Settings_, create and save a new connection of type OpenAI API wit

 ### Step 4: Adding the Langfuse Filter Pipeline

-Next, navigate to _Admin Settings_ -> _Pipelines_ and add the Langfuse Filter Pipeline. Specify that Pipelines is listening on http://host.docker.internal:9099 (as configured earlier) and install the [Langfuse Filter Pipeline](https://github.com/open-webui/pipelines/blob/039f9c54f8e9f9bcbabde02c2c853e80d25c79e4/examples/filters/langfuse_v3_filter_pipeline.py) by using the _Install from Github URL_ option with the following URL:
+Next, navigate to *Admin Settings* -> *Pipelines* and add the Langfuse Filter Pipeline. Specify that Pipelines is listening on http://host.docker.internal:9099 (as configured earlier) and install the [Langfuse Filter Pipeline](https://github.com/open-webui/pipelines/blob/039f9c54f8e9f9bcbabde02c2c853e80d25c79e4/examples/filters/langfuse_v3_filter_pipeline.py) by using the *Install from Github URL* option with the following URL:

 ```
 https://github.com/open-webui/pipelines/blob/main/examples/filters/langfuse_v3_filter_pipeline.py
 ```

@@ -57,7 +57,7 @@ Now, add your Langfuse API keys below. If you haven't signed up to Langfuse yet,

 ![Open WebUI add Langfuse Pipeline](https://langfuse.com//images/docs/openwebui-add-pipeline.png)

-_**Note:** Capture usage (token counts) for OpenAi models while streaming is enabled, you have to navigate to the model settings in Open WebUI and check the "Usage" [box](https://github.com/open-webui/open-webui/discussions/5770#discussioncomment-10778586) below _Capabilities_._
+**Note:** To capture usage (token counts) for OpenAI models while streaming is enabled, you have to navigate to the model settings in Open WebUI and check the "Usage" [box](https://github.com/open-webui/open-webui/discussions/5770#discussioncomment-10778586) below *Capabilities*.

 ### Step 5: See your traces in Langfuse

diff --git a/docs/tutorials/integrations/libre-translate.md b/docs/tutorials/integrations/libre-translate.md
index ed5255a937..ccf6642516 100644
--- a/docs/tutorials/integrations/libre-translate.md
+++ b/docs/tutorials/integrations/libre-translate.md
@@ -78,10 +78,10 @@ Configuring the Integration in Open WebUI

 Once you have LibreTranslate up and running in Docker, you can configure the integration within Open WebUI.
There are several community integrations available, including: -* [LibreTranslate Filter Function](https://openwebui.com/f/iamg30/libretranslate_filter) -* [LibreTranslate Action Function](https://openwebui.com/f/jthesse/libretranslate_action) -* [MultiLanguage LibreTranslate Action Function](https://openwebui.com/f/iamg30/multilanguage_libretranslate_action) -* [LibreTranslate Filter Pipeline](https://github.com/open-webui/pipelines/blob/main/examples/filters/libretranslate_filter_pipeline.py) +- [LibreTranslate Filter Function](https://openwebui.com/f/iamg30/libretranslate_filter) +- [LibreTranslate Action Function](https://openwebui.com/f/jthesse/libretranslate_action) +- [MultiLanguage LibreTranslate Action Function](https://openwebui.com/f/iamg30/multilanguage_libretranslate_action) +- [LibreTranslate Filter Pipeline](https://github.com/open-webui/pipelines/blob/main/examples/filters/libretranslate_filter_pipeline.py) Choose the integration that best suits your needs and follow the instructions to configure it within Open WebUI. @@ -95,18 +95,18 @@ Albanian, Arabic, Azerbaijani, Bengali, Bulgarian, Catalan, Valencian, Chinese, Troubleshooting -------------- -* Make sure the LibreTranslate service is running and accessible. -* Verify that the Docker configuration is correct. -* Check the LibreTranslate logs for any errors. +- Make sure the LibreTranslate service is running and accessible. +- Verify that the Docker configuration is correct. +- Check the LibreTranslate logs for any errors. Benefits of Integration ---------------------- Integrating LibreTranslate with Open WebUI provides several benefits, including: -* Machine translation capabilities for a wide range of languages. -* Improved text analysis and processing. -* Enhanced functionality for language-related tasks. +- Machine translation capabilities for a wide range of languages. +- Improved text analysis and processing. +- Enhanced functionality for language-related tasks. 
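When troubleshooting, a direct request against the LibreTranslate API quickly shows whether the container is reachable; a sketch assuming the image's default port 5000:

```shell
# Send a one-word translation request to the local LibreTranslate container.
curl -fsS --max-time 5 -X POST "http://localhost:5000/translate" \
    -H "Content-Type: application/json" \
    -d '{"q": "Hello", "source": "en", "target": "es", "format": "text"}' \
  || echo "translate request failed (is the LibreTranslate container running?)"
```

A healthy instance answers with a JSON body containing `translatedText`; anything else points back at the Docker configuration or the service logs.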
Conclusion ---------- diff --git a/docs/tutorials/integrations/okta-oidc-sso.md b/docs/tutorials/integrations/okta-oidc-sso.md index 4496d4e43b..0181e09d1f 100644 --- a/docs/tutorials/integrations/okta-oidc-sso.md +++ b/docs/tutorials/integrations/okta-oidc-sso.md @@ -15,9 +15,9 @@ This documentation page outlines the steps required to integrate Okta OIDC Singl ### Prerequisites -* A new or existing Open WebUI instance. -* An Okta account with administrative privileges to create and configure applications. -* Basic understanding of OIDC, Okta application configuration, and Open WebUI environment variables. +- A new or existing Open WebUI instance. +- An Okta account with administrative privileges to create and configure applications. +- Basic understanding of OIDC, Okta application configuration, and Open WebUI environment variables. ## Setting up Okta @@ -46,9 +46,9 @@ First, you need to configure an OIDC application within your Okta organization a 2. Go to the **Sign On** tab and click **Edit** in the **OpenID Connect ID Token** section. 3. In the **Group claim type** section, select **Filter**. 4. In the **Group claims filter** section: - * Enter `groups` as the claim name (or use the default if present). - * Select **Matches regex** from the dropdown. - * Enter `.*` in the regex field. This will include all groups the user is a member of. You can use a more specific regex if needed. + - Enter `groups` as the claim name (or use the default if present). + - Select **Matches regex** from the dropdown. + - Enter `.*` in the regex field. This will include all groups the user is a member of. You can use a more specific regex if needed. 5. Click **Save**. 6. Click the **Back to applications** link. 7. From the **More** button dropdown menu for your application, click **Refresh Application Data**. 
@@ -60,20 +60,20 @@ First, you need to configure an OIDC application within your Okta organization a To enhance security, you can enforce Multi-Factor Authentication (MFA) for users logging into Open WebUI via Okta. This example demonstrates how to set up Google Authenticator as an additional factor. 1. **Configure an Authenticator**: - * In the Okta Admin Console, navigate to **Security > Authenticators**. - * Click **Add Authenticator** and add **Google Authenticator**. - * During setup, you can set **"User verification"** to **"Required"** to enhance security. + - In the Okta Admin Console, navigate to **Security > Authenticators**. + - Click **Add Authenticator** and add **Google Authenticator**. + - During setup, you can set **"User verification"** to **"Required"** to enhance security. 2. **Create and Apply a Sign-On Policy**: - * Go to **Security > Authenticators**, then click the **Sign On** tab. - * Click **Add a policy** to create a new policy (e.g., "WebUI MFA Policy"). - * In the policy you just created, click **Add rule**. - * Configure the rule: - * Set **"IF User's IP is"** to **"Anywhere"**. - * Set **"THEN Access is"** to **"Allowed after successful authentication"**. - * Under **"AND User must authenticate with"**, select **"Password + Another factor"**. - * Ensure your desired factor (e.g., Google Authenticator) is included under **"AND Possession factor constraints are"**. - * Finally, assign this policy to your Open WebUI application. Go to **Applications > Applications**, select your OIDC app, and under the **Sign On** tab, select the policy you created. + - Go to **Security > Authenticators**, then click the **Sign On** tab. + - Click **Add a policy** to create a new policy (e.g., "WebUI MFA Policy"). + - In the policy you just created, click **Add rule**. + - Configure the rule: + - Set **"IF User's IP is"** to **"Anywhere"**. + - Set **"THEN Access is"** to **"Allowed after successful authentication"**. 
+ - Under **"AND User must authenticate with"**, select **"Password + Another factor"**. + - Ensure your desired factor (e.g., Google Authenticator) is included under **"AND Possession factor constraints are"**. + - Finally, assign this policy to your Open WebUI application. Go to **Applications > Applications**, select your OIDC app, and under the **Sign On** tab, select the policy you created. Now, when users log in to Open WebUI, they will be required to provide their Okta password and an additional verification code from Google Authenticator. @@ -133,8 +133,8 @@ To *also* enable automatic Just-in-Time (JIT) creation of groups that exist in O :::warning Group Membership Management When `ENABLE_OAUTH_GROUP_MANAGEMENT` is set to `true`, a user's group memberships in Open WebUI will be **strictly synchronized** with the groups received in their Okta claims upon each login. This means: -* Users will be **added** to Open WebUI groups that match their Okta claims. -* Users will be **removed** from any Open WebUI groups (including those manually created or assigned within Open WebUI) if those groups are **not** present in their Okta claims for that login session. +- Users will be **added** to Open WebUI groups that match their Okta claims. +- Users will be **removed** from any Open WebUI groups (including those manually created or assigned within Open WebUI) if those groups are **not** present in their Okta claims for that login session. Ensure that all necessary groups are correctly configured and assigned within Okta and included in the group claim. ::: @@ -176,9 +176,9 @@ Restart your Open WebUI instance after setting these environment variables. ## Troubleshooting -* **400 Bad Request/Redirect URI Mismatch:** Double-check that the **Sign-in redirect URI** in your Okta application exactly matches `/oauth/oidc/callback`. -* **Groups Not Syncing:** Verify that the `OAUTH_GROUP_CLAIM` environment variable matches the claim name configured in the Okta ID Token settings. 
Ensure the user has logged out and back in after group changes - a login flow is required to update OIDC. Remember admin groups are not synced.
-* **Configuration Errors:** Review the Open WebUI server logs for detailed error messages related to OIDC configuration.
+- **400 Bad Request/Redirect URI Mismatch:** Double-check that the **Sign-in redirect URI** in your Okta application exactly matches `<your-open-webui-url>/oauth/oidc/callback`.
+- **Groups Not Syncing:** Verify that the `OAUTH_GROUP_CLAIM` environment variable matches the claim name configured in the Okta ID Token settings. Ensure the user has logged out and back in after group changes, as a fresh login is required to refresh the OIDC group claims. Remember that admin users' groups are not synced.
+- **Configuration Errors:** Review the Open WebUI server logs for detailed error messages related to OIDC configuration.

-* Refer to the official [Open WebUI SSO Documentation](/features/sso).
-* Consult the [Okta Developer Documentation](https://developer.okta.com/docs/).
+- Refer to the official [Open WebUI SSO Documentation](/features/sso).
+- Consult the [Okta Developer Documentation](https://developer.okta.com/docs/).
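When group sync misbehaves, it helps to confirm that the `groups` claim actually appears in the ID token Okta issues. One quick debugging aid is to base64-decode the token's payload segment locally. The sketch below uses only the Python standard library and a hand-built, unsigned token for illustration; it deliberately skips signature verification, so never use it to trust claims in production:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT WITHOUT verifying the signature.

    Debugging aid only -- unverified claims must never be trusted.
    """
    payload_b64 = token.split(".")[1]
    # Restore the padding that base64url encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Hand-built example token; a real Okta ID token carries a signature
# in the third segment and should be verified by your OIDC library.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
claims = {"sub": "user@example.com", "groups": ["openwebui-admins", "staff"]}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}."

print(decode_jwt_payload(token)["groups"])  # -> ['openwebui-admins', 'staff']
```

If the `groups` key is missing from a real token's payload, revisit the **Group claims filter** regex configured above.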
diff --git a/docs/tutorials/integrations/redis.md b/docs/tutorials/integrations/redis.md
index a2c944e448..c6cbfe4728 100644
--- a/docs/tutorials/integrations/redis.md
+++ b/docs/tutorials/integrations/redis.md
@@ -15,11 +15,11 @@ This documentation page outlines the steps required to integrate Redis with Open

### Prerequisites

-* A valid Open WebUI instance (running version 1.0 or higher)
-* A Redis container (we will use `docker.io/valkey/valkey:8.0.1-alpine` in this example, which is based on the latest Redis 7.x release)
-* Docker Composer (version 2.0 or higher) installed on your system
-* A Docker network for communication between Open WebUI and Redis
-* Basic understanding of Docker, Redis, and Open WebUI
+- A valid Open WebUI instance (running version 1.0 or higher)
+- A Redis container (we will use `docker.io/valkey/valkey:8.0.1-alpine` in this example, which is based on the latest Redis 7.x release)
+- Docker Compose (version 2.0 or higher) installed on your system
+- A Docker network for communication between Open WebUI and Redis
+- Basic understanding of Docker, Redis, and Open WebUI

## Setting up Redis

@@ -128,8 +128,8 @@ docker exec -it redis-valkey valkey-cli -p 6379 ping

If you encounter issues with Redis or websocket support in Open WebUI, you can refer to the following resources for troubleshooting:

-* [Redis Documentation](https://redis.io/docs)
-* [Docker Compose Documentation](https://docs.docker.com/compose/overview/)
-* [sysctl Documentation](https://man7.org/linux/man-pages/man8/sysctl.8.html)
+- [Redis Documentation](https://redis.io/docs)
+- [Docker Compose Documentation](https://docs.docker.com/compose/overview/)
+- [sysctl Documentation](https://man7.org/linux/man-pages/man8/sysctl.8.html)

By following these steps and troubleshooting tips, you should be able to set up Redis with Open WebUI for websocket support and enable real-time communication and updates between clients and your application.
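As a supplement to the `valkey-cli ... ping` check in that guide, the same health check can be performed from plain Python by speaking RESP (the wire protocol shared by Redis and Valkey) over a socket, with no client library installed. A sketch; the default host and port are assumptions matching the compose setup in the guide:

```python
import socket

def encode_command(*parts: str) -> bytes:
    """Encode a command in RESP, the wire protocol Redis/Valkey speak."""
    out = f"*{len(parts)}\r\n"
    for part in parts:
        out += f"${len(part)}\r\n{part}\r\n"
    return out.encode()

def parse_simple_string(reply: bytes) -> str:
    """Parse a RESP simple-string reply such as b'+PONG\\r\\n'."""
    if not reply.startswith(b"+"):
        raise ValueError(f"unexpected reply: {reply!r}")
    return reply[1:].split(b"\r\n", 1)[0].decode()

def ping(host: str = "localhost", port: int = 6379) -> str:
    """Returns 'PONG' when the Redis/Valkey container is reachable."""
    with socket.create_connection((host, port), timeout=3) as sock:
        sock.sendall(encode_command("PING"))
        return parse_simple_string(sock.recv(64))

# With the container from the guide running: ping() -> 'PONG'
```

If `ping()` raises a connection error from inside the Open WebUI container, double-check that both containers share the same Docker network.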
diff --git a/docs/tutorials/text-to-speech/openai-edge-tts-integration.md b/docs/tutorials/text-to-speech/openai-edge-tts-integration.md index d3d503f442..d0bc50e4cc 100644 --- a/docs/tutorials/text-to-speech/openai-edge-tts-integration.md +++ b/docs/tutorials/text-to-speech/openai-edge-tts-integration.md @@ -47,7 +47,7 @@ This will run the service at port 5050 with all the default configs - Open the Admin Panel and go to `Settings` -> `Audio` - Set your TTS Settings to match the screenshot below -- _Note: you can specify the TTS Voice here_ +- *Note: you can specify the TTS Voice here* ![Screenshot of Open WebUI Admin Settings for Audio adding the correct endpoints for this project](https://utfs.io/f/MMMHiQ1TQaBobmOhsMkrO6Tl2kxX39dbuFiQ8cAoNzysIt7f) diff --git a/docs/tutorials/text-to-speech/openedai-speech-integration.md b/docs/tutorials/text-to-speech/openedai-speech-integration.md index d8c170ae62..19ae14083d 100644 --- a/docs/tutorials/text-to-speech/openedai-speech-integration.md +++ b/docs/tutorials/text-to-speech/openedai-speech-integration.md @@ -22,9 +22,9 @@ It serves the `/v1/audio/speech` endpoint and provides a free, private text-to-s **Requirements** ----------------- -* Docker installed on your system -* Open WebUI running in a Docker container -* Basic understanding of Docker and Docker Compose +- Docker installed on your system +- Open WebUI running in a Docker container +- Basic understanding of Docker and Docker Compose **Option 1: Using Docker Compose** ---------------------------------- @@ -63,29 +63,29 @@ HF_HOME=voices You can use any of the following Docker Compose files: -* [docker-compose.yml](https://github.com/matatonic/openedai-speech/blob/main/docker-compose.yml): This file uses the `ghcr.io/matatonic/openedai-speech` image and builds from [Dockerfile](https://github.com/matatonic/openedai-speech/blob/main/Dockerfile). 
-* [docker-compose.min.yml](https://github.com/matatonic/openedai-speech/blob/main/docker-compose.min.yml): This file uses the `ghcr.io/matatonic/openedai-speech-min` image and builds from [Dockerfile.min](https://github.com/matatonic/openedai-speech/blob/main/Dockerfile.min). +- [docker-compose.yml](https://github.com/matatonic/openedai-speech/blob/main/docker-compose.yml): This file uses the `ghcr.io/matatonic/openedai-speech` image and builds from [Dockerfile](https://github.com/matatonic/openedai-speech/blob/main/Dockerfile). +- [docker-compose.min.yml](https://github.com/matatonic/openedai-speech/blob/main/docker-compose.min.yml): This file uses the `ghcr.io/matatonic/openedai-speech-min` image and builds from [Dockerfile.min](https://github.com/matatonic/openedai-speech/blob/main/Dockerfile.min). This image is a minimal version that only includes Piper support and does not require a GPU. -* [docker-compose.rocm.yml](https://github.com/matatonic/openedai-speech/blob/main/docker-compose.rocm.yml): This file uses the `ghcr.io/matatonic/openedai-speech-rocm` image and builds from [Dockerfile](https://github.com/matatonic/openedai-speech/blob/main/Dockerfile) with ROCm support. +- [docker-compose.rocm.yml](https://github.com/matatonic/openedai-speech/blob/main/docker-compose.rocm.yml): This file uses the `ghcr.io/matatonic/openedai-speech-rocm` image and builds from [Dockerfile](https://github.com/matatonic/openedai-speech/blob/main/Dockerfile) with ROCm support. **Step 4: Build the Chosen Docker Image** ----------------------------------------- Before running the Docker Compose file, you need to build the Docker image: -* **Nvidia GPU (CUDA support)**: +- **Nvidia GPU (CUDA support)**: ```bash docker build -t ghcr.io/matatonic/openedai-speech . ``` -* **AMD GPU (ROCm support)**: +- **AMD GPU (ROCm support)**: ```bash docker build -f Dockerfile --build-arg USE_ROCM=1 -t ghcr.io/matatonic/openedai-speech-rocm . 
``` -* **CPU only, No GPU (Piper only)**: +- **CPU only, No GPU (Piper only)**: ```bash docker build -f Dockerfile.min -t ghcr.io/matatonic/openedai-speech-min . @@ -94,25 +94,25 @@ docker build -f Dockerfile.min -t ghcr.io/matatonic/openedai-speech-min . **Step 5: Run the correct `docker compose up -d` command** ---------------------------------------------------------- -* **Nvidia GPU (CUDA support)**: Run the following command to start the `openedai-speech` service in detached mode: +- **Nvidia GPU (CUDA support)**: Run the following command to start the `openedai-speech` service in detached mode: ```bash docker compose up -d ``` -* **AMD GPU (ROCm support)**: Run the following command to start the `openedai-speech` service in detached mode: +- **AMD GPU (ROCm support)**: Run the following command to start the `openedai-speech` service in detached mode: ```bash docker compose -f docker-compose.rocm.yml up -d ``` -* **ARM64 (Apple M-series, Raspberry Pi)**: XTTS only has CPU support here and will be very slow. You can use the Nvidia image for XTTS with CPU (slow), or use the Piper only image (recommended): +- **ARM64 (Apple M-series, Raspberry Pi)**: XTTS only has CPU support here and will be very slow. You can use the Nvidia image for XTTS with CPU (slow), or use the Piper only image (recommended): ```bash docker compose -f docker-compose.min.yml up -d ``` -* **CPU only, No GPU (Piper only)**: For a minimal docker image with only Piper support (< 1GB vs. 8GB): +- **CPU only, No GPU (Piper only)**: For a minimal docker image with only Piper support (< 1GB vs. 8GB): ```bash docker compose -f docker-compose.min.yml up -d @@ -125,14 +125,14 @@ This will start the `openedai-speech` service in detached mode. 
You can also use the following Docker run commands to start the `openedai-speech` service in detached mode: -* **Nvidia GPU (CUDA)**: Run the following command to build and start the `openedai-speech` service: +- **Nvidia GPU (CUDA)**: Run the following command to build and start the `openedai-speech` service: ```bash docker build -t ghcr.io/matatonic/openedai-speech . docker run -d --gpus=all -p 8000:8000 -v voices:/app/voices -v config:/app/config --name openedai-speech ghcr.io/matatonic/openedai-speech ``` -* **ROCm (AMD GPU)**: Run the following command to build and start the `openedai-speech` service: +- **ROCm (AMD GPU)**: Run the following command to build and start the `openedai-speech` service: > To enable ROCm support, uncomment the `#USE_ROCM=1` line in the `speech.env` file. @@ -141,7 +141,7 @@ docker build -f Dockerfile --build-arg USE_ROCM=1 -t ghcr.io/matatonic/openedai- docker run -d --privileged --init --name openedai-speech -p 8000:8000 -v voices:/app/voices -v config:/app/config ghcr.io/matatonic/openedai-speech-rocm ``` -* **CPU only, No GPU (Piper only)**: Run the following command to build and start the `openedai-speech` service: +- **CPU only, No GPU (Piper only)**: Run the following command to build and start the `openedai-speech` service: ```bash docker build -f Dockerfile.min -t ghcr.io/matatonic/openedai-speech-min . @@ -155,15 +155,15 @@ docker run -d -p 8000:8000 -v voices:/app/voices -v config:/app/config --name op Open the Open WebUI settings and navigate to the TTS Settings under **Admin Panel > Settings > Audio**. Add the following configuration: -* **API Base URL**: `http://host.docker.internal:8000/v1` -* **API Key**: `sk-111111111` (Note that this is a dummy API key, as `openedai-speech` doesn't require an API key. You can use whatever you'd like for this field, as long as it is filled.) 
+- **API Base URL**: `http://host.docker.internal:8000/v1` +- **API Key**: `sk-111111111` (Note that this is a dummy API key, as `openedai-speech` doesn't require an API key. You can use whatever you'd like for this field, as long as it is filled.) **Step 7: Choose a voice** -------------------------- Under `TTS Voice` within the same audio settings menu in the admin panel, you can set the `TTS Model` to use from the following choices below that `openedai-speech` supports. The voices of these models are optimized for the English language. -* `tts-1` or `tts-1-hd`: `alloy`, `echo`, `echo-alt`, `fable`, `onyx`, `nova`, and `shimmer` (`tts-1-hd` is configurable; uses OpenAI samples by default) +- `tts-1` or `tts-1-hd`: `alloy`, `echo`, `echo-alt`, `fable`, `onyx`, `nova`, and `shimmer` (`tts-1-hd` is configurable; uses OpenAI samples by default) **Step 8: Press `Save` to apply the changes and start enjoying naturally sounding voices** -------------------------------------------------------------------------------------------- @@ -175,28 +175,28 @@ Press the `Save` button to apply the changes to your Open WebUI settings. Refres `openedai-speech` supports multiple text-to-speech models, each with its own strengths and requirements. The following models are available: -* **Piper TTS** (very fast, runs on CPU): Use your own [Piper voices](https://rhasspy.github.io/piper-samples/) via the `voice_to_speaker.yaml` configuration file. This model is great for applications that require low latency and high performance. Piper TTS also supports [multilingual](https://github.com/matatonic/openedai-speech#multilingual) voices. -* **Coqui AI/TTS XTTS v2** (fast, but requires around 4GB GPU VRAM & Nvidia GPU with CUDA): This model uses Coqui AI's XTTS v2 voice cloning technology to generate high-quality voices. While it requires a more powerful GPU, it provides excellent performance and high-quality audio. 
Coqui also supports [multilingual](https://github.com/matatonic/openedai-speech#multilingual) voices. -* **Beta Parler-TTS Support** (experimental, slower): This model uses the Parler-TTS framework to generate voices. While it's currently in beta, it allows you to describe very basic features of the speaker voice. The exact voice will be slightly different with each generation, but should be similar to the speaker description provided. For inspiration on how to describe voices, see [Text Description to Speech](https://www.text-description-to-speech.com/). +- **Piper TTS** (very fast, runs on CPU): Use your own [Piper voices](https://rhasspy.github.io/piper-samples/) via the `voice_to_speaker.yaml` configuration file. This model is great for applications that require low latency and high performance. Piper TTS also supports [multilingual](https://github.com/matatonic/openedai-speech#multilingual) voices. +- **Coqui AI/TTS XTTS v2** (fast, but requires around 4GB GPU VRAM & Nvidia GPU with CUDA): This model uses Coqui AI's XTTS v2 voice cloning technology to generate high-quality voices. While it requires a more powerful GPU, it provides excellent performance and high-quality audio. Coqui also supports [multilingual](https://github.com/matatonic/openedai-speech#multilingual) voices. +- **Beta Parler-TTS Support** (experimental, slower): This model uses the Parler-TTS framework to generate voices. While it's currently in beta, it allows you to describe very basic features of the speaker voice. The exact voice will be slightly different with each generation, but should be similar to the speaker description provided. For inspiration on how to describe voices, see [Text Description to Speech](https://www.text-description-to-speech.com/). 
**Troubleshooting** ------------------- If you encounter any problems integrating `openedai-speech` with Open WebUI, follow these troubleshooting steps: -* **Verify `openedai-speech` service**: Ensure that the `openedai-speech` service is running and the port you specified in the docker-compose.yml file is exposed. -* **Check access to host.docker.internal**: Verify that the hostname `host.docker.internal` is resolvable from within the Open WebUI container. This is necessary because `openedai-speech` is exposed via `localhost` on your PC, but `open-webui` cannot normally access it from inside its container. You can add a volume to the `docker-compose.yml` file to mount a file from the host to the container, for example, to a directory that will be served by openedai-speech. -* **Review API key configuration**: Make sure the API key is set to a dummy value or effectively left unchecked because `openedai-speech` doesn't require an API key. -* **Check voice configuration**: Verify that the voice you are trying to use for TTS exists in your `voice_to_speaker.yaml` file and the corresponding files (e.g., voice XML files) are present in the correct directory. -* **Verify voice model paths**: If you're experiencing issues with voice model loading, double-check that the paths in your `voice_to_speaker.yaml` file match the actual locations of your voice models. +- **Verify `openedai-speech` service**: Ensure that the `openedai-speech` service is running and the port you specified in the docker-compose.yml file is exposed. +- **Check access to host.docker.internal**: Verify that the hostname `host.docker.internal` is resolvable from within the Open WebUI container. This is necessary because `openedai-speech` is exposed via `localhost` on your PC, but `open-webui` cannot normally access it from inside its container. 
You can add a volume to the `docker-compose.yml` file to mount a file from the host to the container, for example, to a directory that will be served by openedai-speech.
+- **Review API key configuration**: Make sure the API key field is filled with a dummy value, since `openedai-speech` doesn't require a real API key.
+- **Check voice configuration**: Verify that the voice you are trying to use for TTS exists in your `voice_to_speaker.yaml` file and the corresponding files (e.g., voice XML files) are present in the correct directory.
+- **Verify voice model paths**: If you're experiencing issues with voice model loading, double-check that the paths in your `voice_to_speaker.yaml` file match the actual locations of your voice models.

**Additional Troubleshooting Tips**
------------------------------------

-* Check the openedai-speech logs for errors or warnings that might indicate where the issue lies.
-* Verify that the `docker-compose.yml` file is correctly configured for your environment.
-* If you're still experiencing issues, try restarting the `openedai-speech` service or the entire Docker environment.
-* If the problem persists, consult the `openedai-speech` GitHub repository or seek help on a relevant community forum.
+- Check the openedai-speech logs for errors or warnings that might indicate where the issue lies.
+- Verify that the `docker-compose.yml` file is correctly configured for your environment.
+- If you're still experiencing issues, try restarting the `openedai-speech` service or the entire Docker environment.
+- If the problem persists, consult the `openedai-speech` GitHub repository or seek help on a relevant community forum.
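A further way to isolate problems is to exercise the OpenAI-compatible speech endpoint directly, independently of Open WebUI. The sketch below builds such a request using only the Python standard library; the base URL, dummy key, model, and voice mirror the settings described in the guide, and the route follows the OpenAI-style `/v1/audio/speech` API that `openedai-speech` serves:

```python
import json
import urllib.request

def build_speech_request(base_url: str, api_key: str, text: str,
                         model: str = "tts-1", voice: str = "alloy"):
    """Build a POST request for an OpenAI-compatible /audio/speech endpoint."""
    body = json.dumps({"model": model, "voice": voice, "input": text}).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/audio/speech",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # dummy key; not validated
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_speech_request("http://localhost:8000/v1", "sk-111111111",
                           "Hello from openedai-speech!")
# With the service running, urllib.request.urlopen(req).read() returns
# audio bytes you can write to a file such as speech.mp3.
```

If this direct request succeeds but Open WebUI still produces no audio, the issue is likely the `host.docker.internal` resolution described above rather than the TTS service itself.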
**FAQ** ------- diff --git a/docs/tutorials/tips/reduce-ram-usage.md b/docs/tutorials/tips/reduce-ram-usage.md index 27b9fdf31a..873338ceb1 100644 --- a/docs/tutorials/tips/reduce-ram-usage.md +++ b/docs/tutorials/tips/reduce-ram-usage.md @@ -19,9 +19,9 @@ Much of the memory consumption is due to loaded ML models. Even if you are using As of v0.3.10 this includes: -* Speech-to-text (whisper by default) -* RAG embedding engine (defaults to local SentenceTransformers model) -* Image generation engine (disabled by default) +- Speech-to-text (whisper by default) +- RAG embedding engine (defaults to local SentenceTransformers model) +- Image generation engine (disabled by default) The first 2 are enabled and set to local models by default. You can change the models in the admin panel (RAG: Documents category, set it to Ollama or OpenAI, Speech-to-text: Audio section, work with OpenAI or WebAPI). If you are deploying a fresh Docker image, you can also set them with the following environment variables: `RAG_EMBEDDING_ENGINE: ollama`, `AUDIO_STT_ENGINE: openai`. Note that these environment variables have no effect if a `config.json` already exists. diff --git a/docs/tutorials/web-search/external.md b/docs/tutorials/web-search/external.md index 49d13046da..db71600d05 100644 --- a/docs/tutorials/web-search/external.md +++ b/docs/tutorials/web-search/external.md @@ -11,9 +11,9 @@ This tutorial is a community contribution and is not supported by the Open WebUI This option allows you to connect Open WebUI to your own self-hosted web search API endpoint. This is useful if you want to: -* Integrate a search engine not natively supported by Open WebUI. -* Implement custom search logic, filtering, or result processing. -* Use a private or internal search index. +- Integrate a search engine not natively supported by Open WebUI. +- Implement custom search logic, filtering, or result processing. +- Use a private or internal search index. 
### Open WebUI Setup

@@ -31,11 +31,11 @@ This option allows you to connect Open WebUI to your own self-hosted web search

Open WebUI will interact with your `External Search URL` as follows:

-* **Method:** `POST`
-* **Headers:**
-  * `Content-Type: application/json`
-  * `Authorization: Bearer `
-* **Request Body (JSON):**
+- **Method:** `POST`
+- **Headers:**
+  - `Content-Type: application/json`
+  - `Authorization: Bearer <your-api-key>`
+- **Request Body (JSON):**

```json
{
@@ -44,10 +44,10 @@ Open WebUI will interact with your `External Search URL` as follows:
}
```

-  * `query` (string): The search term entered by the user.
-  * `count` (integer): The suggested maximum number of results Open WebUI expects. Your API can return fewer results if necessary.
+  - `query` (string): The search term entered by the user.
+  - `count` (integer): The suggested maximum number of results Open WebUI expects. Your API can return fewer results if necessary.

-* **Expected Response Body (JSON):**
+- **Expected Response Body (JSON):**
Your API endpoint *must* return a JSON array of search result objects. Each object should have the following structure:

```json
@@ -66,9 +66,9 @@ Open WebUI will interact with your `External Search URL` as follows:
]
```

-  * `link` (string): The direct URL to the search result.
-  * `title` (string): The title of the web page.
-  * `snippet` (string): A descriptive text snippet from the page content relevant to the query.
+  - `link` (string): The direct URL to the search result.
+  - `title` (string): The title of the web page.
+  - `snippet` (string): A descriptive text snippet from the page content relevant to the query.

If an error occurs or no results are found, your endpoint should ideally return an empty JSON array `[]`.
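The contract above is small enough to satisfy with the Python standard library alone. The following sketch of a conforming endpoint returns canned results; the bearer token and result data are placeholders, and a real implementation would replace `search()` with a call to your actual search backend:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

EXPECTED_TOKEN = "change-me"  # must match the API key configured in Open WebUI

def search(query: str, count: int) -> list:
    """Stand-in for a real search backend; returns the documented shape."""
    results = [{
        "link": "https://example.com/",
        "title": f"Result for {query}",
        "snippet": f"A snippet mentioning {query}.",
    }]
    return results[:count]

class SearchHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Reject requests without the expected bearer token.
        if self.headers.get("Authorization", "") != f"Bearer {EXPECTED_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(search(payload["query"], payload.get("count", 5))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8888):
    """Run the endpoint; point `External Search URL` at http://host:8888/."""
    HTTPServer(("0.0.0.0", port), SearchHandler).serve_forever()
```

On errors or empty results the handler still returns a JSON array (possibly `[]`), matching the note above.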
diff --git a/docs/tutorials/web-search/searxng.md b/docs/tutorials/web-search/searxng.md
index 75abc2542c..db2c360497 100644
--- a/docs/tutorials/web-search/searxng.md
+++ b/docs/tutorials/web-search/searxng.md
@@ -381,10 +381,10 @@ docker exec -it open-webui curl http://host.docker.internal:8080/search?q=this+i

3. Set `Web Search Engine` from dropdown menu to `searxng`
4. Set `Searxng Query URL` to one of the following examples:

-* `http://searxng:8080/search?q=` (using the container name and exposed port, suitable for Docker-based setups)
-* `http://host.docker.internal:8080/search?q=` (using the `host.docker.internal` DNS name and the host port, suitable for Docker-based setups)
-* `http:///search?q=` (using a local domain name, suitable for local network access)
-* `https:///search?q=` (using a custom domain name for a self-hosted SearXNG instance, suitable for public or private access)
+- `http://searxng:8080/search?q=` (using the container name and exposed port, suitable for Docker-based setups)
+- `http://host.docker.internal:8080/search?q=` (using the `host.docker.internal` DNS name and the host port, suitable for Docker-based setups)
+- `http://<local-domain>/search?q=` (using a local domain name, suitable for local network access)
+- `https://<your-domain>/search?q=` (using a custom domain name for a self-hosted SearXNG instance, suitable for public or private access)

**Do note the `/search?q=` part is mandatory.**

diff --git a/docs/tutorials/web-search/yacy.md b/docs/tutorials/web-search/yacy.md
index bee88cde0f..e812fbab7c 100644
--- a/docs/tutorials/web-search/yacy.md
+++ b/docs/tutorials/web-search/yacy.md
@@ -16,11 +16,11 @@ This tutorial is a community contribution and is not supported by the Open WebUI

3. Set `Web Search Engine` from dropdown menu to `yacy`
4.
Set `Yacy Instance URL` to one of the following examples:

-  * `http://yacy:8090` (using the container name and exposed port, suitable for Docker-based setups)
-  * `http://host.docker.internal:8090` (using the `host.docker.internal` DNS name and the host port, suitable for Docker-based setups)
-  * `https://:8443` (using a local domain name, suitable for local network access)
-  * `https://yacy.example.com` (using a custom domain name for a self-hosted Yacy instance, suitable for public or private access)
-  * `https://yacy.example.com:8443` (using https over the default Yacy https port)
+  - `http://yacy:8090` (using the container name and exposed port, suitable for Docker-based setups)
+  - `http://host.docker.internal:8090` (using the `host.docker.internal` DNS name and the host port, suitable for Docker-based setups)
+  - `https://<local-domain>:8443` (using a local domain name, suitable for local network access)
+  - `https://yacy.example.com` (using a custom domain name for a self-hosted Yacy instance, suitable for public or private access)
+  - `https://yacy.example.com:8443` (using HTTPS over the default Yacy HTTPS port)

5. Optionally, enter your Yacy username and password if authentication is required for your Yacy instance. If both are left blank, digest authentication will be skipped
6. Press save
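For the SearXNG `Searxng Query URL` above, it can help to sanity-check a candidate URL before saving it: the search term is URL-encoded and placed after the mandatory `q=`. A small Python sketch of the equivalent substitution (the hostnames are the examples from the guide; how Open WebUI performs the substitution internally is assumed, not quoted from its source):

```python
from urllib.parse import quote_plus

def build_search_url(query_url: str, query: str) -> str:
    """Append a URL-encoded search term to a `.../search?q=`-style query URL."""
    if not query_url.endswith("q="):
        raise ValueError("query URL should end with the mandatory `/search?q=` part")
    return query_url + quote_plus(query)

url = build_search_url("http://searxng:8080/search?q=", "open webui docs")
print(url)  # -> http://searxng:8080/search?q=open+webui+docs
```

Fetching the built URL with `curl` from inside the `open-webui` container, as shown earlier in the SearXNG guide, confirms the instance is reachable before you save the setting.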