fix: add missing beautifulsoup4 to backend requirements
Adds and harmonizes French translations across the entire UI:
- Translate admin pages (Images, connections, models, etc.)
- Harmonize API key/URL field translations
- Make "successfully" translations consistent
- Add missing translations (feedback, file, model selector)
- Fix typos and improve existing translations
- DOCX: mammoth converts to semantic HTML (prose preview)
- XLSX: xlsx library extended to FileNav with sheet tabs at bottom
- PPTX: custom canvas renderer produces PNG images per slide with panzoom zoom/pan and slide navigation

Changes:
- New: src/lib/utils/pptxToHtml.ts (canvas-based PPTX renderer)
- FileNav.svelte: office format detection, blob download, conversion
- FilePreview.svelte: office rendering branches, sheet tabs, slide viewer
- FileItemModal.svelte: DOCX/PPTX preview tabs
- package.json: added mammoth dependency
XLSX QoL:
- Custom table renderer (excelToTable.ts) with column letters, row numbers, right-aligned numbers, empty cell handling
- Monospace font, sticky headers + row nums, cell cursor
- Sheet tabs moved to bottom bar (like PPTX navigation)
- Unified styles between FileNav and FileItemModal

Code highlighting:
- Shiki-based syntax highlighting for code files in FileNav
- Line numbers, dark/light theme support
- Source/Preview toggle for code files
…eNav
- Add Shiki-powered syntax highlighting for code files with dual light/dark themes (github-light/github-dark), line numbers, and source/preview toggle
- Add native <video> player for mp4, webm, mov, ogv, avi, mkv
- Add native <audio> player for mp3, wav, ogg, flac, m4a, aac, opus
- New utility: src/lib/utils/codeHighlight.ts with extension-to-lang mapping using Shiki's bundled language registry
…e toggle
- New JsonTreeView component with recursive collapsible nodes, auto-expand depth, and GitHub-themed dark mode colors
- JSON/JSONC/JSON5 files show tree view by default, toggle to Shiki-highlighted source
- SVG files show rendered preview (DOMPurify-sanitized) by default, toggle to Shiki-highlighted XML source
- SVG removed from IMAGE_EXTS to enable text-based preview
- YAML/TOML already covered by Shiki bundled languages
- New NotebookView component renders markdown cells (marked+DOMPurify), code cells (Shiki-highlighted with execution count gutter), and outputs (text, HTML tables, base64 images, error tracebacks)
- ANSI escape codes stripped from error output
- Source toggle shows raw JSON
- Dark mode support throughout
* Update main.py
* Update env.py
* Update main.py
* Update env.py
- Move refresh button out of directory-only block in FileNavToolbar
- When viewing a file, refresh reloads that file's content
- When in directory view, refresh reloads the listing (unchanged)
- New SqliteView component with table tabs, paginated data view (100 rows/page), SQL query editor (Cmd+Enter), NULL/BLOB formatting, sticky column headers, and dark mode
- Supports .db, .sqlite, .sqlite3, .db3 extensions
- Uses sql.js WASM served locally from /sql.js/sql-wasm.wasm
- Also fixes display_file handling when another file is already open
…un_command

Backend emits terminal events for write_file, replace_file_content, and run_command. Frontend showFileNavDir subscriber uses startsWith path matching to smartly refresh only when the event is relevant:
- write_file/replace_file_content: refresh if path is in current view
- run_command: always refresh (uses root '/' which matches everything)
- Also adds copy-to-clipboard button and code preview full-height fix
* perf: pre-compile KaTeX Unicode regex at module load time
The katexStart() function was creating a new RegExp with Unicode
property escapes (\p{Script=Han}, \p{Script=Hiragana}, etc.) on
every invocation. Unicode property escapes are extremely expensive
to compile as the regex engine must build character class tables
covering tens of thousands of code points.
Since marked calls the start() function at every character position
while scanning source text, this meant hundreds of regex compilations
per marked.lexer() call, and lexer runs ~60 times/sec during streaming.
Profiling showed KaTeX regex consuming 87% (320ms/365ms) of total
markdown rendering time.
Changes:
- Pre-compile SURROUNDING_CHARS_REGEX once at module load time
- Use .test() instead of .match() to avoid array allocations
- Fix delimiter search to find earliest match, not last match
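As a hedged illustration of the "compile once at module load" change, here is the same pattern transcribed to Python (the real fix is in TypeScript's katex-extension.ts). Python's stdlib `re` does not support Unicode property escapes like `\p{Script=Han}`, so a plain ASCII character class stands in for the real pattern; the function and constant names are illustrative.

```python
import re

# Compiled exactly once at module import, not on every call to the hot
# path. The real SURROUNDING_CHARS_REGEX uses Unicode property escapes,
# which are far more expensive to compile than this stand-in class.
SURROUNDING_CHARS_REGEX = re.compile(r"[\s$(){}\[\],;:!?]")


def is_valid_boundary(ch: str) -> bool:
    """Boolean membership test, analogous to RegExp.test() in JS.

    Returning a plain bool (instead of a match/array result) avoids
    allocating objects that the caller immediately discards.
    """
    # Start/end of string counts as a valid boundary.
    return ch == "" or SURROUNDING_CHARS_REGEX.fullmatch(ch) is not None
```

The key property is that the expensive compilation happens once per process, while the per-call work is a cheap cached-pattern match.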
* perf: replace katexStart with single-pass character scan
The katexStart() function was the dominant cost in marked's lexer,
consuming 55-58% of total markdown rendering time per profiling.
It was called at every character position by marked and each call:
- Looped through 3-5 delimiters, each doing indexOf() on the full
remaining source (3-5 x O(n) string scans per call)
- Ran the complex ruleReg regex with Unicode lookaheads for validation
- On failed validation, created substrings and looped again
Replace with a single linear character scan using charCodeAt that:
- Checks only for $ (charCode 36) or backslash (charCode 92)
- Filters backslash hits by next character to avoid false positives
- Preserves the surrounding-character validation
- Returns immediately on first valid candidate
- Lets the tokenizer handle full validation (it already does this)
This reduces start() from O(n * delimiters * retries) to O(n) with
a very small constant factor per call.
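The scan described above can be sketched in Python roughly as follows (the original is TypeScript; `find_katex_start` and the candidate filtering here are an illustrative transcription, not the exact implementation):

```python
DOLLAR = 36      # ord("$")
BACKSLASH = 92   # ord("\\")


def find_katex_start(src: str) -> int:
    """Return the index of the first plausible math delimiter, or -1.

    One linear pass checking only two character codes, instead of
    several indexOf() scans plus a heavy regex at every position.
    """
    for i, ch in enumerate(src):
        code = ord(ch)
        if code == DOLLAR:
            return i
        if code == BACKSLASH:
            # Filter backslash hits by the next character so sequences
            # like \n or \t are not treated as math; only \( and \[
            # open inline/display math.
            nxt = src[i + 1] if i + 1 < len(src) else ""
            if nxt in "([":
                return i
    return -1
```

Full validation is still left to the tokenizer; the scan only has to be cheap and never miss a real candidate.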
* Update katex-extension.ts
- Add notebook API functions (createNotebookSession, executeNotebookCell, stopNotebookSession)
- Create CellEditor component with CodeMirror for cell editing
- Rewrite NotebookView with session-based execution, Run All, Restart, Stop
- Kernel status indicator with tooltips
- Wire baseUrl/apiKey through FilePreview and FileNav
Added and updated translations.
…#22304)

codeHighlight.ts had a top-level static import of shiki that pulled the entire highlighter engine (~5-10MB of JavaScript including all language grammars) into any page that imported the module - even if only the lightweight isCodeFile() function was used.

Replace the static shiki import with:
- A static set of ~85 common language IDs for synchronous extension checks (isCodeFile, extToLang) - no shiki dependency needed
- A dynamic import('shiki') inside highlightCode(), which is already async so callers are completely unaffected

The static language set covers all commonly-used file extensions. Obscure extensions not in the set simply won't be detected by isCodeFile() (the file still opens fine, just won't show the code file indicator). Highlighting itself still works for all shiki languages since the full bundle loads on demand.
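The same split can be sketched in Python terms: a cheap static table serves the synchronous check, and the heavy dependency is imported only inside the function that needs it. Names below are illustrative, and `html` merely stands in for a heavy highlighter library.

```python
# Static table: enough for the fast, synchronous extension check, with
# no heavy import at module load time.
CODE_EXTENSIONS = {"py", "ts", "js", "rs", "go", "rb", "java", "c", "cpp"}


def is_code_file(filename: str) -> bool:
    # Callers that only need this check never pay for the highlighter.
    return filename.rsplit(".", 1)[-1].lower() in CODE_EXTENSIONS


def highlight_code(source: str) -> str:
    # Deferred import: the expensive module loads only when highlighting
    # is actually requested (stdlib `html` is a lightweight stand-in).
    import html
    return f"<pre>{html.escape(source)}</pre>"
```

The trade-off mirrors the commit: extensions missing from the static set are simply not flagged as code files, while highlighting itself still supports everything the full library does.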
`raise "string"` in Python raises TypeError instead of the intended error, making error messages confusing and debugging difficult. Co-authored-by: gambletan <ethanchang32@gmail.com> Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
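The bug class is easy to reproduce: in Python 3, raising a bare string is itself a TypeError ("exceptions must derive from BaseException"), which masks the intended message. A minimal before/after sketch (function names are illustrative):

```python
def broken(detail: str):
    # Bug: raising a str raises TypeError, hiding the real message.
    raise "something went wrong: " + detail


def fixed(detail: str):
    # Fix: raise a proper exception type carrying the message.
    raise RuntimeError("something went wrong: " + detail)
```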
…ing into sidebar (#22454)

getTimeRange returns month names that are used as i18n translation keys (consumed via .t(chat.time_range) in the sidebar, search modal, etc.). The keys must be exact English strings like 'January', 'February', etc.

Previously, toLocaleString('default', { month: 'long' }) was used to generate these keys. The 'default' locale defers to the browser's locale resolution, which in Firefox with intl.regional_prefs.use_os_locales=true picks up OS regional settings instead of the browser language. This caused German month names (e.g. 'Februar', 'Januar') to appear in the sidebar for users whose OS region is set to Germany, even when both browser and app language are set to English. Chrome was unaffected because it ignores OS regional settings for the 'default' locale. Since i18n has no translation key for 'Februar', the German string passed through untranslated.

Replace toLocaleString with a static MONTH_NAMES array lookup to make the intent explicit and eliminate any browser/OS locale dependency.
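The fix itself is just a locale-independent table lookup; transcribed to Python (the actual change is in the frontend, and `month_key` is an illustrative name):

```python
# Static table: the i18n keys are always the exact English month names,
# regardless of any browser or OS locale settings.
MONTH_NAMES = [
    "January", "February", "March", "April", "May", "June",
    "July", "August", "September", "October", "November", "December",
]


def month_key(month_index: int) -> str:
    """month_index is 0-based, matching JavaScript's Date.getMonth()."""
    return MONTH_NAMES[month_index]
```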
…22445) In both inlet and outlet filter processing, response.json() was called BEFORE response.raise_for_status(). When a filter endpoint returns an HTTP error, the user's chat payload gets silently overwritten with the error response body. If the error is not caught, the corrupted payload propagates through subsequent filters and into the chat completion. Swapped the order so raise_for_status() runs first — payload is only updated on success. Co-authored-by: gambletan <ethanchang32@gmail.com>
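A minimal sketch of the ordering fix, with a stub standing in for an HTTP response object that has requests-style raise_for_status()/json() methods (the class, endpoint behavior, and payload shape are all illustrative):

```python
class FakeResponse:
    """Stand-in for an HTTP response from a filter endpoint."""

    def __init__(self, status_code: int, body: dict):
        self.status_code = status_code
        self._body = body

    def raise_for_status(self):
        if self.status_code >= 400:
            raise RuntimeError(f"HTTP {self.status_code}")

    def json(self) -> dict:
        return self._body


def apply_filter(payload: dict, response: FakeResponse) -> dict:
    # Fixed order: fail fast on HTTP errors, so an error response body
    # can never silently replace the user's chat payload.
    response.raise_for_status()
    return response.json()
```

With the old order (json() first), a 500 response body would have become the new payload before the status check ever ran.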
…2444) json.loads(event_data.get("user", {})) crashes with TypeError when the "user" key is absent because the default value {} is a dict, not a JSON string. json.loads expects str/bytes, not dict. Also handle the case where "user" is already a dict (not serialized JSON) to make the webhook more robust. Co-authored-by: gambletan <ethanchang32@gmail.com>
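The defensive parse described above can be sketched as follows (`parse_event_user` is an illustrative name; the three-way handling matches the commit's description):

```python
import json


def parse_event_user(event_data: dict) -> dict:
    user = event_data.get("user")
    if user is None:
        # Key absent: don't feed a dict default into json.loads,
        # which only accepts str/bytes.
        return {}
    if isinstance(user, dict):
        # Already deserialized: pass through unchanged.
        return user
    # Serialized JSON string: decode it.
    return json.loads(user)
```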
- URL-encodes the OAuth error message when constructing the redirect URL in the OIDC callback handler
- Without encoding, error messages containing spaces, ampersands, or other special characters produce malformed URLs that the frontend cannot parse correctly
- The custom OAuth client callback handler already correctly uses urllib.parse.quote_plus() for the same purpose; this fix brings the OIDC handler in line with that pattern

Co-authored-by: gambletan <tan@gambletan.com>
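A small sketch of the encoding pattern with quote_plus; the redirect path and query parameter name below are illustrative, not the exact ones used by the OIDC handler:

```python
from urllib.parse import quote_plus


def build_error_redirect(base_url: str, error_message: str) -> str:
    # quote_plus escapes spaces as '+' and reserved characters like '&'
    # as percent-encodings, so the message survives as a single query
    # parameter value.
    return f"{base_url}/auth?error={quote_plus(error_message)}"
```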
The locale for Azure TTS SSML was being extracted with `split("-")[:1]`,
which only takes the first segment (e.g., "en" from "en-US"). The
xml:lang attribute in SSML requires a full locale like "en-US", not just
a language code. This caused Azure TTS to either fail or use incorrect
pronunciation rules.
Changed `[:1]` to `[:2]` to properly extract the locale (e.g., "en-US").
Co-authored-by: gambletan <ethanchang32@gmail.com>
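The slice fix shown standalone, assuming the locale is derived by splitting on "-" and rejoining (the helper name and the voice-string input shape are assumptions for illustration):

```python
def ssml_locale(voice_locale: str) -> str:
    # [:1] would keep only the language subtag ("en"); [:2] keeps
    # language + region ("en-US"), which SSML's xml:lang requires.
    return "-".join(voice_locale.split("-")[:2])
```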
The get_token_usage_by_user query lacked group_id filtering, while the companion get_message_count_by_user query already supported it. When an admin filtered analytics by user group, message counts were correctly scoped to the group but token usage totals included data from all users. Add the group_id parameter and subquery filter to get_token_usage_by_user, matching the pattern used by get_message_count_by_user and other analytics queries, and pass group_id through from the analytics endpoint.
Add on:error handler to img
Previously, QueuedMessageItem only rendered text content and ignored the files array, causing queued messages with only images to appear blank. Now passes files to the component and renders image thumbnails and file name indicators inline. Fixes #22256
… templates
Add |middletruncate:n, |start:n, and |end:n pipe filters to the
{{MESSAGES}} template variable, enabling per-message character
truncation for task models (title, tags, follow-up, etc.).
Example: {{MESSAGES:END:2|middletruncate:500}}
This optimizes task model prompt size for conversations with very
long messages (e.g. pasted documents), reducing latency for local
models and API costs.
Closes #21499
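A hedged sketch of what a middle-truncation filter like |middletruncate:n might do: keep the head and tail of the message and elide the middle. The exact ellipsis marker and head/tail split used by the real filter are assumptions.

```python
def middletruncate(text: str, n: int) -> str:
    """Truncate text to at most n characters, eliding the middle."""
    if len(text) <= n:
        return text
    # Reserve 3 characters for the "..." marker, split the rest
    # between the head and the tail.
    half = (n - 3) // 2
    return text[:half] + "..." + text[len(text) - (n - 3 - half):]
```

Head-and-tail truncation suits task prompts because the opening and closing of a long message usually carry the most signal for titles and tags.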
* Add v0.8.10 changelog entry
* changelog: docker startup fix for missing opentelemetry dependency
* changelog: add translation updates for v0.8.10
* changelog: oauth-error-handling, exception-messages
* changelog: fix YAML file processing with Docling (#22399)
* changelog: tool access fix for non-admin users
* changelog: fix time range month names localization
* changelog: pipeline filter, webhook crash, shutdown handling
* changelog: tool method filtering, OAuth URL encoding, Azure TTS
* changelog: add analytics group filtering fix
* changelog: api-calls, optimization, performance
* changelog: add MariaDB Vector support entry
* changelog: add web search favicon fallback fix
* changelog: custom model fallback fix
* changelog: pending message image display fix (#22256)
* changelog: task message truncation for title and tag generation
* changelog: oidc, logout, custom-endpoint
* changelog: files list stability fix for issue #21879
* Remove empty Changed section from v0.8.10
* changelog: file metadata sanitization fix
* Remove empty Changed section from v0.8.10
* changelog: fix knowledge file embedding updates for RAG
* changelog: add Azure speech transcription error fix