feat: package as executable (desktop app) #470
More options for deployment are always a great idea. Packaging on macOS can be a bit of a chore; you'll need a $99/yr developer account to create signed/notarized binaries. Not sure whether you need to pay for signing Windows binaries, but unsigned ones are at least easier for the user to bypass.
While it seems impractical on macOS, that isn't the case on Windows, Linux, Android, etc. I believe there is a way to package for multiple platforms using Flutter, though that would probably mean creating a sibling Flutter project. It lets you create desktop apps and, according to the Flutter docs, iOS apps as well. I tried it in the past but didn't have a Mac; it works on all platforms though. The Snap Store also works for distributing executables; I've tried it on some Apple devices and it worked.
On the packaging front, I am actively looking into Flatpak.

Manifest:

```yaml
app-id: org.ollama-webui.Ollama-WebUI
runtime: org.freedesktop.Platform
runtime-version: '22.08'
sdk: org.freedesktop.Sdk
command: start-webui.sh
finish-args:
  - --share=ipc
  - --socket=x11
  - --socket=wayland
  - --share=network
  - --filesystem=home
  - --device=dri
  - --env=ENV=prod
  - --env=SCARF_NO_ANALYTICS=true
  - --env=DO_NOT_TRACK=true
modules:
  - name: nodejs
    buildsystem: simple
    build-commands:
      - npm install
      - npm run build
      - mkdir -p /app/frontend
      - cp -r build/* /app/frontend/
    sources:
      - type: git
        url: https://github.com/ollama-webui/ollama-webui
        tag: main # Specify the correct branch or tag here
      - type: archive
        url: https://chroma-onnx-models.s3.amazonaws.com/all-MiniLM-L6-v2/onnx.tar.gz
        dest: /app/data/onnx_models
  - name: python-backend
    buildsystem: simple
    build-commands:
      - pip3 install --no-cache-dir torch torchvision torchaudio -f https://download.pytorch.org/whl/cpu
      - pip3 install --no-cache-dir -r backend/requirements.txt
      - install -D backend/start.sh /app/bin/start-webui.sh
      - cp -r backend/* /app/backend/
      - cp -a /app/data/onnx_models /root/.cache/chroma/onnx_models
    sources:
      - type: git
        url: https://github.com/ollama-webui/ollama-webui
        tag: main # Specify the correct branch or tag here
    post-install:
      - mkdir -p /app/bin
      - echo -e '#!/bin/sh\nexec /app/backend/start.sh' > /app/bin/start-webui.sh
      - chmod +x /app/bin/start-webui.sh
```

Workflow:

```yaml
name: Build and Release Flatpak

on:
  push:
    tags:
      - 'v*'

permissions:
  contents: write
  id-token: write

jobs:
  flatpak-build-and-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install Flatpak and Flatpak Builder
        run: |
          sudo apt-get update -y
          sudo apt-get install -y flatpak flatpak-builder

      - name: Add Flathub repository
        run: flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

      - name: Build Flatpak application
        run: flatpak-builder --force-clean build-dir org.ollama.WebUI.yaml

      - name: Create bundle
        run: flatpak build-bundle build-dir ollama-webui.flatpak org.ollama.WebUI

      - name: Create Release and Upload Asset
        uses: softprops/action-gh-release@v1
        with:
          name: ${{ github.ref_name }}
          body: |
            ## Release ${{ github.ref_name }} of Ollama WebUI
            ### 🚀 New Features
            - List new features here
            - Improvements or bug fixes
            ### 📦 Installation Instructions
            To install Ollama WebUI on your system, follow these steps:
            1. Ensure Flatpak is installed on your system.
            2. Download `ollama-webui.flatpak`.
            3. Install the application using `flatpak install ollama-webui.flatpak`.
          tag_name: ${{ github.ref_name }}
          files: |
            ollama-webui.flatpak
          token: ${{ secrets.GITHUB_TOKEN }}
          draft: true # Adjust these as preferred
          prerelease: true # Adjust these as preferred
```

I'll wait until after the rename to start the PR.
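Assuming the manifest is saved as `org.ollama.WebUI.yaml` (the filename the workflow expects), it can be test-built and installed locally before wiring up CI; this is a sketch of the usual flatpak-builder flow, not project-specific tooling:

```shell
# Add Flathub (for the runtime/SDK), then build the manifest and
# install the result for the current user in one step.
flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak-builder --user --install --force-clean build-dir org.ollama.WebUI.yaml

# Launch the installed app; the ID passed here must match the
# manifest's app-id.
flatpak run org.ollama.WebUI
```

One thing worth reconciling: the manifest declares `app-id: org.ollama-webui.Ollama-WebUI` while the workflow bundles `org.ollama.WebUI` — those need to agree for the build and `flatpak run` to work.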
We should also have something like this script from oobabooga/text-generation-webui, so that users can install without Docker with one command as well, streamlining the installation process: https://github.com/oobabooga/text-generation-webui/blob/main/start_linux.sh
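For reference, a minimal sketch of what such a one-command launcher could look like (the `backend/requirements.txt` and `backend/start.sh` paths are assumptions based on the repository layout discussed in this thread):

```shell
# Hypothetical start_linux.sh: bootstrap a virtualenv on first run,
# install backend dependencies, then hand off to the backend launcher.
set -eu

VENV_DIR="${VENV_DIR:-.venv}"

if [ ! -d "$VENV_DIR" ]; then
    echo "First run: creating virtual environment in $VENV_DIR"
    python3 -m venv "$VENV_DIR"
fi

# Activate the venv so pip and python resolve to the isolated copies.
. "$VENV_DIR/bin/activate"
pip install -r backend/requirements.txt
exec bash backend/start.sh
```

Subsequent runs skip the venv creation, so startup is fast after the first install.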
I had thought of doing exactly this; I've got macOS and Linux mostly hammered out already for my own purposes. Since we're on the same page I'll get those polished up and ready for PR too 👍
We might also want to look into providing an installation option using
One can use pipx; it's far better than pip, which often interferes with whatever else you have installed on the device.
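If the project publishes a wheel to PyPI, the pipx route could be as simple as the following (the `open-webui` package name and `serve` entry point are assumptions here, not confirmed publishing details):

```shell
# Install pipx itself, then make sure its bin dir is on PATH.
python3 -m pip install --user pipx
python3 -m pipx ensurepath

# pipx installs the app into its own isolated venv, so it can't
# interfere with other Python packages on the machine.
pipx install open-webui   # hypothetical PyPI package name
open-webui serve          # hypothetical console entry point
```

This gives users a single install command with none of pip's dependency-collision problems.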
Hey guys, I created one: https://x.com/cocktailpeanut/status/1763254738177462672 Basically I work on a project called Pinokio, which is sort of like a browser for automating anything on your computer; it can be used for installing, running, and managing AI apps in native format (no need to mess with terminal stuff). I became a fan of this project lately and have been using it daily, so I decided to write a 1-click launcher script for it. Hope you enjoy.
That's awesome @cocktailpeanut, glad to see you've kept busy. Thanks for the shoutout! 🫶
There are large issues when it comes to Windows and pretty much all Python packagers, especially with single-file packages. Windows Defender flags everything that isn't signed (see: extortion). If Microsoft could get away with not allowing you to install anything third-party they haven't approved, they definitely would. See S Mode.
Here it is, a one-click installation. Content of the `open-webui.bat`: https://pastebin.com/527wvn0k If you want to enter the existing venv and make changes, you can make a `cmd_venv.bat` (or a name you like) and start it: https://pastebin.com/wNArfua2 I hope it will help.
@nightboysfm Looks promising, feel free to make a PR!
This will be an excellent milestone. If you can deliver a desktop app, or even an Android app, it could change the situation for LLM API applications. Most of the excellent open-source models lack good clients, but maybe this project can bring hope by quickly putting open-source models in the hands of everyone who can only click an EXE or APK file.
How about forking Ollama, reusing all the existing packaging already in place for macOS, Windows, and Linux, and bundling Open WebUI next to Ollama?
Packaging isn't the hard part; doing it properly with certificate signing for the various platforms is the hairy part everyone wishes to avoid.
I'd include certificate signing in the scope of packaging. Ollama has it figured out technically and it is obviously working (https://github.com/ollama/ollama/blob/main/.github/workflows/release.yaml), and the overlap between users installing the Ollama desktop app and Open WebUI must be large. Agreed that organizationally there are other challenges in setting up and maintaining the various developer programs/registrations; if that is the hairy part you mean, I agree. But one route is to be officially shipped with the Ollama desktop app, using Ollama's certificates.
While I'd love nothing more than the ultimate Ollama x WebUI collab of having them include us in their installer packages, I highly doubt they'd go for that, and I'd probably understand their reasons why. Indeed you are correct: the main sticking point is managing the various developer accounts and credentials required by the platform owners, which are also not without cost.
How about a bring-your-own-credentials model, where it is made easy to package and sign executables/desktop apps, but by default they are not signed? (Ollama does this by only signing when the SIGN=1 env var is set while running the bundling scripts.) This way, anyone who needs signed artifacts (e.g. for distribution within their org) can do it themselves by following simple instructions.
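A minimal sketch of that pattern, loosely modeled on Ollama's `SIGN=1` convention (the artifact path, the `SIGN_IDENTITY` variable, and the commented-out `codesign` call are placeholders, not project tooling):

```shell
# Bring-your-own-credentials signing: the bundling script only signs
# when SIGN=1 is set; otherwise it ships an unsigned artifact.
APP="${APP:-dist/open-webui.app}"

if [ "${SIGN:-0}" = "1" ]; then
    # Fail early if no signing identity was provided.
    : "${SIGN_IDENTITY:?set SIGN_IDENTITY to your signing certificate}"
    echo "Signing $APP"
    # macOS example:
    # codesign --deep --force --options runtime --sign "$SIGN_IDENTITY" "$APP"
    RESULT=signed
else
    echo "SIGN not set; producing unsigned artifact $APP"
    RESULT=unsigned
fi
```

The default path stays friction-free for contributors, while orgs that need signed builds export two variables and rerun the same script.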
As I mentioned before, this is undoubtedly the best open-source UI I have ever experienced on GitHub. However, for most people new to Docker, the installation process can be a nightmare. Even with detailed instructions, they still find the CLI too technical. Over the past few days, I recommended Open WebUI to several friends, but they all complained about the lack of a straightforward installation method. They preferred AnythingLLM because it only requires downloading and clicking to install, without the need for WSL, Docker, or similar tools. This proves that a simple installation process is a crucial part of the user experience. I suggest we offer installation options similar to AnythingLLM's, such as those described here: https://docs.useanything.com/installation/desktop/windows. We could add similar instructions to our installation guide.
I understand this is not a perfect solution, but it at least provides more options for users and helps improve their overall experience. As @muhanstudio said, "quickly bring the open-source model to everyone who only clicks the EXE file or the APK file." Isn't user-friendliness one of the most important features of open-webui? I sincerely hope this project continues to thrive and bring a better experience to more people. Once again, thank you to the development team and all the contributors for your hard work.
My friends use Podman and haven't heard of Docker (surprisingly), so the Docker-only instructions confused them.
I have packaged a Windows executable for open-webui; those who need it can download it from the link below. It also supports modifying the .env file and is suitable for Windows x64. https://github.com/zhouxihong1/open-webui/releases/download/V0.3.10/start_open_webui.7z
Ideally we would want something like Ollama
Maybe we could use https://pyinstaller.org/en/stable/
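A starting point with PyInstaller might look like this (the entry-point path and the frontend data-dir mapping are assumptions about the repo layout; note that `--add-data` uses `src:dest` syntax on Linux/macOS but `src;dest` on Windows):

```shell
pip install pyinstaller

# Bundle the Python backend plus the built frontend assets into a
# single self-contained executable.
pyinstaller --onefile --name open-webui \
    --add-data "build:frontend" \
    backend/main.py

# The resulting binary lands in dist/open-webui.
```

This sidesteps Docker entirely, though as noted above, unsigned single-file executables tend to trip Windows Defender.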