## 2. Installation
- Hardware: an NVIDIA GPU with at least 4 GB of VRAM; 8 GB is recommended.
First, if you have some programming background, it is highly recommended to follow the development installation below to get the latest version and the full Node.js and Python capabilities. However, if you are new and just want to get standalone AI image generation started, follow the steps below.
- Download the Windows installer/executable from https://github.com/fvsionai/fvsion/releases.
- Make sure you have all 3 binary zip files (`fvsion.zip.z001`, `.z002`, `.z003`) in the same folder. You can unzip them with WinZip or 7-Zip (https://www.7-zip.org/download.html).
- Your unzipped folder should look as follows:
- Download the diffusers models as per the instructions below.
- Download the upscaler models by clicking on `models/download_upscaler_model.bat`.
- Run the UI and Python engine by double-clicking `start_app.cmd`.
- You are ready to generate your image.
- Linux is currently supported via the development installation below. Help creating a Docker image is welcome.
- macOS is not yet supported.
- Make sure you have Git, Python 3.10.7 with virtualenv, and Node.js >= 18.9.1 installed.
- Clone the repository via `git clone https://github.com/fvsionai/fvsion.git <folder-name>`, replacing `<folder-name>` with your desired folder name.
- Navigate to the newly created folder: `cd <folder-name>`
- Download the diffusers model as per the instructions in [Diffusers Model Download](#diffusers-model-download) below.
- Make sure you are in the root folder, then create a local virtualenv: `python -m virtualenv .venv`
- Activate the virtualenv: `.venv/scripts/activate` on Windows, or `source .venv/bin/activate` on Linux.
- Update pip: `python -m pip install --upgrade pip`
- Install the Python requirements: `pip install -r py/requirements.txt`
- Run `python py/main.py` and go to http://localhost:4242/docs to confirm the server is running. You can start generating images by interacting directly with the built-in Swagger API. Press `CTRL + C` in the terminal to close the server when you are done.
- For a proper front end, install the npm requirements: `npm i`
- Run the development server via `npm run dev`
- (Optional) You can run `npm run build` to generate standalone exe files. Note, however, that it might take a while (~30 minutes on an AMD Ryzen 5 1600), especially for the first run.
- (Recommended) Please send a PR for any improvement suggestions. Very much welcomed.
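The version prerequisites above (Python 3.10.7, Node.js >= 18.9.1) can be checked mechanically. A minimal sketch using Python tuple comparison; the helper name is ours, not part of the project:

```python
def meets_minimum(actual, minimum):
    """Compare dotted version numbers as tuples, e.g. (3, 10, 7).

    Python compares tuples element by element, which matches how
    version numbers are ordered.
    """
    return tuple(actual) >= tuple(minimum)


# Minimums documented in the prerequisites above.
print(meets_minimum((3, 10, 7), (3, 10, 7)))  # Python requirement met: True
print(meets_minimum((18, 9, 0), (18, 9, 1)))  # Node.js too old: False
```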
### Diffusers Model Download

There are two options as of today: download manually from the Hugging Face website, or use `git lfs` to handle the large files. Note that the files total more than 5 GB.

You can use the default download if you have 8 GB VRAM or more. With less VRAM, it is highly recommended to download from the fp16 tree. Even with >= 8 GB VRAM, it is worth having the low-VRAM model as well, since it lets you run many more prompts in batches and is only slightly slower than the normal model.
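To see why the fp16 tree suits low-VRAM cards, here is a rough back-of-envelope sketch: half-precision weights take 2 bytes per parameter instead of 4. The ~860M parameter count used below is an approximate outside figure for the Stable Diffusion v1.4 UNet, not something stated in this README:

```python
# Rough illustration: fp16 halves the memory needed for model weights.
params = 860_000_000  # approximate UNet parameter count (assumption)

fp32_gb = params * 4 / 1024**3  # full precision: 4 bytes per parameter
fp16_gb = params * 2 / 1024**3  # half precision: 2 bytes per parameter

print(round(fp32_gb, 2))  # -> 3.2 GB for the weights alone
print(round(fp16_gb, 2))  # -> 1.6 GB
```

Actual VRAM usage during generation is higher than the weights alone (activations, scheduler state), which is why 4 GB cards need the fp16 branch.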
- Install Git LFS (Large File Storage) via the `git lfs install` command in your favorite cmd/bash.
- For low VRAM, run `git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 -b fp16 models/stable-diffusion-v1-4-fp16` in cmd/bash. You will be prompted for your Hugging Face ID and password the first time.
- (Optional) If the above fails, you can download the models manually. Log in at https://huggingface.co/login and navigate to the license agreement page at https://huggingface.co/CompVis/stable-diffusion-v1-4. Once you have scrolled down and accepted the license, navigate to the fp16 version of the model via https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/fp16. It is important to log in and accept the license first, otherwise you will receive an error when navigating to the download page.
- (Optional) For low VRAM, download the whole `stable-diffusion-v1-4` folder from https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/fp16 and copy it to `models/stable-diffusion-v1-4-fp16`.
- For high VRAM, run `git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 models/stable-diffusion-v1-4` in cmd/bash. You will be prompted for your Hugging Face ID and password the first time.
- (Optional) If the above fails, you can download the models manually. Log in at https://huggingface.co/login and navigate to the license agreement page at https://huggingface.co/CompVis/stable-diffusion-v1-4. Once you have scrolled down and accepted the license, navigate to the main version of the model via https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main. It is important to log in and accept the license first, otherwise you will receive an error when navigating to the download page.
- (Optional) For high VRAM, download the whole `stable-diffusion-v1-4` folder from https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main and copy it to `models/stable-diffusion-v1-4`.
- Once completed, please ensure that you are back in the root directory.
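After cloning, a quick sketch to confirm the weights landed in the folders the clone commands above target. The helper is ours; note that you typically only need the variant matching your VRAM, so one missing entry is normal:

```python
from pathlib import Path

# Folder names match the clone commands above.
EXPECTED = (
    "models/stable-diffusion-v1-4-fp16",  # low-VRAM (fp16 branch)
    "models/stable-diffusion-v1-4",       # high-VRAM (main branch)
)


def missing_model_dirs(root="."):
    """Return the expected model folders not present under `root`."""
    return [d for d in EXPECTED if not (Path(root) / d).is_dir()]


# Run from the repository root; an empty list means both variants are in place.
print(missing_model_dirs())
```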
- There are edge cases where Electron fails to install via `npm i`. If so, open the `package.json` file, temporarily delete the `"electron": "2x.x.x", "electron-builder": "2x.x.x", "got": "x.x.x",` lines, save the file, and run `npm i` again. Once successful, undo the changes in `package.json` (i.e. re-add the deleted lines) and re-run `npm i`.
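The manual edit above can also be scripted. A minimal sketch of the same idea; the sample versions are placeholders mirroring the `x.x.x` in the note, and `typescript` is just an example of an unaffected package:

```python
import copy

# Packages named in the workaround above; the versions in package.json vary.
PROBLEM_DEPS = ("electron", "electron-builder", "got")


def strip_problem_deps(pkg):
    """Return a copy of a parsed package.json without the troublesome packages,
    so `npm i` can run; restore the originals afterwards as the note says."""
    out = copy.deepcopy(pkg)
    for section in ("dependencies", "devDependencies"):
        for dep in PROBLEM_DEPS:
            out.get(section, {}).pop(dep, None)
    return out


pkg = {"devDependencies": {"electron": "x.x.x", "got": "x.x.x", "typescript": "x.x.x"}}
print(strip_problem_deps(pkg))  # only "typescript" remains
```

Write the stripped dict back with `json.dump`, run `npm i`, then restore the original file.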
- There might be undeleted temporary files in `C:\Users\test\AppData\Local\Temp`. Please check whether your storage is heavily used. Note that the diffusers library tends to cache files to speed up operations, which also uses some amount of storage. More details here.
- UX: Minimal loading indicator when loading generation.