
2. Installation

Hafiidz edited this page Oct 12, 2022 · 6 revisions

Pre-requisite

  1. Hardware: an NVIDIA GPU with at least 4 GB of VRAM; 8 GB of VRAM is recommended.
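To confirm which GPU you have and how much VRAM it reports, you can query nvidia-smi (installed with the NVIDIA driver). The small wrapper below is an illustrative sketch, not part of fvsion itself:

```python
import shutil
import subprocess

def get_gpu_info():
    """Return the GPU name and total VRAM reported by nvidia-smi, or None."""
    if shutil.which("nvidia-smi") is None:
        return None  # NVIDIA driver tools not on PATH
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None
    return result.stdout.strip() or None

print(get_gpu_info())  # e.g. "NVIDIA GeForce RTX 3060, 12288 MiB"
```

If this prints None, the NVIDIA driver is either not installed or not on PATH, and fvsion will not be able to use the GPU.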

Beginner Friendly Installation for Windows

  1. First, if you have some programming background, it is highly recommended to follow the development installation below to get the latest version and full Node.js and Python capabilities. However, if you are new and just want to get standalone AI image generation started, please follow the next steps.

  2. Download the Windows installer/executable from https://github.com/fvsionai/fvsion/releases.

  3. Make sure you have all 3 binary zip files (fvsion.zip.z001/2/3) in the same folder. You can unzip them via WinZip or 7-Zip (https://www.7-zip.org/download.html).

  4. Your unzipped folder should look as follows: (screenshot)

  5. Download the diffusers models as per the instructions below.

  6. Download the upscaler models by running models/download_upscaler_model.bat.

  7. Run the UI and Python engine by double-clicking start_app.cmd.

  8. You are ready to generate your image.

Other OS

  1. Linux is currently supported via the development installation below. Help creating a Docker image is welcome.
  2. macOS is not yet supported.

Development Installation

  1. Make sure you have git, Python 3.10.7 with virtualenv, and Node.js >= 18.9.1 installed.
  2. Clone the repository via git clone https://github.com/fvsionai/fvsion.git <folder-name>. Replace <folder-name> with your desired folder name.
  3. Navigate to the newly created folder: cd <folder-name>
  4. Download the diffusers model as per the Diffusers Model Download instructions below.
  5. From the root folder, create a local virtualenv: python -m virtualenv .venv
  6. Activate your virtualenv using .venv\Scripts\activate (Windows) or source .venv/bin/activate (Linux).
  7. Update pip python -m pip install --upgrade pip
  8. Install the Python requirements using pip install -r py/requirements.txt
  9. Run python py/main.py and go to http://localhost:4242/docs to confirm the server is running.
  10. You can start generating images by interacting directly with the built-in Swagger API. Press Ctrl+C in the terminal to close the server when you are done.
  11. For a proper front end, install the npm requirements using npm i
  12. Run the development build via npm run dev
  13. (Optional) You can run npm run build to generate standalone exe files. Do note that it might take a while (~30 minutes on an AMD Ryzen 5 1600), especially for the first run.
  14. (Recommended) Please send a PR for any improvement suggestions; contributions are very much welcome.
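As a quick sanity check after the steps above, the sketch below (an illustration only, not part of the repository) verifies that the required command-line tools are on PATH and probes the local backend at port 4242:

```python
import shutil
import urllib.request
import urllib.error

def missing_tools(tools=("git", "node", "npm")):
    """Return the subset of required command-line tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

def server_is_up(url="http://localhost:4242/docs", timeout=2):
    """Return True if the fvsion backend answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Missing tools:", missing_tools())
print("Server running:", server_is_up())
```

If server_is_up() prints False while python py/main.py is running, check that nothing else is bound to port 4242.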

Diffusers Model Download

  1. There are two options as of today: either download the models manually from the Hugging Face website, or use git lfs to handle the large files. Note that the files total more than 5 GB.

  2. Additionally, you can use the default (full-precision) version if you have 8 GB of VRAM or more. If you have less VRAM, it is highly recommended to download from the fp16 branch. Do note that even with VRAM >= 8 GB, it is still good to have the low-VRAM model, to ensure you are able to run many more prompts in batches. The low-VRAM model is only slightly slower than the normal model.

  3. Install Git LFS (Large File Storage) via the git lfs install command in your favorite cmd/bash.

  4. For low VRAM, run the following command in cmd/bash: git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 -b fp16 models/stable-diffusion-v1-4-fp16. You will be prompted for your Hugging Face ID and password the first time.

  5. (Optional) If the above fails, you can download the models manually by logging in at https://huggingface.co/login and navigating to the license agreement page at https://huggingface.co/CompVis/stable-diffusion-v1-4. Once you have scrolled down and accepted the license, navigate to the fp16 version of the model at https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/fp16. It is important to log in and accept the license first, otherwise you will receive an error when navigating to the download page.

  6. (Optional) For low VRAM, download the whole stable-diffusion-v1-4 folder from https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/fp16 and copy it to models/stable-diffusion-v1-4-fp16.

  7. For high VRAM, run the following command in cmd/bash: git clone https://huggingface.co/CompVis/stable-diffusion-v1-4 models/stable-diffusion-v1-4. You will be prompted for your Hugging Face ID and password the first time.

  8. (Optional) If the above fails, you can download the models manually by logging in at https://huggingface.co/login and navigating to the license agreement page at https://huggingface.co/CompVis/stable-diffusion-v1-4. Once you have scrolled down and accepted the license, navigate to the main version of the model at https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main. It is important to log in and accept the license first, otherwise you will receive an error when navigating to the download page.

  9. (Optional) For high VRAM, download the whole stable-diffusion-v1-4 folder from https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main and copy it to models/stable-diffusion-v1-4.

  10. Once completed, please ensure that you are back in the root directory.
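The VRAM-based choice above can be summarized in a small sketch. This is an illustration only: the parameter count used is a rough public figure for Stable Diffusion v1 (UNet + text encoder + VAE, about 1.07 billion parameters), not an exact download size:

```python
def clone_command(vram_gb):
    """Return the suggested git clone command for a given amount of VRAM."""
    base = "git clone https://huggingface.co/CompVis/stable-diffusion-v1-4"
    if vram_gb < 8:
        # fp16 branch: half-precision weights, roughly half the download size
        return f"{base} -b fp16 models/stable-diffusion-v1-4-fp16"
    return f"{base} models/stable-diffusion-v1-4"

def rough_weight_size_gb(bytes_per_param):
    """Rough weight size for ~1.07B parameters (UNet + text encoder + VAE)."""
    approx_params = 1.07e9
    return approx_params * bytes_per_param / 1e9

print(clone_command(4))
print(f"fp32 weights: ~{rough_weight_size_gb(4):.1f} GB, "
      f"fp16 weights: ~{rough_weight_size_gb(2):.1f} GB")
```

This is why the fp16 branch is recommended for 4 GB cards: half-precision weights take roughly half the memory of full-precision ones.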

Login Page: (screenshot)

License Agreement: (screenshot)

FP16 model download link: (screenshot)

Known Issues

  1. There are edge cases where Electron fails to install via npm i. If so, open the package.json file, temporarily delete the "electron": "2x.x.x", "electron-builder": "2x.x.x", and "got": "x.x.x" lines, save the file, and run npm i again. Once successful, undo the changes in package.json, i.e. re-add the deleted lines, and re-run npm i.
  2. There might be undeleted temporary files in C:\Users\<user>\AppData\Local\Temp. Please check if your storage is heavily used. Do note that the diffusers library tends to cache files to speed up operations, which also uses some amount of storage.
  3. UX: only a minimal loading indicator is shown while a generation is running.
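To see how much space the diffusers/Hugging Face cache is taking, you can total the files under the cache directory. The default location ~/.cache/huggingface used below is an assumption (it can be moved via the HF_HOME environment variable):

```python
from pathlib import Path

def dir_size_mb(path):
    """Total size of all files under `path`, in megabytes (0.0 if absent)."""
    p = Path(path).expanduser()
    if not p.exists():
        return 0.0
    return sum(f.stat().st_size for f in p.rglob("*") if f.is_file()) / 1e6

# Default Hugging Face cache location; override with the HF_HOME env variable.
hf_cache = Path.home() / ".cache" / "huggingface"
print(f"Cache size: {dir_size_mb(hf_cache):.1f} MB")
```

The same function can be pointed at the Temp folder mentioned above to check for leftover temporary files.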

Contributors

Hafiidz