
ComfyUI

The most powerful and modular stable diffusion GUI and backend.

ComfyUI Screenshot

This UI will let you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. For some workflow examples and to see what ComfyUI can do, you can check out the Examples page.

Features

Workflow examples can be found on the Examples page

Shortcuts

Keybind: Explanation
Ctrl + Enter: Queue up current graph for generation
Ctrl + Shift + Enter: Queue up current graph as first for generation
Ctrl + Z / Ctrl + Y: Undo/Redo
Ctrl + S: Save workflow
Ctrl + O: Load workflow
Ctrl + A: Select all nodes
Alt + C: Collapse/uncollapse selected nodes
Ctrl + M: Mute/unmute selected nodes
Ctrl + B: Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through)
Delete/Backspace: Delete selected nodes
Ctrl + Delete/Backspace: Delete the current graph
Space: Move the canvas around when held and moving the cursor
Ctrl/Shift + Click: Add clicked node to selection
Ctrl + C / Ctrl + V: Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes)
Ctrl + C / Ctrl + Shift + V: Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes)
Shift + Drag: Move multiple selected nodes at the same time
Ctrl + D: Load default graph
Q: Toggle visibility of the queue
H: Toggle visibility of history
R: Refresh graph
Double-Click LMB: Open node quick search palette

Ctrl can also be replaced with Cmd for macOS users.

Installing

Windows

There is a portable standalone build for Windows on the releases page that should work for running on Nvidia GPUs or for running on your CPU only.

Simply download, extract with 7-Zip and run. Make sure you put your Stable Diffusion checkpoints/models (the huge ckpt/safetensors files) in: ComfyUI\models\checkpoints

If you have trouble extracting it, right click the file -> properties -> unblock

How do I share models between another UI and ComfyUI?

See the Config file to set the search paths for models. In the standalone Windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.
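As a rough sketch, the file maps another UI's model folders onto ComfyUI's loader categories. The entries below are an illustrative subset for an AUTOMATIC1111-style layout; the base_path is hypothetical, and the full list of keys is in the example file shipped with the repo:

a111:
    base_path: /path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings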

Jupyter Notebook

To run it on services like Paperspace, Kaggle or Colab you can use my Jupyter Notebook.

Manual Install (Windows, Linux)

Git clone this repo.
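For example, using the repository path shown on this page:

git clone https://github.com/veryvanya/ComfyUI
cd ComfyUI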

Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints

Put your VAE in: models/vae

Note: pytorch stable does not support python 3.12 yet. If you have python 3.12 you will have to use the nightly version of pytorch. If you run into issues you should try python 3.11 instead.
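You can check which Python your environment is using with:

python --version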

AMD GPUs (Linux only)

AMD users can install ROCm and pytorch with pip if they aren't already installed. This is the command to install the stable version:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6

This is the command to install the nightly with ROCm 5.7 which has a python 3.12 package and might have some performance improvements:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm5.7

NVIDIA

Nvidia users should install stable pytorch using this command:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121

This is the command to install pytorch nightly instead which has a python 3.12 package and might have performance improvements:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

Troubleshooting

If you get the "Torch not compiled with CUDA enabled" error, uninstall torch with:

pip uninstall torch

And install it again with the command above.

Dependencies

Install the dependencies by opening your terminal inside the ComfyUI folder and running:

pip install -r requirements.txt

After this you should have everything installed and can proceed to running ComfyUI.

Others:

Apple Mac silicon

You can install ComfyUI on Apple silicon Macs (M1 or M2) with any recent macOS version.

  1. Install pytorch nightly. For instructions, read the Accelerated PyTorch training on Mac Apple Developer guide (make sure to install the latest pytorch nightly).
  2. Follow the ComfyUI manual installation instructions for Windows and Linux.
  3. Install the ComfyUI dependencies. If you have another Stable Diffusion UI you might be able to reuse the dependencies.
  4. Launch ComfyUI by running python main.py --force-fp16. Note that --force-fp16 will only work if you installed the latest pytorch nightly.
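Putting steps 1 and 4 together, a minimal sketch for an Apple silicon setup; the nightly wheel index below is the one the Apple Developer guide pointed to at the time of writing, so verify it is still current:

pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
python main.py --force-fp16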

Note: Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in ComfyUI manual installation.

DirectML (AMD Cards on Windows)

pip install torch-directml

Then you can launch ComfyUI with:

python main.py --directml

I already have another UI for Stable Diffusion installed. Do I really have to install all of these dependencies?

You don't. If you have another UI installed and working with its own python venv you can use that venv to run ComfyUI. You can open up your favorite terminal and activate it:

source path_to_other_sd_gui/venv/bin/activate

or on Windows:

With Powershell: "path_to_other_sd_gui\venv\Scripts\Activate.ps1"

With cmd.exe: "path_to_other_sd_gui\venv\Scripts\activate.bat"

And then you can use that terminal to run ComfyUI without installing any dependencies. Note that the venv folder might be called something else depending on the SD UI.

Running

python main.py

For AMD cards not officially supported by ROCm

Try running it with this command if you have issues:

For 6700, 6600 and maybe other RDNA2 or older: HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py

For AMD 7600 and maybe other RDNA3 cards: HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py

Notes

Only parts of the graph that have an output with all the correct inputs will be executed.

Only parts of the graph that change from one execution to the next will be executed; if you submit the same graph twice, only the first will be executed. If you change the last part of the graph, only the part you changed and the parts that depend on it will be executed.

Dragging a generated PNG onto the webpage or loading one will give you the full workflow, including the seeds that were used to create it.

You can use () to change emphasis of a word or phrase like: (good code:1.2) or (bad code:0.8). The default emphasis for () is 1.1. To use () characters in your actual prompt escape them like \( or \).

You can use {day|night}, for wildcard/dynamic prompts. With this syntax "{wild|card|test}" will be randomly replaced by either "wild", "card" or "test" by the frontend every time you queue the prompt. To use {} characters in your actual prompt escape them like: \{ or \}.

Dynamic prompts also support C-style comments, like // comment or /* comment */.

To use textual inversion concepts/embeddings in a text prompt, put them in the models/embeddings directory and use them in the CLIPTextEncode node like this (you can omit the .pt extension):

embedding:embedding_filename.pt
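As an illustration, the prompt syntaxes above can be combined in a single prompt (the embedding filename here is hypothetical):

(beautiful scenery:1.2), {day|night} city street, embedding:my_style_embedding // emphasis, a wildcard, and an embedding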

How to increase generation speed?

Make sure you use the regular loaders/Load Checkpoint node to load checkpoints. It will automatically pick the right settings depending on your GPU.

You can set the following command line option to disable the upcasting to fp32 in some cross attention operations, which will increase your speed. Note that this will very likely give you black images on SD2.x models. If you use xformers or pytorch attention this option does not do anything.

--dont-upcast-attention
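For example, when launching from the ComfyUI folder:

python main.py --dont-upcast-attention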

How to show high-quality previews?

Use --preview-method auto to enable previews.

The default installation includes a fast latent preview method that's low-resolution. To enable higher-quality previews with TAESD, download the taesd_decoder.pth (for SD1.x and SD2.x) and taesdxl_decoder.pth (for SDXL) models and place them in the models/vae_approx folder. Once they're installed, restart ComfyUI to enable high-quality previews.
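If you prefer not to rely on auto-detection, the preview method can also be selected explicitly; treat the exact value as an assumption and check your build's --help output:

python main.py --preview-method taesd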

Support and dev channel

Matrix space: #comfyui_space:matrix.org (it's like Discord but open source).

QA

Why did you make this?

I wanted to learn how Stable Diffusion worked in detail. I also wanted something clean and powerful that would let me experiment with SD without restrictions.

Who is this for?

This is for anyone that wants to make complex workflows with SD or that wants to learn more about how SD works. The interface follows closely how SD works and the code should be much simpler to understand than other SD UIs.
