
NukeDiffusion - Stable Diffusion for Nuke



NukeDiffusion is an integration tool for Nuke that uses Stable Diffusion to generate AI images from prompts using local Checkpoints.
It uses the official libraries from Hugging Face, and you don't need to create any account: everything works locally!

✅ Unlimited image generation;
✅ Local Checkpoints (SD and SDXL models);
✅ Main workflows included (txt2img, img2img, inpainting);
✅ No internet connection required;
✅ No sign-in account required;
✅ Free for non-commercial or commercial use;
✅ Available for Windows (Linux and macOS support in progress);
✅ Compatible with CPU and CUDA.

Some limitations you need to consider for this version:

📌 Generates single images only (no animation supported);
📌 No batch feature for generating multiple images.

Note

For experienced users: NukeDiffusion does not support ControlNet, LoRA, AnimateDiff, or other advanced controls, just the basic setup for image generation.





Stable Diffusion Requirements 🖥️

For a complete guide to Stable Diffusion requirements, I suggest you read this article.

In summary, the minimum setup mentioned in the article is:

  • GPU: GTX 1060 (6GB VRAM);
  • System RAM: 16GB DDR4.

Important

Please note that due to the size of SDXL models (around 6GB each), certain Checkpoints may not be compatible with this setup.


Python Compatibility 🐍

  • The NukeDiffusion Terminal uses some libraries from Hugging Face (such as Diffusers and PyTorch) and requires a Python version between 3.8 and 3.11 installed on your system (a quick way to check your version follows this list);

  • The NukeDiffusion node was written in Python 2.7 to make it possible to run in all Nuke versions (hopefully). 😐
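
If you're not sure which Python 3 version your system has, here is a minimal check you can run with your system Python (not part of the official setup):

import sys

# The NukeDiffusion Terminal needs Python 3.8-3.11 (see above).
print(sys.version)
assert (3, 8) <= sys.version_info[:2] <= (3, 11), "Python 3.8-3.11 required"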


Workflows 💼

For now, the included pipeline workflows are:

  • txt2img: generates an image from a text description (which is also known as a Prompt);
  • img2img: generates an image passing an initial image (user input) as a starting point for the diffusion process;
  • Inpainting: replaces or edits specific areas of an image using a provided input mask.

Note

For now, the inpainting workflow works only with the Stable Diffusion default model, so it is not possible to use a local Checkpoint.
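
For reference, these workflows correspond to the standard Hugging Face Diffusers pipelines. Below is a minimal txt2img sketch (illustrative only; the checkpoint path and prompts are placeholders, and NukeDiffusion's own wrapper code may differ):

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a local .safetensors Checkpoint into the txt2img pipeline.
pipe = StableDiffusionPipeline.from_single_file("./models/checkpoints/model.safetensors").to(device)

# img2img and inpainting use StableDiffusionImg2ImgPipeline and
# StableDiffusionInpaintPipeline, which additionally take an input
# image (and, for inpainting, a mask).
image = pipe(prompt="a lighthouse at dusk", negative_prompt="blurry, low quality").images[0]
image.save("txt2img_output.png")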


NukeDiffusion node ☢️

The NukeDiffusion node is pretty straightforward. Everything you need is in the same panel, and the UI updates according to your workflow option (txt2img, img2img, inpainting).


  • Workflow: select one of the 3 workflow options to work with: txt2img, img2img or inpainting;

  • Checkpoint: here you can select a Checkpoint file from the directory you specified in checkpoints_path.json, or from the default path ./NukeDiffusion/models/checkpoints;

  • SD Model: after selecting the Checkpoint model, you must indicate its version. By default, if "XL" or "xl" appears in the checkpoint name, the SD Model knob is set to "SDXL"; otherwise it is set to "SD";

Warning

Make sure the SD Model matches your selected Checkpoint.
SD and SDXL models were pretrained with different resolutions, and they have different pipelines to produce your image.
If you provide a Checkpoint with the wrong SD Model, the NukeDiffusion Terminal will close automatically.

  • CUDA: generates images using the GPU (graphics card) if one is available on your machine;

  • Positive Prompt: type everything you want to be generated in your image;

  • Negative Prompt: type everything you do not want to be generated in your image;

  • Width: width of your output image;

  • Height: height of your output image;

  • Seed: if you leave the value at -1, your image will be generated with a random seed. However, if you set any other value, the same settings will always produce the same image. It's a good idea to lock the Seed and try different settings to see how they affect your image;

  • CFG: Classifier-Free Guidance scale, which controls how closely the image generation follows the text prompt. The higher the value, the more the image will follow the text input (by default, the maximum value is 10, but you can increase it if you want); with a lower value, the image generation deviates from the text input and becomes more creative;


  • Steps: the number of sampling and refinement iterations for the latent image. Higher step counts can produce better images (usually between 20 and 40); going much higher will probably slow down generation without making much difference;


  • Strength: sets the denoising strength, from 0 to 1. It is only used for the img2img and inpainting workflows and requires an initial image. Higher values deviate more from the input image (more creative output), while lower values preserve it;


  • Mask Opacity: for visualization purposes only, to check the mask input over the image input;

  • Render Input Image: quickly exports the Input Image to the ./NukeDiffusion/_input folder;

  • Render Input Mask: quickly exports the Input Mask to the ./NukeDiffusion/_input folder.
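
Roughly, these knobs map onto a Diffusers pipeline call like this (an assumed mapping for illustration; the variable names and values are placeholders, not NukeDiffusion's internals):

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_single_file("./models/checkpoints/model.safetensors").to(device)

seed, cfg, steps = 42, 7.5, 30
generator = None if seed == -1 else torch.Generator(device).manual_seed(seed)

image = pipe(
    prompt="a lighthouse at dusk",   # Positive Prompt
    negative_prompt="blurry",        # Negative Prompt
    width=512, height=512,           # Width / Height
    guidance_scale=cfg,              # CFG
    num_inference_steps=steps,       # Steps
    generator=generator,             # Seed (-1 = random)
).images[0]

# img2img and inpainting additionally take image= (and mask_image=)
# plus strength=, via their respective pipelines.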


NukeDiffusion Terminal 🤖

After you click the Generate Image button, the NukeDiffusion Terminal opens and loads all the information provided in the NukeDiffusion node.


Here you don't have much to do: just check the information and... wait! 😅

Important

While the cursor blinks on the NukeDiffusion Terminal, don't close it! Just ignore the 'triton' module error message and wait for your image to be generated!


Some images generated with NukeDiffusion using different workflows 🖼️

txt2img


img2img


inpainting



Waiting time ⌛

This is a subject I need to highlight with you. 🥸

Since we are generating AI images locally, the waiting time depends exclusively on your machine's performance.

Keep in mind that when you run the code for the first time, it will ALWAYS take a while to load the selected Checkpoint into memory. After that, generating the next images will take much less time.

The image generation itself is faster than the checkpoint-loading process. For example, in all my tests I had to wait between 10 and 20 minutes to load the Checkpoint, but each subsequent image took only 20 seconds to 2-3 minutes to generate.

Tip

To generate images faster, try SD models instead of SDXL.


Installing ⚙️

Here is the most annoying part... 😣
But don't give up, it will be worth it! 🤓


Let me break it into a few parts:

1. .nuke

Click on the green button to download NukeDiffusion and save it into your .nuke folder.


After extracting the NukeDiffusion.zip file, rename the NukeDiffusion-main folder to NukeDiffusion only (without the "main" word at the end).


Open the init.py file in the .nuke root, then point it to the NukeDiffusion folder, like this:

import nuke

nuke.pluginAddPath('./NukeDiffusion')

If you don't have an init.py file in your .nuke directory, you can create a new text file and paste the code above.

Don't forget to rename it to init and change the file extension to .py.


2. _NUKEDIFFUSION_SETUP file

You will find a _NUKEDIFFUSION_SETUP file in the for_windows or for_linux_and_mac folder.

Execute the file related to your operating system:

  • On Windows, double-click the _NUKEDIFFUSION_SETUP.bat file;


  • On Linux/macOS, open the terminal in the same directory and run source _NUKEDIFFUSION_SETUP.sh.


It will create the nukediffusion-env folder (the virtual environment), and then all the dependencies will be installed automatically inside this folder.


Follow the instructions until you see the "The NukeDiffusion setup has been completed!" message.


Do not close the Terminal before the process is complete... this may take some time!


If you get CUDA is not available as a response, go to this page from NVIDIA, download and install the CUDA Toolkit, then try this step again until CUDA is enabled. 🤞
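
A quick way to verify that PyTorch can actually see your GPU (a minimal check you can run from inside the nukediffusion-env environment; it is not part of the official setup):

import torch

if torch.cuda.is_available():
    print("CUDA enabled:", torch.cuda.get_device_name(0))
else:
    print("CUDA is not available; images will be generated on the CPU.")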

At the end of this guide, you should see the NukeDiffusion icon in the toolbar on the left side when you launch Nuke.


Checkpoints ✅

You can use the CivitAI website to download Checkpoints and try some different Prompts shared by the community. 😉

Important

For now, NukeDiffusion only accepts .safetensors files.

If you are unsure about which Checkpoint to use, I'm going to list some of my favourites:

SD models
SDXL models

After downloading a Checkpoint, you can put it in .\NukeDiffusion\models\checkpoints.
If you have another folder where you'd like to use the Checkpoints, you can set a default path in the checkpoints_path.json file, located in the .\NukeDiffusion\config folder.


Caution

Using a single backslash \ can cause issues. Please use either a forward slash / or double backslash \\.
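
For reference, the file's contents might look something like this (the key name and path here are illustrative, not taken from the actual file; check the checkpoints_path.json shipped in .\NukeDiffusion\config for the exact format):

{
    "checkpoints_path": "D:/StableDiffusion/checkpoints"
}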

Important

If the custom Checkpoint does not exist, or if you leave the SD [Default Model] option enabled, a default model will be downloaded from the Hugging Face repository (this happens the first time only).


Troubleshooting 🛠️

'triton' Error

The error ModuleNotFoundError: No module named 'triton' can be safely ignored!

Triton is an open-source GPU programming framework for neural networks, and from what I found regarding this issue, the Triton module is not available for Windows. However, this error does not affect image generation, so simply ignore it!


I didn't hide this message so that you can still check the Terminal for any other import module errors.


If you have feedback, suggestions, or feature requests, please visit the Discussions page and create a New Discussion.
For bugs, please go to the Issues page and create a New Issue.

Release Notes 📢

v01.1:

  • Simplified setup installation;
  • CPU support;
  • Export buttons for Input Image and Input Mask;
  • "Input Format" button to adjust the image resolution.;
  • Countdown feature for closing the Terminal window.

Support me! 🥺


This personal project required significant time and extra hours of hard work to make it available to everyone.
It's not perfect, and I still need to work on many features, but for the first version, I believe it can help Nuke users live this experience. 🤖

If you find this tool useful, please consider supporting me on Buy Me A Coffee. ☕
You can also share this tool or send me a positive message; it would help me just the same.

If you believe in this project and want to sponsor it for future updates, reach out on my LinkedIn.


Special thanks to Gustavo Goncalves and Leticia Matsuoka for testing this tool and providing valuable feedback for improvement. Also, thanks to Juliana Chen for her support and encouragement.

Cheers! 🥂
