SD Dream node parameters

stassius edited this page May 27, 2023 · 2 revisions

File

Output file name

Output file name is a template for the filename. You don't have to add a file extension here.

Append name

Append name will add a number to the filename based on the task and batch number. You can also use @pdg_index in backticks in your filename, but this will not use the batch number, so it's better to use Append name.

Save filename to attribute

Save filename to attribute will save the name of the file, with all the appended numbers, to an attribute. Later you can use it in any node by wrapping it in backticks like this: `@filename`.
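As a rough illustration of what Append name does, here's a hypothetical sketch of combining the template with the task and batch numbers (the zero-padding and separator are assumptions, not the node's actual format):

```python
def build_filename(template: str, task_index: int, batch_index: int,
                   ext: str = "png") -> str:
    # Append task and batch numbers to the template; padding and the
    # underscore separator are assumed here for illustration only.
    return f"{template}_{task_index:04d}_{batch_index:02d}.{ext}"

print(build_filename("my_render", 3, 1))  # my_render_0003_01.png
```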

Stable Diffusion

Seed

Seed is a 64-bit number that defines the image generation process. Basically, if you use the same seed with the same parameters and prompt, you should get the same result. Set it to -1 to randomize the seed each time.

Sequential seed

Will add one to the seed for each iteration (for both tasks and batches), so all your images will have a different seed value.
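The seed behavior described above can be sketched like this (a simplified illustration, not the node's actual code):

```python
import random

def resolve_seed(seed: int, iteration: int, sequential: bool) -> int:
    # -1 means "randomize each time": pick a fresh 63-bit value.
    if seed == -1:
        return random.randint(0, 2**63 - 1)
    # Sequential seed: offset the base seed by the iteration index,
    # counted across both tasks and batches.
    if sequential:
        return seed + iteration
    return seed

print(resolve_seed(1000, 3, sequential=True))  # 1003
```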

Resolution

You can set the image resolution by tweaking Width and Height, or you can tick the "Upstream image resolution" checkbox to use the resolution of the incoming image.

Prompts

You can set "Prompt Source" to "Custom". This way the node will read prompts from the Prompts foldout. Or you can set it to "Upstream attribute" to use prompts from upstream nodes.

Batches

This is the "Batch size" parameter from Automatic1111. If you put 4 here, it will try to generate 4 images simultaneously, each with a unique seed. The possible batch size depends on your GPU memory and image resolution. All the generated images will be turned into separate tasks at the output of the node.
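Conceptually, a batch fans out into one task per image, each with its own seed. A minimal sketch of that expansion (assumed behavior, not the node's internals):

```python
def expand_batch(base_seed: int, batch_size: int) -> list:
    # One task dictionary per generated image; seeds are assumed to
    # increment from the base seed so every image in the batch is unique.
    return [{"seed": base_seed + i, "batch_index": i} for i in range(batch_size)]

tasks = expand_batch(42, 4)
print(len(tasks))  # 4
```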

Model

You can switch to a particular model right on this node, but I'd suggest leaving it as "Current model" and switching models with the SD Switch Model node instead.

Sampler

The sampler is an algorithm for finding the right spot on a treasure map (see the Prompting Basics). There are a lot of different options available here; which one to use is a topic of heated discussion on Reddit. I tend to stick to Euler A for static images.

CFG Scale

CFG Scale is how strongly Stable Diffusion will try to match your prompt. Roughly speaking, it's a mix value between a "no prompt at all" and an "only prompt" generation. Usually it should be in the 7-15 range, but in some cases, such as when you use a particular LoRA or the Alternative img2img test, you may want to lower this value.

Steps

How many iterations it takes to generate your image. The optimal number depends on the sampler: for Euler A, 20 steps is enough; for some others you should increase the number.
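These parameters map onto the Automatic1111 web UI API. A minimal txt2img payload using the values discussed above might look like this (a sketch; it's only printed here, not actually posted to /sdapi/v1/txt2img):

```python
import json

payload = {
    "prompt": "a small cabin in a pine forest",
    "negative_prompt": "blurry, low quality",
    "seed": -1,                 # -1 = random seed each time
    "sampler_name": "Euler a",  # my usual pick for static images
    "cfg_scale": 7,             # prompt adherence, usually 7-15
    "steps": 20,                # enough for Euler a
    "width": 512,
    "height": 512,
    "batch_size": 1,
}
print(json.dumps(payload, indent=2))
```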

Face Restoration

Will try to find a face in your generated image and restore it with the CodeFormer or GFPGAN network. You can choose which one to use in the Automatic1111 Settings tab. It only works well when the face is vertical and the style is photorealistic. I don't use it often, as good models render good faces without this option.

Tiling

Will try to create a tileable texture.

Images

This tab will appear in Image2Image mode.

Denoising strength

Controls how much your initial image will be changed: 0 means not changed at all, 1 means changed completely. For minor fixes, use low values here.
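In API terms, denoising strength is just one more field on an img2img request. A hedged sketch (the placeholder bytes stand in for a real base64-encoded init image):

```python
import base64

# Placeholder bytes instead of a real PNG; the API expects base64 images.
init_image = base64.b64encode(b"placeholder-image-bytes").decode()

payload = {
    "init_images": [init_image],
    "denoising_strength": 0.35,  # low value: only minor changes to the input
    "prompt": "same scene, cleaner details",
    "steps": 20,
}
print(payload["denoising_strength"])  # 0.35
```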

Image type

You can choose an upstream image here (for example from another generation or from a File Pattern node) or a custom file on disk.

Use mask

See How to use Inpainting: https://github.com/stassius/StableHoudini/wiki/How-to-use-inpainting

ControlNet

See How to use ControlNet: https://github.com/stassius/StableHoudini/wiki/How-to-use-ControlNet

Settings

Custom URL

Lets you choose the URL this node works with. When it is turned off, the node uses the default value from the /hda/Config/Config.ini file.
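For illustration, reading a default URL from an INI file could look like this (the section and key names here are assumptions, not necessarily the ones used in Config.ini):

```python
import configparser

# Inline sample standing in for /hda/Config/Config.ini; the section
# and key names are hypothetical.
sample = """
[Server]
url = http://127.0.0.1:7860
"""

config = configparser.ConfigParser()
config.read_string(sample)
default_url = config["Server"]["url"]
print(default_url)  # http://127.0.0.1:7860
```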

Open result in external viewer

Here you can send your generated image to an external program as a command-line argument.