An Extension for the Forge Webui that implements IC-Light, allowing you to manipulate the illumination of images.
Last Checked: 2024 Nov.01
| Automatic1111 v1.10.1 | Forge (Gradio 4) | Forge Classic (Gradio 3) | reForge main | reForge dev_upstream |
|---|---|---|---|---|
| Working | Working | Working | Working | Pending: #137 |
For Automatic1111 Webui:

- Only version v1.10.0 or later is supported
- You also need to install sd-webui-model-patcher first

- Download the two models from Releases
- Create a new folder, `ic-light`, inside your webui `models` folder
- Place the 2 models inside said folder
- (Optional) You can rename the models, as long as the filenames contain either `fc` or `fbc` respectively
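The folder layout above can also be prepared with a short script; a minimal sketch, where the `webui` path and the model filenames are placeholders (only the `fc` / `fbc` substrings matter):

```python
from pathlib import Path

# "webui" is a placeholder for your actual Webui install directory.
models_dir = Path("webui") / "models" / "ic-light"
models_dir.mkdir(parents=True, exist_ok=True)

# Hypothetical filenames: any names containing "fc" / "fbc" respectively work.
for name in ("iclight_fc.safetensors", "iclight_fbc.safetensors"):
    print(models_dir / name)
```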
Only works with SD 1.5 checkpoints
- txt2img - FC
- txt2img - FBC
- img2img - FC
- Options
Relighting with Foreground Condition
- In the Extension input, upload an image of your subject, then generate a new background using txt2img
- If the generation aspect ratio is different, the `Foreground` image will first be processed with `Crop and resize`
- `Hires. Fix` is supported
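The `Crop and resize` behavior amounts to center-cropping the foreground to the generation aspect ratio before resizing. A sketch of that crop-box computation, assuming the standard Webui semantics (the extension's exact implementation may differ):

```python
def crop_and_resize_box(src_w, src_h, dst_w, dst_h):
    """Compute the centered crop box (left, top, right, bottom) whose aspect
    ratio matches the destination, ready to be resized to dst_w x dst_h."""
    src_ratio = src_w / src_h
    dst_ratio = dst_w / dst_h
    if src_ratio > dst_ratio:          # source is wider -> trim the sides
        crop_w = round(src_h * dst_ratio)
        left = (src_w - crop_w) // 2
        return (left, 0, left + crop_w, src_h)
    else:                              # source is taller -> trim top/bottom
        crop_h = round(src_w / dst_ratio)
        top = (src_h - crop_h) // 2
        return (0, top, src_w, top + crop_h)

# A 1024x768 foreground targeted at 512x512 keeps a centered 768x768 square
print(crop_and_resize_box(1024, 768, 512, 512))  # -> (128, 0, 896, 768)
```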
example output
prompt: outdoors, garden, flowers
Relighting with Foreground and Background Condition
- In the Extension inputs, upload an image of your subject, and another image as the background
- Simply write some quality tags as the prompts
- `Hires. Fix` is supported
example output
prompt: (high quality, best quality)
Relighting with Light-Map Condition
- In the img2img input, upload an image of your subject as normal
- In the Extension input, you can select between different light directions, or select `Custom LightMap` and upload one yourself
- Describe the scene with the prompts
- Low `CFG` (~ 2.0) and high `Denoising strength` (~ 1.0) are recommended
example output
prompt: beach, sunset
source: Right Light
Info: When enabled, the subject will additionally be pasted onto the light map to preserve its original color. This may improve the details at the cost of weaker lighting influence.
prompt: fiery, bright, day, explosion
source: Bottom Light
These settings are available for all 3 modes
- Use the `rembg` package to separate the subject from the background
- If you already have a subject image with alpha, you can simply disable this option
- If you have an anime subject instead, select `isnet-anime` from the Background Removal Model dropdown
- When this is enabled, the separated result will additionally be appended to the outputs
- If the separation is not clean enough, adjust the Threshold parameters to improve the accuracy
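One plausible reading of the Threshold parameters is that near-opaque and near-transparent alpha values get snapped to clean 1/0, removing halo pixels around the cutout. A sketch under that assumption (the parameter names and exact behavior are hypothetical, not taken from the extension source):

```python
def apply_thresholds(alpha, fg_threshold=0.9, bg_threshold=0.1):
    """Snap an alpha mask toward a clean cutout: values at or above
    fg_threshold become fully opaque, values at or below bg_threshold
    become fully transparent, and everything in between is kept."""
    out = []
    for a in alpha:
        if a >= fg_threshold:
            out.append(1.0)
        elif a <= bg_threshold:
            out.append(0.0)
        else:
            out.append(a)
    return out

print(apply_thresholds([0.05, 0.5, 0.95]))  # -> [0.0, 0.5, 1.0]
```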
Use the Difference of Gaussian algorithm to transfer the details from the input to the output. By default, this only uses the DoG of the subject without the background; you can also switch to using the DoG of the entire input image instead. Increasing the Blur Radius will strengthen the effect.
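The detail transfer amounts to extracting the input's high-frequency band (the signal minus its Gaussian blur, the limiting case of a Difference of Gaussians) and adding it onto the output. A 1-D sketch of the idea; a larger blur radius leaves more detail in the high-frequency band:

```python
import math

def gaussian_blur_1d(signal, sigma):
    """Blur a 1-D signal with a normalized Gaussian kernel (edges clamped)."""
    radius = max(1, int(3 * sigma))
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for offset, k in enumerate(kernel):
            j = min(max(i + offset - radius, 0), len(signal) - 1)  # edge clamp
            acc += signal[j] * k
        out.append(acc)
    return out

def restore_details(source, result, sigma=2.0):
    """Add the source's high-frequency detail (source - blur(source)) onto result."""
    blurred = gaussian_blur_1d(source, sigma)
    return [r + (s - b) for s, b, r in zip(source, blurred, result)]
```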
The settings are in the `IC Light` section under the `Stable Diffusion` category in the Settings tab.

- Sync Resolution Button: Adds a button in the `txt2img` tab that changes the `Width` and `Height` parameters to the closest ratio of the uploaded `Foreground` image
- All Rembg Models: By default, the Extension only shows the `u2net_human_seg` and `isnet-anime` options. If those do not suit your needs (e.g. your subject is not a "person"), you may enable this to list all available models instead.
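The Sync Resolution behavior can be pictured as a search over valid generation sizes for the one whose ratio best matches the foreground. This is an educated guess at the logic; the step size and pixel budget are assumptions, so check the extension source for the real rule:

```python
def sync_resolution(fg_w, fg_h, step=64, pixel_budget=512 * 512):
    """Pick Width/Height (multiples of `step`) whose ratio is closest to the
    foreground image, preferring the largest area within the pixel budget."""
    ratio = fg_w / fg_h
    best = None
    for w in range(step, 2048 + step, step):
        for h in range(step, 2048 + step, step):
            if w * h > pixel_budget:
                continue
            # closest ratio first, then largest area as the tie-breaker
            score = (abs(w / h - ratio), -(w * h))
            if best is None or score < best[0]:
                best = (score, (w, h))
    return best[1]

print(sync_resolution(1024, 768))  # -> (512, 384), a 4:3 match
```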
- Select different `rembg` models
- API Support
- Improve `Reinforce Foreground`
- Improve `Restore Details`
- If you click `Reuse Seed` when previewing the appended images instead of the first result image, it will result in an error. This is mostly upstream, as even ControlNet raises this error for the detected maps. I probably won't address it until the Webuis have a unified way to properly append images...
Note: This fork has been heavily rewritten. I will still try to merge any backend changes upstream; however, the frontend will retain my opinionated breaking changes. Therefore, merging this fork is highly discouraged without thorough testing.