Checklist
The issue exists on a clean installation of Fooocus
The issue exists in the current version of Fooocus
The issue has not been reported before recently
The issue has been reported before but has not been fixed yet
What happened?
I created a prompt, and after generation the image was corrupted.
I had heard of a similar problem with a1111, so I assumed a similar situation could occur with Fooocus.
The code I suspect is the implementation of patched_encode_token_weights in patch_clip.py:
z = z * (original_mean / new_mean)
In the line above, it seems there are cases where new_mean is much smaller than original_mean. In fact, dumping these values gives:
original: tensor(-0.0196, device='cuda:0')
new: tensor(-0.0199, device='cuda:0')
original: tensor(0.0005, device='cuda:0')
new: tensor(0.0001, device='cuda:0')
original: tensor(-0.0159, device='cuda:0')
new: tensor(-0.0159, device='cuda:0')
original: tensor(0.0079, device='cuda:0')
new: tensor(0.0080, device='cuda:0')
original: tensor(-0.0202, device='cuda:0')
new: tensor(-0.0205, device='cuda:0')
original: tensor(0.0004, device='cuda:0')
new: tensor(3.7944e-05, device='cuda:0')
In the last pair, for example, original_mean / new_mean is roughly 10, so z gets scaled by an order of magnitude. In some of these cases, the output image was broken.
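For reference, here is a minimal sketch of that rescaling step, assuming the usual pattern where original_mean is taken before weighting and new_mean after. The helper name rescale_to_original_mean, the eps threshold, and the near-zero guard are hypothetical illustrations of a possible mitigation, not the actual Fooocus code:

```python
import torch

# Hypothetical helper (not the actual Fooocus function) showing the rescale step
# used in patched_encode_token_weights, plus a possible guard against a near-zero mean.
def rescale_to_original_mean(z: torch.Tensor, original_mean: torch.Tensor,
                             eps: float = 1e-4) -> torch.Tensor:
    new_mean = z.mean()
    if new_mean.abs() < eps:
        # original_mean / new_mean would blow up and corrupt the conditioning,
        # so skip the rescale (assumption: skipping is an acceptable fallback).
        return z
    return z * (original_mean / new_mean)

# Reproduce the scale factor from the dumped values above.
original_mean = torch.tensor(0.0004)
new_mean = torch.tensor(3.7944e-05)
print(original_mean / new_mean)  # ~10.5: z would be amplified by roughly 10x
```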
Steps to reproduce the problem
Create a prompt
Generate
What should have happened?
Normal image output
What browsers do you use to access Fooocus?
Other
Where are you running Fooocus?
Locally
What operating system are you using?
Windows 11
Console logs
[Fooocus Model Management] Moving model(s) has taken 2.48 seconds
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 3825047950370249823
[Parameters] CFG = 4
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Preparing Fooocus text #1 ...
[Prompt Expansion] masterpiece, best quality, very aesthetic, 8k, detailed, beautiful color, amazing quality, highres, shiny skin, MGCM, 1girl, solo, (blue eyes, blonde hair, short hair, school uniform, pleated skit, red bow, frills, eriza), walking, street, day, outdoors, bag, skirt, sharp
[Fooocus] Encoding positive #1 ...
[Fooocus Model Management] Moving model(s) has taken 0.29 seconds
[Fooocus] Encoding negative #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 2.06 seconds
Using karras scheduler.
[Fooocus] Preparing task 1/1 ...
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 5.02 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:27<00:00, 1.10it/s]
Additional information
No response