
[Bug]: The picture generated through foreground_to_blend is blurry and dark #75

Open

Ning570 opened this issue Apr 19, 2024 · 0 comments

Ning570 commented Apr 19, 2024

What happened?

I am a newbie and I've run into a bug that I cannot find a solution to.
When I use the foreground_to_blend workflow to generate a car poster, the output picture is very dark and has a mosaic effect.
Even more strangely, if I use the foreground_image generated by LayerDiffuse to generate a new blend_image, this problem does not occur.

Steps to reproduce the problem

My workflow:
[screenshot: workflow]

The prompt and checkpoint I used:
[screenshot: prompt]

Output:
[screenshot: output]

What should have happened?

How can I get a better-quality output? I would be extremely grateful if you could help me.

Commit where the problem happens

ComfyUI:
ComfyUI-layerdiffuse:

Sysinfo

Graphics Card: Nvidia
OS: Intel

Console logs

Requested to load SDXL
Loading 1 new model
WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
Merged with diffusion_model.input_blocks.0.0.weight channel changed from torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3]
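
(Context on the shape-mismatch warning above: the fg2blend patch appears to widen the UNet's first convolution from 4 to 8 input channels so the model can condition on the foreground latent as well, which matches the torch.Size([320, 4, 3, 3]) to [320, 8, 3, 3] change in the log. Below is a minimal PyTorch sketch of that kind of channel widening; the function name and the zero-initialization of the new channels are illustrative assumptions, not the actual layerdiffuse code.)

```python
import torch
import torch.nn as nn

def widen_input_conv(conv: nn.Conv2d, new_in_channels: int) -> nn.Conv2d:
    # Hypothetical helper: build a wider conv whose extra input channels
    # start at zero, so it behaves like the original layer whenever the
    # extra inputs carry no signal.
    widened = nn.Conv2d(
        new_in_channels,
        conv.out_channels,
        kernel_size=conv.kernel_size,
        stride=conv.stride,
        padding=conv.padding,
        bias=conv.bias is not None,
    )
    with torch.no_grad():
        widened.weight.zero_()
        # Copy the original weights into the first input-channel slots,
        # e.g. a [320, 4, 3, 3] weight fills the first 4 channels of the
        # new [320, 8, 3, 3] weight, as in the log above.
        widened.weight[:, : conv.in_channels] = conv.weight
        if conv.bias is not None:
            widened.bias.copy_(conv.bias)
    return widened

# The SDXL input conv from the log: 4 latent channels in, 320 out.
conv = nn.Conv2d(4, 320, kernel_size=3, padding=1)
print(widen_input_conv(conv, 8).weight.shape)  # torch.Size([320, 8, 3, 3])
```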

Workflow json file

fg2ble.json

Additional information

No response
