What happened?

When I tried to use soft inpainting, this error appeared:

"Error running post_sample: C:\stable-diffusion-webui-directml\extensions-builtin\soft-inpainting\scripts\soft_inpainting.py"

Generation still seems to finish, but the error suggests something is going wrong. If I change COMMANDLINE_ARGS, the error disappears: replacing "--use-directml" with any option that runs on the CPU instead makes it go away.

Steps to reproduce the problem

I don't know why it happens.

What should have happened?

The error should not appear, so I can be relieved.

What browsers do you use to access the UI ?

Google Chrome

Sysinfo

sysinfo-2024-04-07-01-45.json

Console logs

The error appears at every step:

 88%|███████████████████████████████████████████████████████████████████████▊ | 14/16 [00:10<00:01, 1.35it/s]
*** Error running post_sample: C:\stable-diffusion-webui-directml\extensions-builtin\soft-inpainting\scripts\soft_inpainting.py
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-directml\modules\scripts.py", line 848, in on_mask_blend
        script.on_mask_blend(p, mba, *script_args)
      File "C:\stable-diffusion-webui-directml\extensions-builtin\soft-inpainting\scripts\soft_inpainting.py", line 678, in on_mask_blend
        mba.blended_latent = latent_blend(settings,
      File "C:\stable-diffusion-webui-directml\extensions-builtin\soft-inpainting\scripts\soft_inpainting.py", line 81, in latent_blend
        a_magnitude = torch.norm(a, p=2, dim=1, keepdim=True).to(torch.float64).pow_(
    RuntimeError
 94%|████████████████████████████████████████████████████████████████████████████▉ | 15/16 [00:11<00:00, 1.31it/s]
*** Error running post_sample: C:\stable-diffusion-webui-directml\extensions-builtin\soft-inpainting\scripts\soft_inpainting.py
    Traceback (most recent call last):
      File "C:\stable-diffusion-webui-directml\modules\scripts.py", line 848, in on_mask_blend
        script.on_mask_blend(p, mba, *script_args)
      File "C:\stable-diffusion-webui-directml\extensions-builtin\soft-inpainting\scripts\soft_inpainting.py", line 678, in on_mask_blend
        mba.blended_latent = latent_blend(settings,
      File "C:\stable-diffusion-webui-directml\extensions-builtin\soft-inpainting\scripts\soft_inpainting.py", line 81, in latent_blend
        a_magnitude = torch.norm(a, p=2, dim=1, keepdim=True).to(torch.float64).pow_(
    RuntimeError

Additional information

No response
torch-directml doesn't seem to support the float64 power operation. What's your GPU?
It's a Radeon RX 6600.
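For reference, here is a minimal sketch of the pattern that fails. This is an illustrative reproduction only: it assumes torch and torch-directml are installed, and the tensor shape and exponent are made up rather than taken from the extension.

# Minimal sketch of the failing pattern from latent_blend() in soft_inpainting.py.
# Illustrative only: the tensor shape and exponent are arbitrary, and
# torch-directml must be installed for torch_directml.device() to work.
import torch
import torch_directml

device = torch_directml.device()  # DirectML device, e.g. the Radeon RX 6600
a = torch.randn(1, 4, 64, 64, device=device)

# Mirrors line 81 of soft_inpainting.py: cast to float64 on the DirectML
# device, then raise to a power in place. torch-directml lacks float64
# support for this, so it raises RuntimeError; on the CPU it runs fine.
a_magnitude = torch.norm(a, p=2, dim=1, keepdim=True).to(torch.float64).pow_(2)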
I'll add a CPU fallback, but you'd be better off trying ZLUDA, which has wider support.
[DirectML] Add float64 power CPU fallback. (#436)
e51a849
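Roughly, the idea of a CPU fallback looks like the sketch below. This only illustrates the approach; it is not necessarily the exact change made in the commit above, and the helper name is hypothetical.

# Illustrative sketch of a float64 power CPU fallback (not the actual patch).
import torch

def pow_float64(t: torch.Tensor, exponent: float) -> torch.Tensor:
    """Compute t ** exponent in float64, falling back to the CPU when the
    current backend (e.g. DirectML) cannot run float64 ops natively."""
    try:
        return t.to(torch.float64).pow_(exponent)
    except RuntimeError:
        # Do the float64 math on the CPU; the caller can cast the result
        # back down and move it to the original device afterwards.
        return t.detach().to("cpu", torch.float64).pow_(exponent)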
I think this is fixed now. Could we close this issue?