
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select) #2373

Closed
Skyfallen228 opened this issue Oct 12, 2022 · 17 comments

Comments

@Skyfallen228

When I try to use txt2img, the first image is generated normally, but when I try to generate the next one it shows:

> RuntimeError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 4.00 GiB total capacity; 3.36 GiB already allocated; 0 bytes free; 3.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

After that, if I try to repeat the generation, it shows:

> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)

My GPU is a GTX 1650 Super 4GB.
Yes, I know there is not enough video memory, but I use the parameters `--precision full --no-half --medvram` in webui-user.bat, because without them I only generate completely black pictures.
Img2img works fine; the only problem is with txt2img.
How do I fix it?
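For context, those flags go in `webui-user.bat`. A minimal sketch of the file with the flags mentioned above (the surrounding lines follow the default template shipped with the webui; adjust for your own setup):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
REM Flags from this thread for 4 GB cards; --medvram trades speed for memory,
REM and --precision full --no-half avoids black images on some GTX 16xx GPUs.
set COMMANDLINE_ARGS=--precision full --no-half --medvram

call webui.bat
```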

@vasanthwho

did u find the solution??

@TrueTGamer

did anybody solve it?

@TrueTGamer

> did u find the solution??

hey dude did you manage to fix it?

@Eprise1701e

Having the same issue with a 1660. It worked on a previous install, then I went and updated, and now I cannot use txt2img.

@qvisionstudios

qvisionstudios commented Oct 24, 2022

Having the same problem. It was working fine with the standard model; once I switch to any other loaded model, I get the error. Running an NVIDIA GeForce RTX 3090, 24GB.
Edit: using SD v1.5

@Skyfallen228
Author

Skyfallen228 commented Oct 25, 2022

> did u find the solution??

My friend told me to put `set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --opt-split-attention` in webui.bat and it worked.
I can now make pictures at 896x896.

@TrueTGamer

> > did u find the solution??
>
> My friend told me to put `set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --opt-split-attention` in webui.bat and it worked. I can now make pictures at 896x896.

Nope, I've tried this as well and it hasn't worked at all. I moved over to Midjourney and decided to subscribe to their service. Hopefully when I get a better PC, I might have better luck.

@CapRogers9527

> > > did u find the solution??
> >
> > My friend told me to put `set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --opt-split-attention` in webui.bat and it worked. I can now make pictures at 896x896.
>
> Nope, I've tried this as well and it hasn't worked at all. I moved over to Midjourney and decided to subscribe to their service. Hopefully when I get a better PC, I might have better luck.

After you put `set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --opt-split-attention` in webui.bat, there may still be an error warning like "You can skip this check with --disable-safe-unpickle commandline argument."
The way to solve that is to add the `--disable-safe-unpickle` flag to the .bat file you use to start Stable Diffusion, so the launch line reads `python launch.py --disable-safe-unpickle` when you are done.
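Putting the pieces together, the COMMANDLINE_ARGS line in `webui-user.bat` could combine both fixes like this (a sketch; note that `--disable-safe-unpickle` turns off a safety check on pickled checkpoints, so only use it with models you trust):

```bat
REM Low-VRAM flags from this thread plus the unpickle-check bypass.
set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --opt-split-attention --disable-safe-unpickle
```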


@Moxie1776

Mine started when I added the extension sd-webui-refiner, and went away when I removed it.

@suonnon

suonnon commented Aug 9, 2023

> Mine started when I added the extension sd-webui-refiner, and went away when I removed it.

Problem solved!!!! Thank you so much!! Sad to say goodbye to the SDXL refiner integration when generating, but happy to get rid of this issue!

@Wanninayake

> > did u find the solution??
>
> My friend told me to put `set COMMANDLINE_ARGS=--precision full --no-half --lowvram --always-batch-cond-uncond --opt-split-attention` in webui.bat and it worked. I can now make pictures at 896x896.

Was able to fix the issue. Thanks a lot.

@marks202309

> > Mine started when I added the extension sd-webui-refiner, and went away when I removed it.
>
> Problem solved!!!! Thank you so much!! Sad to say goodbye to the SDXL refiner integration when generating, but happy to get rid of this issue!

Yeah, it works after I disabled the SDXL refiner. Good find.

@zzzgithub

I am trying to make a stylized AnimateDiff animation.
Prompt: <lora:LCM Lora_V1.5:1>
Sampling: LCM
ControlNet: lineart
ControlNet 1: IP-Adapter
AnimateDiff model: mm_sd_v15_v2.ckpt
When I try to generate, it shows the error:

> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)

Please help, how can I solve this problem? The other solutions above did not help me.
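For anyone debugging this class of error: PyTorch raises it whenever one operand of an operation lives on the GPU and another on the CPU (which is why it appears with different op names like `wrapper__index_select` and `wrapper_CUDA___slow_conv2d_forward`), and the usual fix in code is to move everything to one device with `.to(device)` before the call. A torch-free sketch of the pattern (the classes and function here are illustrative mocks, not PyTorch's real internals):

```python
# Illustrative mock of PyTorch's device check -- not real PyTorch internals.
class FakeTensor:
    def __init__(self, device):
        self.device = device

    def to(self, device):
        # Returns a copy on the target device, like torch.Tensor.to().
        return FakeTensor(device)


def index_select(weight, index):
    # PyTorch ops require all operands to be on the same device.
    if weight.device != index.device:
        raise RuntimeError(
            "Expected all tensors to be on the same device, but found at "
            f"least two devices, {index.device} and {weight.device}!"
        )
    return "ok"


weight = FakeTensor("cuda:0")
index = FakeTensor("cpu")      # e.g. produced on the CPU by an extension

# Calling index_select(weight, index) here would raise the RuntimeError.
index = index.to("cuda:0")     # the standard fix: move it to the GPU first
print(index_select(weight, index))  # prints: ok
```

In the webui itself this move happens inside the code, which is why the practical fixes in this thread are about removing whichever extension or setting leaves a tensor behind on the CPU.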

@lynrayy

lynrayy commented May 25, 2024

> Mine started when I added the extension sd-webui-refiner, and went away when I removed it.

THANKS

Fuck this refiner. It didn't work at all anyway.

@lynrayy

lynrayy commented May 25, 2024

(screenshot of the Extensions tab)
Solution: remove or disable the sd-webui-refiner extension.

@Cheesdumpling1234

A solution that worked for me was switching to the original Model.ckpt and making a new hypernetwork with that model running.
(screenshot)

@Cheesdumpling1234

> A solution that worked for me was switching to the original Model.ckpt and making a new hypernetwork with that model running.

(screenshot)

Atry pushed a commit to Atry/stable-diffusion-webui that referenced this issue Jul 11, 2024