[Bug]: RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same (On 1660ti) #5088
Comments
I think I found the issue on my end with this error: I was using the embedding from https://huggingface.co/datasets/Nerfgun3/bad_prompt as a negative prompt. May be the same for all embeddings.
I don't have any embeddings installed, and I got this error without using any negative prompts. I don't know if our issues are exactly the same. You might want to keep your issue open.
That was my problem as well. Solved by removing any reference to embeddings in my prompt.
Well, mine had something to do with the arguments I use when I run the program. However, I just found this reddit thread that has a workaround so you can run Automatic1111 on a 1660ti without using that argument. So I tried it, and now the 768 model is working, but that's a band-aid, since it will revert anytime I try to update. You have to add this line to `modules/devices.py`: `torch.backends.cudnn.benchmark = True`
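A minimal sketch of the workaround described above, assuming it is pasted near the top of `modules/devices.py` (this is the reddit band-aid from the thread, not an official fix, and exact placement within the file is an assumption):

```python
# Workaround sketch for modules/devices.py on GTX 16xx (Turing) cards.
# Assumption: added near the module's other torch setup; it will be lost
# whenever a `git pull` replaces the file, as noted later in this thread.
import torch

# Let cuDNN auto-tune convolution kernels instead of using the defaults
# that trigger the HalfTensor/FloatTensor mismatch on these GPUs.
torch.backends.cudnn.benchmark = True
```

Because this edits a file the repo itself tracks, any update that touches `devices.py` reverts it, which is why commenters below keep having to reapply it.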
The reason why you have to change it each time is that you have `git pull` in your startup sequence, so the file gets overwritten on every update.
I do not have git pull in my startup sequence. Yeah, they just updated `devices.py` again, so I had to redo it. It's something that needs to be implemented as a menu option for people using a 1660ti.
@slymeasy's fix didn't work for me, but I found another one-line solution: #5113 (comment)
Is there an existing issue for this?
What happened?
I'm using a 6GB GTX 1660ti (a Turing GPU with no tensor cores) with the arguments --precision full --no-half --medvram, and I'm getting this error when I try to use the SD 2.0 model.
Steps to reproduce the problem
Use Automatic1111 with a 1660ti
Use the arguments --precision full --no-half --medvram
Add 768-v-ema.yaml to the models/stable-diffusion folder
Add 768-v-ema.ckpt to the models/stable-diffusion folder
Select the 768-v-ema.ckpt model
Enter a prompt
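For reference, the flags from the steps above would typically be set in the Windows launcher script (assumption: the default `webui-user.bat` from the repo; note that `--medvram` takes two dashes):

```shell
rem webui-user.bat -- launcher for AUTOMATIC1111's stable-diffusion-webui
rem Assumption: default Windows setup; these are the repro flags from this issue.
set COMMANDLINE_ARGS=--precision full --no-half --medvram
```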
What should have happened?
It should have produced an image.
Commit where the problem happens
b5050ad
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
Additional information, context and logs
No response