
[Bug]: RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same (On 1660ti) #5088

Closed
slymeasy opened this issue Nov 26, 2022 · 7 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

I'm using a 6 GB GTX 1660 Ti (a Turing GPU with no tensor cores) with the arguments --precision full --no-half --medvram, and I'm getting this error when I try to use the SD 2.0 model.

RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
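
For reference, this error is plain PyTorch complaining that float16 activations reached float32 weights. A minimal sketch that reproduces the same message outside the webui (illustrative only, not webui code; requires a CUDA device):

    import torch

    # Module weights default to float32, even after moving to CUDA
    conv = torch.nn.Conv2d(3, 8, kernel_size=3).cuda()

    # Feeding it a float16 input raises the same RuntimeError:
    # Input type (torch.cuda.HalfTensor) and weight type
    # (torch.cuda.FloatTensor) should be the same
    x = torch.randn(1, 3, 64, 64, device="cuda").half()
    y = conv(x)

So somewhere in the pipeline a half-precision tensor is reaching a module that stayed in float32 despite --no-half.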

Steps to reproduce the problem

1. Use Automatic1111 with a 1660 Ti
2. Launch with the arguments --precision full --no-half --medvram
3. Add 768-v-ema.yaml to the models/stable-diffusion folder
4. Add 768-v-ema.ckpt to the models/stable-diffusion folder
5. Select the 768-v-ema.ckpt model
6. Enter a prompt

What should have happened?

It should have produced an image.

Commit where the problem happens

b5050ad

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--precision full --no-half --medvram
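
For reference, on Windows these flags normally live in webui-user.bat (a sketch, assuming the stock launcher):

    set COMMANDLINE_ARGS=--precision full --no-half --medvram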

Additional information, context and logs

No response

@Goldmato commented Nov 26, 2022

I think I found the issue on my end with this error: using the embedding from https://huggingface.co/datasets/Nerfgun3/bad_prompt as (bad_prompt:0.8) in Negative Prompts seems to be the culprit; removing it fixed the error completely.

May be the same for all embeddings.

@slymeasy (Author) commented Nov 26, 2022

> I think I found the issue on my end with this error: using the embedding from https://huggingface.co/datasets/Nerfgun3/bad_prompt as (bad_prompt:0.8) in Negative Prompts seems to be the culprit; removing it fixed the error completely.
>
> May be the same for all embeddings.

I don't have any embeddings installed and I got this error without using any negative prompts. I don't know if our issues are exactly the same. You might want to keep your issue open.

@AugmentedRealityCat

> I think I found the issue on my end with this error: using the embedding from https://huggingface.co/datasets/Nerfgun3/bad_prompt as (bad_prompt:0.8) in Negative Prompts seems to be the culprit; removing it fixed the error completely.
>
> May be the same for all embeddings.

That was my problem as well. Solved by removing any reference to embeddings in my prompt.

@slymeasy (Author) commented Nov 26, 2022

Well, mine had something to do with the arguments I use when I run the program. I had to use --precision full --no-half because I am using a 1660 Ti.

However, I just found this Reddit thread with a workaround that lets you run Automatic1111 on a 1660 Ti without those arguments. I tried it, and now the 768 model is working, but it's a band-aid, since the change gets reverted any time I update.

You have to add this code to modules/devices.py:

# cuDNN workaround for GTX 16xx cards: force cuDNN on and let it
# benchmark/autotune its kernels
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

def enable_tf32():
    if torch.cuda.is_available():
        torch.backends.cuda.matmul.allow_tf32 = True
        torch.backends.cudnn.allow_tf32 = True
        torch.backends.cudnn.benchmark = True
        torch.backends.cudnn.enabled = True

errors.run(enable_tf32, "Enabling TF32")
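
(Worth noting: the allow_tf32 flags should be no-ops on a Turing card like the 1660 Ti, since TF32 requires Ampere or newer; the module-level cudnn.benchmark / cudnn.enabled lines are presumably what make the difference here.)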

@AugmentedRealityCat

> the change gets reverted any time I update.

The reason you have to redo the change each time is that you have the git pull command in your startup sequence, which tells git to download the latest official master branch.
You can switch the active branch to another one; that way git will keep updating that branch (the one with the fix) instead of replacing it with master, the official version released before the fix.
The command to switch branches is git checkout followed by the branch name. The branch must exist locally before you can check it out, and you can list the branches you already have by typing git branch.
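
A sketch of that workflow (the branch name here is hypothetical; substitute whichever branch actually carries the fix):

    git branch                # list the branches you have locally
    git checkout 1660ti-fix   # switch to the branch with the fix (hypothetical name)
    git pull                  # later pulls now update this branch, assuming it tracks a remote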

@slymeasy (Author)

> The reason you have to redo the change each time is that you have the git pull command in your startup sequence, which tells git to download the latest official master branch.
> You can switch the active branch to another one; that way git will keep updating that branch (the one with the fix) instead of replacing it with master, the official version released before the fix.
> The command to switch branches is git checkout followed by the branch name. The branch must exist locally before you can check it out, and you can list the branches you already have by typing git branch.

I do not have git pull in my startup sequence.

Yeah, they just updated devices.py again, so I had to redo it. It's something that needs to be implemented as a menu option for people using a 1660 Ti.

@cameron commented Dec 12, 2022

@slymeasy's fix didn't work for me, but I found another one-line solution: #5113 (comment)
