While running the clip_guided notebook in CPU mode I get: "RuntimeError - Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.FloatTensor instead" #28
Comments
Not sure what is happening here, but you should try to use a GPU if possible. See the comment in the notebook:
CPU mode takes 20 times more computation time than GPU mode.
I know, but my current GPU doesn't have enough VRAM... that's why I was running in CPU mode.
Yes, sure. In the meantime, try to use a free GPU on Google Colab.
@woctezuma I finally got hold of a new GPU with 6 GB of VRAM... so I am now running the clip_guided notebook again in GPU mode, but I am seeing exactly the same error I documented above...
Thanks! I saw them already, but I don't have the necessary knowledge of ML and the related libraries to properly make use of them...
It could just be a simple change to this line: `glide_text2im/clip/encoders.py`, lines 123 to 127 at commit 9cc8e56.
You could try to replace `F.embedding(cast(torch.Tensor, t), self.w_t)` with either `F.embedding(cast(torch.Tensor, t.long()), self.w_t)` or `F.embedding(cast(torch.Tensor, t).long(), self.w_t)`.
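The reason either variant works: `F.embedding` treats its first argument as a lookup index into the embedding table, so PyTorch requires it to be an integer (`Long`) tensor, and a float tensor triggers exactly the `RuntimeError` quoted in the title. A minimal sketch of the failure and the `.long()` cast (standalone example with a made-up embedding table, not the actual `encoders.py` code):

```python
import torch
import torch.nn.functional as F

# Hypothetical embedding table: 10 tokens, each mapped to a 4-dim vector.
w_t = torch.randn(10, 4)

# Token indices that arrive as floats -- the situation in the traceback.
t = torch.tensor([1.0, 3.0, 5.0])

# F.embedding requires Long indices; casting with .long() fixes the error.
out = F.embedding(t.long(), w_t)
print(out.shape)  # torch.Size([3, 4])
```

Passing `t` without the cast raises `RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long`, which is why either placement of `.long()` in the suggested replacements resolves the issue.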
Ok, thanks! Now it works, at least in CPU mode!
When I run clip_guided notebook in CPU mode, I get the following error at the "Sample from the base model" cell:
Can anyone help?
Thanks!