Possibility to use with byt5-small? #23
Comments
Unfortunately, Paella currently requires at least 30GB of memory, since we trained it with large conditioning models. We will change that in future models.
So not even on an RTX 3090? That's a pretty huge restriction, if I do say so myself.
Definitely. Paella is not a finished research project and we are still working on improving many things. One thing you can do is not load T5 at all and enable …
I tried, but I think there are references to the T5 components even outside the checks for …
I managed to run Paella on Colab with ~10GB of VRAM by using only the CLIP image model to create variations and deleting the T5 model and the prior model (the CLIP text encoder could also be deleted). If you have an RTX 3090, I think you can try loading the byT5-XL model alone, saving the embeddings, deleting the model, and then loading the rest (if that doesn't work, try loading the model in half precision). A rough sketch of that load-embed-delete approach is shown below.
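A minimal sketch of the workflow described above, assuming a standard Hugging Face setup: load the byT5-XL encoder alone (in half precision), compute and keep the text embeddings, free the encoder, and only then load the rest of the pipeline. The `google/byt5-xl` checkpoint name and half-precision load come from the comment; `load_paella_generator` and `sample` at the end are placeholders, not the repo's actual API.

```python
import gc
import torch
from transformers import AutoTokenizer, T5EncoderModel

device = "cuda"
prompt = "a painting of a fox in the snow"

# 1) Load the byT5-XL encoder alone (fp16 to save VRAM), embed the prompt, then delete it.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-xl")
encoder = T5EncoderModel.from_pretrained(
    "google/byt5-xl", torch_dtype=torch.float16
).to(device)
with torch.no_grad():
    tokens = tokenizer(prompt, return_tensors="pt").to(device)
    text_embeddings = encoder(**tokens).last_hidden_state  # keep these for conditioning

del encoder
gc.collect()
torch.cuda.empty_cache()  # free the ~several GB the encoder was holding

# 2) Only now load the Paella generator and sample, conditioning on the saved embeddings.
#    `load_paella_generator` is a stand-in for whatever the repo actually provides.
# generator = load_paella_generator().to(device)
# images = generator.sample(text_embeddings, ...)
```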
Can you share your code for image variations? Thank you. |
Hi, the XL model is way too large for me! I tried byt5-small, but it crashes:
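Since the traceback isn't shown, the following is only a hedged guess at the cause: the byt5-small encoder itself loads fine, but its embedding width differs from byT5-XL's, so layers trained against the XL embeddings would hit a shape mismatch if the smaller model is swapped in directly. The sketch below just compares the two configs to make that visible; it does not reproduce or fix the crash.

```python
from transformers import AutoConfig, T5EncoderModel

# Load byt5-small and inspect its hidden size; fetch only the config for byt5-xl.
small = T5EncoderModel.from_pretrained("google/byt5-small")
xl_cfg = AutoConfig.from_pretrained("google/byt5-xl")

print("byt5-small d_model:", small.config.d_model)
print("byt5-xl    d_model:", xl_cfg.d_model)
# If these differ (assumption: Paella's conditioning layers expect the XL width),
# the small model's embeddings can't be fed in without retraining or a projection layer.
```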