How to use DeepFloyd IF #72
Sorry, I'm temporarily rebuilding the DeepFloyd IF implementation. I will let you know as soon as it is completed. |
Sad, people were expecting a tutorial on this from me :/ but thanks a lot, looking forward to that. So how do we add other models? Does it have a feature to auto-download the necessary files? Let's say this repo: https://huggingface.co/dreamlike-art/dreamlike-anime-1.0 |
Click here for instructions on how to add a model. Or you can put ckpt and safetensors files in |
Nice, thanks. Any ETA for DeepFloyd? |
I'm planning on implementing it this week. Tomorrow at the earliest. It's not that hard of a task. |
thank you so much testing now |
Looks like the automatic install fails on Windows. venv: "F:\deepfloyd ai\Radiata\venv\Scripts\Python.exe". What library do we need? I will try activating the venv. |
I tried; this is the error below: |
If you only use DeepFloyd IF, you don't need to turn on TensorRT. |
I need it for DeepFloyd :) |
Oh sorry. Currently TensorRT is not yet compatible with DeepFloyd IF. |
Wait, I am confused. |
Yes, Radiata supports DeepFloyd IF. |
In your documentation it shows this:

```
@echo off
set PYTHON=
call launch.bat
```
|
oh sorry understood. This is a mistake in the documentation. |
Try setting like this. |
I'm glad. IF has just been implemented and is in an unstable state. It would be helpful if you could contact me if there is a problem. |
Sure, I will, hopefully. By the way, does it support optimizations? What command line arguments should I use for lower VRAM? I will hopefully make a video so I can explain it to my audience. I am the owner of https://www.youtube.com/secourses; we are getting 200k+ monthly views, mostly on generative AI. |
Currently there are 5 modes. |
Using |
Using the 'auto' mode, I found a problem: after the first two steps of generating images, RAM and VRAM are not released (it stays like this until the program is closed), leaving insufficient resources to start the third step: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.00 GiB (GPU 0; 22.00 GiB total capacity; 9.46 GiB already allocated; 10.38 GiB free; 9.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF. System info: |
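The allocator hint from that error message can be set before launch. A minimal sketch for the Windows launch script used in this thread; the 512 MiB value is my assumption to tune, not a number from the thread:

```
@echo off
set PYTHON=
rem Pin to one GPU and cap allocator block size, as the OOM message suggests.
set CUDA_VISIBLE_DEVICES=0
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512
call launch.bat
```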
Without this it doesn't work at the moment (I am testing on Windows 10 and Python 3.10.9): set CUDA_VISIBLE_DEVICES=0. And with that, even with the 24 GB VRAM of an RTX 3090, I am getting an out of memory error :D Looks like there is a memory leak or something else missing. **I will make a tutorial for my subscribers and also show your web UI. Can you fix these? Here are the errors:** When set to auto, with both GPU 0 and GPU 1 visible, testing only the first step (64 pixels):
When only CUDA 0 is visible, stage 3, testing option medvram:
When normal mode is selected (I did restart), stage 1 done:
|
sequential_off_load gives the error below; I have 64 GB RAM:
|
off_load -
|
lowvram: the first stage fails:
|
Tested all options, and stage 3 fails :) |
@ddPn08 In that offload mode, are you doing this?

```python
gc.collect()
with torch.cuda.device(self.gpu_id):
    torch.cuda.empty_cache()
    torch.cuda.ipc_collect()
```

I haven't looked at the codebase, but it seems that some model wasn't unloaded (or offloaded), or there is some reference to it that prevents it from doing so. |
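For illustration, a minimal CPU-only sketch of that last point: `empty_cache()` can only return memory whose tensors have no live Python references, so the reference must be dropped first. `FakeStage` is a hypothetical stand-in for a pipeline stage:

```python
import gc
import weakref

class FakeStage:
    """Stand-in for a pipeline stage; a real one would hold CUDA tensors."""

stage = FakeStage()
probe = weakref.ref(stage)  # lets us observe whether the object survives

# Drop the last Python reference *before* any cache-emptying call;
# while a reference is alive, the allocator still counts it as in use.
del stage
gc.collect()

print(probe() is None)  # True: the object is gone
# On a CUDA machine, follow with:
#   torch.cuda.empty_cache()
#   torch.cuda.ipc_collect()
```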
@ddPn08 I sent you a message on Twitter; my Discord: MonsterMMORPG#2198 |
@ddPn08 There is a Kaggle notebook that shows how to load it onto 2 GPUs; that may help: https://www.kaggle.com/furkangozukara/deepfloyd-if-4-3b-generator-of-pictures-video-vers |
Thank you for all the experiments. The OOM is probably caused by your version of torch. Try 2.0.0. Other errors seem to require modification of device-related code. |
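To check which torch version is installed without initialising CUDA, one option is a generic Python metadata lookup (this is not a Radiata command, just a quick check):

```python
# Read torch's version from package metadata instead of importing it,
# so no CUDA context is created just to check the number.
from importlib.metadata import PackageNotFoundError, version

try:
    report = "torch " + version("torch")
except PackageNotFoundError:
    report = "torch is not installed"
print(report)
```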
Is this fixed? Hopefully; I am planning a tutorial very soon. |
Sorry for leaving this for a while. I gave up on the DeepFloyd IF implementation; we are moving towards SDXL support instead. |
The Hugging Face repo has too many files.
Currently I am downloading via git clone, but the files are over 100 GB.
So how do we use the DeepFloyd IF large model? Thank you.
Also, how do I set a different port to launch on?
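On the download size: a bare `git clone` pulls every file in the repo, including duplicate precision variants. A sketch of selective download, assuming the `huggingface_hub` package; the repo id and patterns are illustrative, and the filtering logic is shown locally with `fnmatch` so it runs without network access:

```python
from fnmatch import fnmatch

# Patterns in the style of huggingface_hub's `allow_patterns` option,
# which lets snapshot_download skip files you don't need.
ALLOW = ["*.json", "*.txt", "*fp16*"]

def keep(path: str) -> bool:
    """Return True if a repo file matches any allow pattern."""
    return any(fnmatch(path, pat) for pat in ALLOW)

# Real usage (needs `pip install huggingface_hub` and network access):
# from huggingface_hub import snapshot_download
# snapshot_download("DeepFloyd/IF-I-XL-v1.0", allow_patterns=ALLOW)

print(keep("unet/diffusion_pytorch_model.fp16.safetensors"))  # True
print(keep("unet/diffusion_pytorch_model.safetensors"))       # False
```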