I want to use it, but my video memory is only 12G #11
Comments
Not working on a 32 GB VRAM card either. I needed 80 GB of VRAM to avoid the out-of-memory error, but even then it failed for me with another error about a float not being iterable.
Thank you for your suggestion. We will consider improving memory usage.
I ran it on an NVIDIA 3080 (10 GB) and got 173 s/it (total generation time was about 1.5 hours). Then I reduced the output size to 512×512 and the time dropped to 50 s/it (about 25 minutes total), which is still long, but definitely better.
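The speedup from dropping the resolution is easy to sanity-check with back-of-envelope arithmetic. FLUX-style pipelines encode the image into a latent that is 8× smaller per side and then patchify it with 2×2 patches before the transformer; those two factors are FLUX's usual values, but treat the exact numbers here as assumptions, not something stated in this thread:

```python
# Rough sketch: why 1024x1024 -> 512x512 helps so much on a DiT like FLUX.
# vae_factor=8 and patch=2 are assumed (typical FLUX values).

def latent_tokens(width: int, height: int, vae_factor: int = 8, patch: int = 2) -> int:
    """Number of image tokens the transformer attends over."""
    return (width // vae_factor // patch) * (height // vae_factor // patch)

tok_1024 = latent_tokens(1024, 1024)
tok_512 = latent_tokens(512, 512)

print(tok_1024, tok_512)          # 4096 1024
print(tok_1024 / tok_512)         # 4.0  -> 4x fewer image tokens
print((tok_1024 / tok_512) ** 2)  # 16.0 -> up to 16x cheaper self-attention
```

Per-step cost falls somewhere between the 4× (linear layers) and 16× (attention) factors, which is consistent with the observed drop from 173 s/it to 50 s/it.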
How did you do that? Thank you.
Please consider following the tips at https://github.com/bytedance/InfiniteYou?#memory-requirements first.
Honestly, I'm not sure; I just installed it and ran it. My desktop setup: Ryzen 5 3600. I did get an out-of-memory error after bumping the image size to 1024×1024, though.
100%|██████████| 30/30 [1:03:21<00:00, 126.72s/it]
Try to decrease the output size to 512×512.
This seems impossible, because the built-in InfiniteYou model in the directory InfiniteYou/infu_flux_v1.0/aes_stage2/InfuseNetModel contains a diffusion… file that is 11 GB. It also required downloading the FLUX model, and when I saw the largest one was 23 GB, I immediately aborted the download. Your VRAM likely couldn't handle running it, so I honestly don't know how you managed to run it.
It runs on 10 GB of VRAM and I couldn't get it running on 24 GB? Might be a skill issue on my part; I'll try again tomorrow.
ChatGPT suggested that the folks at ByteDance are likely using CPU offloading.
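For intuition, here is a toy sketch of what sequential CPU offloading does: each submodule lives in CPU RAM, is moved to the GPU only for its own forward pass, then evicted, so peak GPU memory is one submodule rather than the whole model. This is plain Python with made-up block names and sizes; in a real diffusers pipeline the equivalent one-liner is `pipe.enable_sequential_cpu_offload()` (or the faster, less aggressive `pipe.enable_model_cpu_offload()`):

```python
# Toy model of sequential CPU offloading -- all names/sizes are hypothetical.

class Block:
    def __init__(self, name: str, size_gb: float):
        self.name, self.size_gb = name, size_gb
        self.device = "cpu"  # weights start in system RAM

class OffloadedModel:
    def __init__(self, blocks):
        self.blocks = blocks
        self.peak_gpu_gb = 0.0

    def forward(self):
        for block in self.blocks:
            block.device = "gpu"  # upload just this block for its step
            self.peak_gpu_gb = max(self.peak_gpu_gb, block.size_gb)
            block.device = "cpu"  # evict before loading the next block

model = OffloadedModel([Block("text_encoder", 9.0),
                        Block("transformer", 23.0),
                        Block("vae", 0.3)])
model.forward()
print(model.peak_gpu_gb)  # 23.0 -- one block at a time, not 32.3
```

The trade-off is speed: every step pays PCIe transfer cost, which is one plausible reason generation is so slow on small cards.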
I think that must be it. What LLM tool are you using to run it? Can you share it? Thank you!
I use PyCharm 2022.3.1 (Community Edition). It isn't an "LLM tool"; it's an IDE for Python development. Just clone the repository from Git, install the required packages from requirements.txt, and run it.
|
My RTX 5080 is still at 100 s/it. Why?
Try my workflow with GGUF nodes: https://civitai.com/models/1424364 . On my RTX 4080 SUPER with 16 GB VRAM it takes 70 to 120 seconds to generate a 1024×1024 image, which I consider very acceptable speed.
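Quantization is why a GGUF build fits where the full checkpoint does not. A rough sizing sketch, assuming FLUX.1's roughly 12 billion transformer parameters and nominal GGUF bits-per-weight figures (Q8_0 stores about 8.5 bits per weight including scales, Q4-class formats around 4.5; treat these as approximations):

```python
# Approximate checkpoint size at different quantization levels.
# Parameter count (12e9) and bits-per-weight values are assumptions.

def model_gb(params: float, bits_per_weight: float) -> float:
    """Weight storage in GB (decimal) for a given bit width."""
    return params * bits_per_weight / 8 / 1e9

params = 12e9
for name, bpw in [("bf16", 16), ("Q8_0", 8.5), ("Q4", 4.5)]:
    print(f"{name}: ~{model_gb(params, bpw):.1f} GB")
# bf16: ~24.0 GB   Q8_0: ~12.8 GB   Q4: ~6.8 GB
```

That puts a Q4-class GGUF comfortably inside 16 GB with room left for activations, matching the timings reported above.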
It works on my RTX 4090, but it took me over 3 hours to generate one image.
Thank you for sharing. Is the workflow meant to be used in ComfyUI?
@pppking9527 Yes, it is for ComfyUI. The node was made by the ZenAI Team (https://github.com/ZenAI-Vietnam/ComfyUI_InfiniteYou) and modded by me to add GGUF support, LoRA support, a SIMILARITY / FLEXIBILITY model switch, and minor adjustments. Just remember to install the requirements first, and then carefully follow the model download and install instructions of the ZenAI fork: https://github.com/ZenAI-Vietnam/ComfyUI_InfiniteYou
Anyone know how to reduce the memory requirements?