
Memory requirements #4
Open · skydam opened this issue Mar 21, 2025 · 9 comments

Comments

@skydam

skydam commented Mar 21, 2025

Does anybody else have these ridiculous memory requirements? I'm running on a 3090, and each generation takes more than an hour (150.20 s/it).

@GraftingRayman

GraftingRayman commented Mar 21, 2025

Any chance you can post a screenshot of your InfuseNetModel directory?

@skydam
Author

skydam commented Mar 21, 2025

[Screenshot of the InfuseNetModel directory]
Yes, of course! It's using 41.6 GB of GPU memory, which means it's swapping like crazy.

@GraftingRayman

Thank you. For some reason, when I try to run it, it tries to join the shards and runs out of VRAM.

Wow, 41.6 GB of VRAM is huge. No chance with my 16 GB.

@EndlessSora
Collaborator

Thank you guys for your suggestion. We will try to improve memory usage for users' convenience. We also welcome community contributions.

@skydam
Author

skydam commented Mar 21, 2025

That would be great. The results are really promising, but >45 minutes per generation is a bit much, if you know what I mean! ;-)

@EndlessSora
Collaborator

We have released our Hugging Face online demo. Please feel free to try it: https://huggingface.co/spaces/ByteDance/InfiniteYou-FLUX

@niftyflora

I am on a 3090 with 24 GB of VRAM and I cannot even run it. I just rebooted the computer to make sure it's fresh and nothing else is running, but still nothing. Here is the relevant part of the error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 54.00 MiB. GPU 0 has a total capacity of 23.68 GiB of which 12.88 MiB is free. Process 2212 has 394.45 MiB memory in use. Including non-PyTorch memory, this process has 23.14 GiB memory in use. Of the allocated memory 22.89 GiB is allocated by PyTorch, and 8.64 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
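A note on the allocator hint in that traceback: PYTORCH_CUDA_ALLOC_CONF is read when PyTorch initializes CUDA, so it has to be set before torch is imported (or exported in the shell before launching). It reduces fragmentation-related OOMs but won't shrink the model itself. A minimal sketch:

```python
import os

# Must run before `import torch` anywhere in this process: the CUDA
# caching allocator reads PYTORCH_CUDA_ALLOC_CONF at initialization.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])  # expandable_segments:True
# ...now it is safe to `import torch` and build the pipeline.
```

This mainly helps when the "reserved by PyTorch but unallocated" figure is large; here it is only 8.64 MiB, so the card is simply full.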

@niknah

niknah commented Mar 25, 2025

Is it possible to use flux fp8 instead?
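Some back-of-the-envelope math on why fp8 would help (assuming the FLUX transformer is on the order of 12B parameters; that is a round illustrative figure, not an official count): at bf16, the weights alone nearly fill a 24 GB card before activations, text encoders, and the VAE are loaded.

```python
def weight_memory_gib(n_params: float, bytes_per_param: float) -> float:
    """Weight-only memory footprint in GiB (ignores activations and caches)."""
    return n_params * bytes_per_param / 1024**3

# Assumption: ~12B parameters for the FLUX transformer (illustrative).
N_PARAMS = 12e9

print(f"bf16: {weight_memory_gib(N_PARAMS, 2):.1f} GiB")  # ~22.4 GiB
print(f"fp8:  {weight_memory_gib(N_PARAMS, 1):.1f} GiB")  # ~11.2 GiB
```

So an fp8 variant would roughly halve the weight footprint, which is why quantized FLUX checkpoints are popular on 16-24 GB cards.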

@EndlessSora
Collaborator

Please first try the tips at https://github.com/bytedance/InfiniteYou?#memory-requirements.
