
What is the minimum memory requirement? #43

Open
nitinmukesh opened this issue May 1, 2024 · 9 comments

Comments

@nitinmukesh

nitinmukesh commented May 1, 2024

I have a 4060 with 8 GB VRAM + 8 GB shared RAM, and it fails every time with a CUDA memory error.

Is there any setting I can modify so it can run with lower requirements? I already tried a 360 × 480 image size and got the same memory error.

@yisol
Owner

yisol commented May 2, 2024

Hello, >18 GB of VRAM is required for single-image inference.
You can use an optimization scheme such as offloading to reduce memory usage further.
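The offloading idea can be sketched in plain Python (a toy illustration, not this repo's code; the stage names and `to`/`run` interface are made up, but real pipelines apply the same pattern to actual model modules):

```python
# Toy sketch of CPU offloading: keep each model stage in CPU memory and
# move it to the GPU only while it runs, so peak GPU usage is one stage
# at a time instead of the whole pipeline resident at once.
class Stage:
    def __init__(self, name):
        self.name = name
        self.device = "cpu"   # stages start offloaded to CPU

    def to(self, device):
        self.device = device
        return self

    def run(self, x):
        # a real stage would do GPU work here; we just pass data through
        return x + 1

def run_pipeline(stages, x):
    for stage in stages:
        stage.to("cuda")      # load only this stage onto the GPU
        x = stage.run(x)
        stage.to("cpu")       # offload it again before the next stage
    return x

result = run_pipeline([Stage("unet"), Stage("vae")], 0)
print(result)  # 2: both stages ran, but never sat on "cuda" together
```

The trade-off is speed: every stage swap costs a host-to-device transfer, so offloaded inference is slower but fits in far less VRAM.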

@nitinmukesh
Author

Thank you for your response. Appreciate your efforts in creating this.

> Hello, >18GB VRAM is required for single image inference. You can use optimization scheme like offloading for further memory reducing.

Unfortunately, being an end user with no developer background, I have no idea what needs to be done.

@theguaz

theguaz commented May 7, 2024

@yisol hello, I'm getting this when I try to run it on a g4dn.2xlarge AWS instance (NVIDIA T4 with 16 GB of GPU memory).

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.58 GiB total capacity; 13.85 GiB already allocated; 15.56 MiB free; 14.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I've tried setting max_split_size_mb to values ranging from 64 to 512 and I still get the same result.
Should I get a higher-end GPU, or is there anything I can do to optimize this repo?
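For reference, the numbers in that traceback show the fragmentation gap the hint is pointing at; a small sketch (values copied from the error message above, and `max_split_size_mb:128` is just one of the values already tried, not a recommendation):

```python
import os

# Figures reported by PyTorch in the error above (GiB).
total_gib     = 14.58   # T4 capacity visible to PyTorch
allocated_gib = 13.85   # memory actually handed out to tensors
reserved_gib  = 14.38   # memory held by the caching allocator

# Reserved-but-unallocated memory: when it is split into small fragments,
# even a 20 MiB request can fail despite ~0.5 GiB nominally being spare.
fragmentation_gap_gib = round(reserved_gib - allocated_gib, 2)
print(fragmentation_gap_gib)  # 0.53

# max_split_size_mb caps how large a cached block the allocator may
# split; it must be set before CUDA is initialized (i.e. before the
# first `import torch`), or it silently has no effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Note that even with zero fragmentation, 13.85 GiB already allocated on a 14.58 GiB card leaves very little headroom, so allocator tuning alone may not be enough here.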

@FurkanGozukara

It works with a minimum of 8 GB VRAM at 4-bit precision with CPU offloading: #47
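Back-of-the-envelope arithmetic for why 4-bit weights make an 8 GB card plausible (the ~2.6B parameter count is an assumption for an SDXL-scale UNet, not a measured figure for this repo):

```python
# Weight-memory estimate only; activations, the VAE, text encoders and
# CUDA overhead come on top, which is why CPU offloading is still needed.
params = 2.6e9                 # assumed SDXL-scale UNet parameter count

fp16_gb = params * 2.0 / 1e9   # 2 bytes per weight in fp16
int4_gb = params * 0.5 / 1e9   # 0.5 bytes per weight at 4-bit

print(fp16_gb, int4_gb)  # 5.2 1.3
```

So quantizing just the UNet saves roughly 4 GB, and offloading the remaining components keeps the rest of the pipeline out of VRAM until needed.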

@theguaz

theguaz commented May 8, 2024

@FurkanGozukara But that is using your files, not the original repo.

@FurkanGozukara

> @FurkanGozukara but that is using your files and not the original repo

True, the original repo uses a lot of VRAM.

@nitinmukesh
Author

> I've tried to use max_split_size_mb with values ranging from 64 to 512 and I still get the same result. Should I get a higher end GPU or is anything I can do to optimize this repo?

Just don't spam all of GitHub with your posts. I have already seen your post about buying from Patreon, and I'm not interested.

@smsali

smsali commented Sep 29, 2024

> @FurkanGozukara but that is using your files and not the original repo
>
> true original repo uses a lot

Hey! What do you mean by 'your files' and the 'original repo'? I have a 3060 Ti with 8 GB of memory and want to try this. Thanks!

@FurkanGozukara

> @FurkanGozukara but that is using your files and not the original repo
>
> true original repo uses a lot
>
> Hey! What do you mean by 'your files' and 'original repo'? I have 3060ti with 8gb memory. Want to try this. Thanks!

Here is our IDM-VTON app: https://youtu.be/m4pcIeAVQD0

And here is how to use it on the cloud: https://youtu.be/LeHfgq_lAXU
