
Make force free GPU memory work in img2img #1844

Merged
merged 1 commit into invoke-ai:main
Dec 8, 2022

Conversation

addianto
Contributor

@addianto addianto commented Dec 7, 2022

This PR ensures that the --free_gpu_mem option works in img2img.py.

The solution mirrors the procedure already used in txt2img.py: I added a condition check for whether --free_gpu_mem is enabled, and if so, the loaded diffusion model is moved to the CPU (i.e. into system RAM, I presume?), freeing its GPU allocation.
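The change can be sketched as follows. Note that `DiffusionModel` and `finish_generation` are illustrative stand-ins for this explanation, not the actual InvokeAI code; the real implementation moves the model object with its `.to()` method and would typically also release PyTorch's cached CUDA blocks afterwards:

```python
class DiffusionModel:
    """Minimal stand-in for the loaded diffusion model (illustration only)."""
    def __init__(self):
        self.device = "cuda"

    def to(self, device):
        # Mimics torch.nn.Module.to(): moves the model's weights to `device`.
        self.device = device
        return self

def finish_generation(model, free_gpu_mem: bool):
    """Cleanup step this PR adds to img2img, matching txt2img:
    when --free_gpu_mem is set, move the model off the GPU so its
    memory can be reclaimed before the next generation."""
    if free_gpu_mem:
        model.to("cpu")
        # In the real code, torch.cuda.empty_cache() would typically
        # follow here so the driver actually gets the memory back.
    return model

model = finish_generation(DiffusionModel(), free_gpu_mem=True)
print(model.device)  # -> cpu
```

Without the flag the model stays resident on the GPU, which is faster for back-to-back generations but keeps the VRAM allocated.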

Collaborator

@lstein lstein left a comment


It looks like the right approach. You tested this, right?

@addianto
Contributor Author

addianto commented Dec 8, 2022

> It looks like the right approach. You tested this, right?

I did. Let me explain how I tested the PR.

First, the following is my InvokeAI default setup defined at ~/.invokeai:

--root="D:\[REDACTED]\invokeai"
--outdir="D:\[REDACTED]\invokeai\outputs\all"
--no-nsfw_checker
--free_gpu_mem

I tested by monitoring GPU memory usage in Windows' Task Manager while generating img2img images.

Before the PR, memory usage stayed at its peak even with --free_gpu_mem enabled, as illustrated in the following screenshot. This sometimes resulted in a CUDA OOM error during subsequent image generations.

[Screenshot 2022-12-08 065148]

After the PR, memory usage drops each time img2img finishes generating an image:

[Screenshot 2022-12-08 065457]

@lstein
Collaborator

lstein commented Dec 8, 2022

> I did. Let me explain how I tested the PR. […]

More than good enough for me. Thank you for the contribution!

@lstein lstein merged commit d7ba041 into invoke-ai:main Dec 8, 2022
@addianto addianto deleted the improvement/free-gpu-mem-img2img branch December 8, 2022 00:31