Not enough RAM #5

Open · TonyAssi opened this issue Apr 4, 2024 · 3 comments

@TonyAssi commented Apr 4, 2024

I am running out of RAM when I run this code. I tried Google Colab T4 and V100 runtimes, both with 16 GB of RAM.

I also tried using both of these VAEs:

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# Tried each of these VAEs in turn
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    add_watermarker=False,
    vae=vae,
)
```

Any suggestions on how to run using less RAM?

@ResearcherXman (Contributor)

Here are some general suggestions, though not every method worked under our testing. `pipe.enable_vae_tiling()` does reduce memory consumption by about 3 GB.
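For reference, a minimal sketch of the usual diffusers memory-saving switches, assuming a recent diffusers version and the `pipe` object from the snippet above (actual savings vary by model and hardware):

```python
# Optional memory-saving switches on diffusers pipelines.
pipe.enable_vae_tiling()         # decode latents in tiles (the ~3 GB saving mentioned above)
pipe.enable_vae_slicing()        # decode batched images one at a time
pipe.enable_model_cpu_offload()  # keep idle submodules on the CPU (requires accelerate)

# More aggressive but much slower alternative: offload at the submodule level.
# pipe.enable_sequential_cpu_offload()
```

Note that `enable_model_cpu_offload()` replaces a plain `pipe.to("cuda")`, so don't call both.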

@yi (Contributor) commented Apr 10, 2024

16 GB of VRAM is fine for generation with the SDXL pipeline. Check my notebook and run it with a V100 high-RAM runtime.

@ResearcherXman (Contributor)

We have added an experimental distributed inference feature from diffusers.
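For context, the distributed-inference pattern from the diffusers docs splits prompts across GPUs with accelerate. A minimal sketch, where the model ID and prompts are placeholders and the repo's own feature may wire things up differently:

```python
import torch
from accelerate import PartialState
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

state = PartialState()  # one process per GPU when launched via `accelerate launch`
pipe.to(state.device)

# Each process renders only the prompts assigned to it.
with state.split_between_processes(["a red fox", "a snowy owl"]) as prompt:
    image = pipe(prompt).images[0]
    image.save(f"result_{state.process_index}.png")
```

Run it with, e.g., `accelerate launch --num_processes 2 script.py`.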
