
Conversation

@linoytsaban (Collaborator) commented Oct 6, 2025

smol change to fix a bug when --cache_latents & --offload are both enabled

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@linoytsaban marked this pull request as ready for review October 7, 2025 10:00
    accelerator.device, non_blocking=True, dtype=vae.dtype
)
latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)
@linoytsaban (Collaborator, Author) commented on the line above:

So that the vae and batch["pixel_values"] are both on the same device. Without this change the vae is back on the CPU (because of --offload) while batch["pixel_values"] is on accelerator.device.
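For context, a minimal sketch of the caching loop this comment refers to, assuming the usual diffusers DreamBooth-style script layout; the names args, vae, accelerator, and train_dataloader follow that convention but are assumptions here, and the vae.to(...) line is one way to restore the device match, not necessarily the exact committed diff:

import torch

if args.cache_latents:
    latents_cache = []
    # If --offload moved the VAE to the CPU earlier, bring it back so that
    # its weights and batch["pixel_values"] end up on the same device.
    vae.to(accelerator.device)
    for batch in train_dataloader:
        with torch.no_grad():
            batch["pixel_values"] = batch["pixel_values"].to(
                accelerator.device, non_blocking=True, dtype=vae.dtype
            )
            latents_cache.append(vae.encode(batch["pixel_values"]).latent_dist)

Without the device move, vae.encode() fails with a device mismatch, since its weights sit on the CPU while the pixel values have already been sent to the accelerator.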

@linoytsaban requested a review from sayakpaul October 7, 2025 10:27
@sayakpaul (Member) left a comment:

Okay to merge after "smol" is updated to "small". :v

@linoytsaban merged commit 1066de8 into huggingface:main Oct 7, 2025
25 checks passed
3 participants