
Reduce video tiny VAE peak VRAM and decode time (CORE-127)#13617

Merged
Kosinkadink merged 3 commits into Comfy-Org:master from kijai:tinyvae_optimization
Apr 29, 2026

Conversation

@kijai kijai (Collaborator) commented Apr 29, 2026

  • Stream completed frames to intermediate_device() as they finish in
    apply_model_with_memblocks, instead of accumulating the full output on
    GPU before returning.
  • Push pixel_(un)shuffle into the streaming loop as per-frame pre/post
    hooks so they run on GPU one frame at a time, avoiding the full-video
    spike.
  • Drop the x.reshape that forced a contiguous copy of the (non-contiguous,
    post-movedim) input; chunk along the time dim with views instead.
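The last point can be demonstrated with a small standalone snippet (illustrative shapes, not the actual taehv.py code):

```python
import torch

# Toy [B, T, C, H, W] video tensor, standing in for the VAE input.
x = torch.randn(1, 8, 3, 16, 16)

# After movedim the tensor is a non-contiguous view; calling reshape on it
# materializes a full contiguous copy of the whole video.
y = x.movedim(1, 2)  # [B, C, T, H, W]
assert not y.is_contiguous()

# Chunking along the time dim instead returns views that share storage
# with x, so no full-size copy is allocated up front.
chunks = x.chunk(4, dim=1)  # four [B, 2, C, H, W] views
chunks[0].zero_()
assert x[:, :2].abs().sum() == 0  # mutating the chunk mutated x: it is a view
```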

Bit-exact vs. previous code (fp16).

LTX2, 1024×1024, T_in=128:

  Decode peak: 12.97 GB → 0.13 GB
  Decode time: 2780 ms → 1841 ms (−34%)
  Encode peak: 1.70 GB → 0.90 GB (delta over input flat in T)
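Roughly, the streaming shape of the change looks like this (a minimal sketch; the real apply_model_with_memblocks also threads memory-block state between frames, which is omitted here):

```python
import torch
import torch.nn.functional as F

def streamed_apply(model, x, output_device, patch_size=1, decode=False):
    """Sketch: process a [B, T, C, H, W] video one frame at a time and
    stream each finished frame to output_device, so peak GPU memory is
    one frame's activations rather than the whole output video."""
    out = []
    for frame in x.unbind(1):  # views along the time dim, no copy
        if patch_size > 1 and not decode:
            frame = F.pixel_unshuffle(frame, patch_size)  # per-frame pre-hook
        y = model(frame)
        if patch_size > 1 and decode:
            y = F.pixel_shuffle(y, patch_size)  # per-frame post-hook
        out.append(y.to(output_device))  # move off GPU as soon as it is done
    return torch.stack(out, dim=1)
```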

@coderabbitai coderabbitai Bot commented Apr 29, 2026

📝 Walkthrough

The PR modifies comfy/taesd/taehv.py: apply_model_with_memblocks signature adds output_device=None, patch_size=1, and decode=False. It now handles patching via pixel_unshuffle/pixel_shuffle conditionally in both parallel and non-parallel paths, chunks non-parallel work along the time dimension, and moves outputs to output_device. TAEHV.encode delegates patching to apply_model_with_memblocks using patch_size; TAEHV.decode delegates postprocessing via decode=True and routes decoder outputs through comfy.model_management.intermediate_device() using output_device.

🚥 Pre-merge checks: 4 passed, 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Description check ✅ Passed: the description is directly related to the changeset, providing detailed technical context about the optimizations and measured performance improvements.
  • Linked Issues check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Title check ✅ Passed: the pull request title directly summarizes the main changes, reducing peak VRAM and decode time for the video tiny VAE through streaming and memory optimizations.




@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy/taesd/taehv.py`:
- Around line 199-203: The decode() method in TAEHV hardcodes output_device to
comfy.model_management.intermediate_device(), breaking callers that expect to
override it; add an optional parameter (e.g. output_device=None) to TAEHV.decode
(and any callers/constructors that forward args) that defaults to
comfy.model_management.intermediate_device() when None, and pass that variable
into the apply_model_with_memblocks call (replacing the current hardcoded
comfy.model_management.intermediate_device() usage) so external nodes can supply
a different device while preserving the current default behavior.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f697a2fe-614a-4a9e-99a4-d7d79d21402e

📥 Commits

Reviewing files that changed from the base of the PR and between fce0398 and 904d86e.

📒 Files selected for processing (1)
  • comfy/taesd/taehv.py

Comment thread comfy/taesd/taehv.py Outdated
rattus128 previously approved these changes Apr 29, 2026

@rattus128 rattus128 (Contributor) left a comment

LGTM. optional nits only.

Using a boolean for encode vs decode is slightly cleaner than the lambdas IMO as it consolidates all the logic together while solving your slicing problem. But either way.

@kijai kijai (Collaborator, Author) commented Apr 29, 2026

> LGTM. optional nits only.
>
> Using a boolean for encode vs decode is slightly cleaner than the lambdas IMO as it consolidates all the logic together while solving your slicing problem. But either way.

Agreed and changed.

@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (1)
comfy/taesd/taehv.py (1)

192-194: ⚠️ Potential issue | 🟠 Major

Expose output_device override in decode() to preserve caller flexibility.

Line 193 hardcodes intermediate_device(), so callers can no longer steer decode output placement through kwargs. That’s a backward-compatibility regression for custom nodes that depended on device control.

♻️ Proposed fix
 def decode(self, x, **kwargs):
     x = x.unsqueeze(0) if x.ndim == 4 else x  # [T, C, H, W] -> [1, T, C, H, W]
     x = x.movedim(1, 2) if x.shape[1] != self.latent_channels else x  # [B, T, C, H, W] or [B, C, T, H, W]
     x = self.process_in(x).movedim(2, 1)  # [B, C, T, H, W] -> [B, T, C, H, W]
+    output_device = kwargs.get("output_device", comfy.model_management.intermediate_device())
     x = apply_model_with_memblocks(self.decoder, x, self.parallel, self.show_progress_bar,
-                                    output_device=comfy.model_management.intermediate_device(),
+                                    output_device=output_device,
                                     patch_size=self.patch_size, decode=True)
     return x[:, self.frames_to_trim:].movedim(2, 1)

As per coding guidelines: comfy/** changes should prioritize backward compatibility for custom nodes.
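In isolation, the suggested kwargs default pattern behaves like this (hypothetical standalone version; "cpu" stands in for comfy.model_management.intermediate_device()):

```python
def decode(x, **kwargs):
    # The computed default preserves the old behavior when the kwarg is
    # absent, while letting callers steer output placement explicitly.
    output_device = kwargs.get("output_device", "cpu")
    return x, output_device
```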

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy/taesd/taehv.py` around lines 192 - 194, The call to
apply_model_with_memblocks in decode() currently hardcodes
output_device=comfy.model_management.intermediate_device(), which prevents
callers from overriding output placement; change decode() to accept and forward
an output_device kwarg (defaulting to
comfy.model_management.intermediate_device()) and pass that variable into
apply_model_with_memblocks (the call that uses self.decoder, x, self.parallel,
self.show_progress_bar, patch_size=self.patch_size, decode=True) so callers can
override device placement while preserving the original default.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 58bf1aab-b15f-459b-8a64-052a3dad8416

📥 Commits

Reviewing files that changed from the base of the PR and between 3551828 and 51b96f7.

📒 Files selected for processing (1)
  • comfy/taesd/taehv.py

@rattus128 rattus128 changed the title from "Reduce video tiny VAE peak VRAM and decode time" to "Reduce video tiny VAE peak VRAM and decode time (CORE-127)" on Apr 29, 2026
@Kosinkadink Kosinkadink merged commit 0e25a69 into Comfy-Org:master Apr 29, 2026
14 checks passed
3 participants