
Eliminate GPU sync overhead and CPU→GPU transfers across LTX2 pipeline #13564

Open

ViktoriiaRomanova wants to merge 1 commit into huggingface:main from ViktoriiaRomanova:ltx2pipelinespeedup

Conversation

@ViktoriiaRomanova

Fixes performance issues identified by profiling LTX2 with torch.profiler as part of #13401.

Optimises LTX2 by removing unnecessary GPU synchronisation points and replacing CPU tensor creation with on-device tensor operations across the decoding pipeline, transformer RoPE computations, scheduler, and connector padding logic.

Pipeline Denoising Optimisation

  1. Added explicit set_begin_index(0) calls to both the video and audio schedulers. This avoids the DtoH sync in _init_step_index, following the same pattern as PR #11696 (Avoid DtoH sync from access of nonzero() item in scheduler). A sketch of both fixes in this section follows the list below.
     Before (eager mode): [profiler screenshot]
     After (eager mode, no sync gap): [profiler screenshot]
     Before (compile mode): [profiler screenshot]
     After (compile mode, no sync gap): [profiler screenshot]

  2. Replaced torch.tensor(..., device=device) with on-device torch.stack([torch.ones(...) * s for s in decode_noise_scale]). This avoids a CPU tensor allocation and a host-to-device copy for the decode noise scaling (also shown in the sketch below).
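
A minimal sketch of both fixes, using diffusers' FlowMatchEulerDiscreteScheduler as a stand-in for the LTX2 video/audio schedulers; the latent shape and decode_noise_scale values are hypothetical:

```python
import torch
from diffusers import FlowMatchEulerDiscreteScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"

# Fix 1: set the begin index explicitly. step() then never falls back to
# _init_step_index(), which locates the current timestep via a GPU
# comparison and reads the index back to the host (a DtoH sync),
# cf. PR #11696.
scheduler = FlowMatchEulerDiscreteScheduler()
scheduler.set_timesteps(30, device=device)
scheduler.set_begin_index(0)

# Fix 2: build the per-sample decode noise scale on device instead of
# torch.tensor(decode_noise_scale, device=device), which first allocates
# on the CPU and then copies host-to-device.
decode_noise_scale = [0.025, 0.05]                       # hypothetical values
latents = torch.randn(2, 16, 8, 32, 32, device=device)  # hypothetical shape
scale = torch.stack(
    [torch.ones(1, 1, 1, 1, device=device) * s for s in decode_noise_scale]
)  # (batch, 1, 1, 1, 1), created entirely on device
noise = torch.randn_like(latents)
latents = (1.0 - scale) * latents + scale * noise
```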

Transformer Model Optimisation

Replaced CPU tensor creation for patch sizes with on-device tensor construction, eliminating unnecessary CPU-to-GPU memcpy operations during RoPE coordinate preparation.
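
A minimal sketch of the idea; the patch sizes and coordinate shapes here are hypothetical stand-ins for the LTX2 RoPE preparation code:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical (frame, height, width) coordinate grid for RoPE.
coords = torch.arange(4 * 8 * 8, device=device).reshape(1, -1).repeat(3, 1)

# Before: a host-side tensor allocation followed by an HtoD memcpy.
# patch_size = torch.tensor([1, 2, 2], device=device)

# After: construct the same values directly on device, so the profiler
# shows no cudaMemcpyAsync for this tiny constant.
patch_size = torch.ones(3, device=device, dtype=torch.long)
patch_size[1:] = 2  # hypothetical (temporal, height, width) patch sizes

scaled = coords // patch_size.unsqueeze(1)
```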

Connector Refactoring

Replaced list-comprehension-based padding logic with vectorised masking. This simplifies left-padding layout logic and eliminates unnecessary cudaStreamSynchronize calls.
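
A minimal sketch of the vectorised masking approach; tensor names, shapes, and the exact left-padding layout are hypothetical stand-ins for the connector code:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

hidden_states = torch.randn(4, 77, 128, device=device)  # (batch, seq, dim)
# Valid-token counts per sample; in the real pipeline these would
# already live on device rather than being built from a Python list.
lengths = torch.tensor([77, 50, 64, 12], device=device)

# Build the left-padding mask in one shot: position p in sample b is
# valid when p >= seq_len - lengths[b]. Everything stays on device, so
# there is no per-sample .item()/.tolist() readback and therefore no
# cudaStreamSynchronize, unlike a Python loop over lengths.
seq_len = hidden_states.shape[1]
positions = torch.arange(seq_len, device=device)
mask = positions.unsqueeze(0) >= (seq_len - lengths).unsqueeze(1)  # (batch, seq)

masked = hidden_states * mask.unsqueeze(-1)
```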

Performance Results

| Metric                               | Before | After      |
|--------------------------------------|--------|------------|
| cudaStreamSynchronize calls (total)  | 18     | 6          |
| Scheduler sync (eager mode)          | 233 ms | eliminated |
| Scheduler sync (compiled mode)       | 573 ms | eliminated |
| Other syncs total (eager mode)       | 88 ms  | 25 ms      |
| Other syncs total (compiled mode)    | 93 ms  | 25 ms      |

Profiler trace

https://drive.google.com/drive/folders/1cZn1xw-8Eon22mA2zP1uoF1nE4YCC3Wo?usp=drive_link


Who can review?

@sayakpaul @dg845

…or creation across the LTX2 pipeline, transformer, scheduler, and connector logic.

- Add set_begin_index(0) to schedulers to eliminate DtoH sync in _init_step_index
- Replace torch.tensor(..., device=...) with on-device tensor construction for decode scaling
- Move RoPE-related tensor creation to GPU to avoid memcpy overhead
- Refactor connector padding logic using vectorized masking instead of list-based ops