
add ltx2.3 #955

Merged
helloyongyang merged 5 commits into main from gsq/ltx23 on Mar 20, 2026

Conversation

@gushiqiao
Contributor

No description provided.

gushiqiao and others added 3 commits March 20, 2026 06:40
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the LTX2.3 model configurations and significantly refactors the Gemma text encoder architecture for improved modularity and new features. It integrates advanced attention mechanisms like gated attention and cross-attention AdaLN, along with a flexible multi-modal guider for enhanced control during inference. Additionally, the audio processing pipeline has been upgraded to include Bandwidth Extension for higher fidelity audio output.

Highlights

  • New LTX2.3 Configurations: Introduced several new JSON configuration files for LTX2.3, including distilled and upsampling variants with CPU offloading, and updated the base LTX2 configuration.
  • Gemma Text Encoder Refactoring: The Gemma text encoder architecture has been significantly refactored, consolidating components into a unified GemmaTextEncoder and introducing an EmbeddingsProcessor for modular handling of video and audio embeddings.
  • Advanced Attention Mechanisms: New features such as apply_gated_attention and cross_attention_adaln have been integrated into the attention mechanisms and pre-inference steps, offering enhanced model control and performance.
  • Multi-Modal Guider Integration: A flexible multi-modal guider (mm_guider) has been added to the LTX2 model, enabling fine-grained control over conditional and unconditional inference paths, including perturbation types like STG and isolated modality.
  • Vocoder Enhancements with Bandwidth Extension (BWE): The audio VAE vocoder has been upgraded to support Bandwidth Extension (BWE) through new VocoderWithBWE and MelSTFT classes, leading to improved audio quality.
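
The multi-modal guider highlighted above controls conditional and unconditional inference paths. As a rough, hypothetical sketch of the underlying idea, here is a classifier-free-guidance-style blend; the function name and parameters are illustrative stand-ins, not the PR's actual mm_guider API:

```python
import numpy as np

def guided_prediction(uncond, cond, guidance_scale):
    """Blend unconditional and conditional model outputs.

    A multi-modal guider would presumably hold one such scale per
    modality (video, audio, text) and optionally swap the uncond
    branch for a perturbed one (e.g. STG or isolated modality).
    """
    return uncond + guidance_scale * (cond - uncond)

uncond = np.zeros(4)
cond = np.ones(4)
out = guided_prediction(uncond, cond, 3.0)
# a scale > 1 pushes the output beyond the conditional prediction
```

With guidance_scale=1.0 this reduces to the plain conditional output, and with 0.0 to the unconditional one, which is why a per-modality scale gives fine-grained control over each path.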



@gemini-code-assist bot left a comment
Code Review

This pull request introduces support for LTX-2.3, which includes a significant refactoring of the Gemma text encoder pipeline, the addition of new features like gated attention and a multi-modal guider, and an updated vocoder with bandwidth extension. The changes are extensive and generally improve the codebase's modularity and capabilities. My review identified a potential double-normalization bug in the attention mechanism and a minor case of code duplication in the configuration loading logic. The feedback provided aims to address these points for improved correctness and maintainability.

Comment on lines +241 to +244

    if is_self_attn or self.apply_gated_attention:
        q_in = x
    else:
        q_in = self.norm_infer_func(x, weight=None, bias=None, eps=1e-6)
Severity: high

There appears to be a potential double normalization issue for the query input (q_in). The call sites for cross-attention already provide a normalized x tensor. However, in the case where is_self_attn is false and self.apply_gated_attention is false, q_in is normalized again via self.norm_infer_func(x, ...). This could lead to incorrect attention outputs.

It seems the caller is always responsible for passing a correctly normalized tensor. To fix this and simplify the logic, you can remove this conditional assignment and always set q_in = x.

        q_in = x

Comment on lines +138 to +142

    # LTX-2 / HuggingFace: root config.json may nest transformer under "transformer".
    # LightX2V DiT expects num_layers, rope_type, etc. on the root config.
    _nested_transformer = config.get("transformer")
    if isinstance(_nested_transformer, dict):
        config.update(_nested_transformer)
Severity: medium

This logic to flatten the nested transformer config dictionary appears to be duplicated. It's present here in auto_calc_config and also in set_args2config. Since set_config calls set_args2config before auto_calc_config, this logic is executed twice. To avoid redundancy and improve maintainability, you can remove this block from auto_calc_config.
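
A minimal, self-contained illustration of the flattening step; the config keys below are hypothetical stand-ins for an LTX-2 style config.json:

```python
# Hypothetical root config with transformer settings nested one level down
config = {
    "model_type": "ltx2",
    "transformer": {"num_layers": 48, "rope_type": "3d"},
}

# Hoist the nested transformer keys onto the root, as both
# set_args2config and auto_calc_config currently do
nested = config.get("transformer")
if isinstance(nested, dict):
    config.update(nested)

print(config["num_layers"])  # prints 48
```

Note that dict.update here is idempotent: running the block a second time leaves config unchanged. That is why the duplication is a maintainability problem rather than a correctness bug, and why removing one copy is safe.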

helloyongyang merged commit be564fe into main on Mar 20, 2026
2 checks passed
helloyongyang deleted the gsq/ltx23 branch on March 20, 2026 06:56