
fix import error #573

Merged
gushiqiao merged 1 commit into main from dev_fix on Dec 5, 2025

Conversation

@gushiqiao
Contributor

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @gushiqiao, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the robustness and flexibility of the lightx2v project by addressing potential import errors and expanding model configuration options. It introduces conditional imports for specific transformers components and refines the availability check for Flash Attention, preventing runtime failures due to missing dependencies. Furthermore, it extends quantization capabilities for T5 and XLM-RoBERTa models with a new VLLM FP8 scheme and adds a configurable Rotary Positional Embedding type to the generation pipeline, offering more control over model behavior.

Highlights

  • Conditional Imports for Qwen2.5 VL: Imports for Qwen2Tokenizer and Qwen2_5_VLForConditionalGeneration are now conditionally handled with try-except ImportError to prevent crashes if transformers components are unavailable.
  • Robust Flash Attention Availability Check: The detection of flash_attn_func is made more resilient by nesting try-except blocks, ensuring FLASH_ATTN_3_AVAILABLE is accurately set even if flash_attn is partially installed.
  • VLLM FP8 Quantization Support: Added VllmQuantLinearFp8 import and integration for T5 and XLM-RoBERTa models, enabling a new "fp8-vllm" quantization scheme.
  • Configurable RoPE Type in Pipeline: Introduced a rope_type parameter in the pipeline's generator configuration, allowing users to specify the type of Rotary Positional Embedding (defaulting to "torch").
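The "fp8-vllm" scheme described above is selected by a string key. As a rough illustration of how such a scheme string might dispatch to a quantized linear class, here is a minimal sketch; everything except the `VllmQuantLinearFp8` name and the `"fp8-vllm"` key is hypothetical and not taken from the PR's actual wiring:

```python
# Illustrative sketch only -- the real lightx2v dispatch may differ.
# Placeholder stand-ins for the real layer classes:
class DefaultLinear:
    """Stand-in for the unquantized linear layer."""

class VllmQuantLinearFp8:
    """Stand-in for the VLLM FP8 quantized linear added by this PR."""

# Registry mapping a quant-scheme string to a linear implementation.
QUANT_LINEAR_REGISTRY = {
    "default": DefaultLinear,
    "fp8-vllm": VllmQuantLinearFp8,
}

def get_linear_cls(quant_scheme="default"):
    """Resolve a scheme string, failing loudly on unknown schemes."""
    try:
        return QUANT_LINEAR_REGISTRY[quant_scheme]
    except KeyError:
        raise ValueError(f"Unsupported quant scheme: {quant_scheme!r}")
```

A registry like this keeps adding a new scheme (as the PR does for T5 and XLM-RoBERTa) to a one-line change, rather than threading new if/else branches through each model.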
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gushiqiao gushiqiao merged commit b161b91 into main Dec 5, 2025
2 checks passed
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request aims to fix import errors by making some dependencies optional. However, the current implementation is incomplete and could lead to runtime errors. I've identified two critical issues where modules are set to None or are left undefined after a failed import, but are then used without proper checks, which would cause the application to crash. My review includes suggestions on how to handle these optional dependencies more robustly. The other changes, such as adding support for a new quantization scheme and a new configuration option, appear to be correct and well-implemented.

Comment on lines +7 to 12

```python
try:
    from transformers import Qwen2Tokenizer, Qwen2_5_VLForConditionalGeneration
except ImportError:
    Qwen2Tokenizer = None
    Qwen2_5_VLForConditionalGeneration = None
```
Contributor


critical

While this change correctly handles the ImportError for optional dependencies, it introduces a potential runtime error. If transformers is not installed or the specified modules are not available, Qwen2Tokenizer and Qwen2_5_VLForConditionalGeneration will be None. However, they are later used without a None check (e.g., Qwen2Tokenizer.from_pretrained on line 78 and Qwen2_5_VLForConditionalGeneration.from_pretrained on line 74), which will raise an AttributeError. The code should either raise a more informative error if these optional dependencies are required for a certain functionality, or handle the None case gracefully where these classes are used.
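A minimal sketch of the kind of guard the reviewer is asking for; the `load_tokenizer` helper, its path argument, and the error message are hypothetical illustrations, not code from the PR:

```python
# Sketch: guard lazily-imported optional classes before use, so a missing
# dependency fails with a clear ImportError instead of an AttributeError
# on None (e.g. "'NoneType' object has no attribute 'from_pretrained'").
try:
    from transformers import Qwen2Tokenizer, Qwen2_5_VLForConditionalGeneration
except ImportError:
    Qwen2Tokenizer = None
    Qwen2_5_VLForConditionalGeneration = None


def load_tokenizer(model_path):
    # Hypothetical call site: check the sentinel before touching the class.
    if Qwen2Tokenizer is None:
        raise ImportError(
            "Qwen2Tokenizer is unavailable; install a transformers version "
            "with Qwen2.5-VL support to use this model."
        )
    return Qwen2Tokenizer.from_pretrained(model_path)
```

The point is that the failure surfaces at the call site with an actionable message, rather than as an opaque AttributeError deep in model loading.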

Comment on lines 10 to 18

```diff
 except ImportError:
-    from flash_attn import flash_attn_func
-
-    FLASH_ATTN_3_AVAILABLE = False
+    try:
+        from flash_attn import flash_attn_func
+
+        FLASH_ATTN_3_AVAILABLE = False
+    except ImportError:
+        FLASH_ATTN_3_AVAILABLE = False
```
Contributor


critical

This change attempts to handle the optional dependency flash_attn, but it's incomplete and will lead to a NameError at runtime if flash_attn is not installed. When the inner except ImportError block is executed, flash_attn_func is not defined. However, the code later calls flash_attn_func (e.g., on line 392) if FLASH_ATTN_3_AVAILABLE is False. The suggested change defines a placeholder function that will raise an informative ImportError at runtime if flash_attn is not available, making the issue easier to debug.

Suggested change

```diff
 except ImportError:
-    try:
-        from flash_attn import flash_attn_func
-
-        FLASH_ATTN_3_AVAILABLE = False
-    except ImportError:
-        FLASH_ATTN_3_AVAILABLE = False
+    FLASH_ATTN_3_AVAILABLE = False
+    try:
+        from flash_attn import flash_attn_func
+    except ImportError:
+        def flash_attn_func(*args, **kwargs):
+            raise ImportError("flash-attn is not installed, which is required when flash_attn_interface is not available.")
```

@gushiqiao gushiqiao deleted the dev_fix branch December 11, 2025 10:27
helloyongyang pushed a commit that referenced this pull request Mar 6, 2026