
fix ci and bug#905

Merged
gushiqiao merged 2 commits into main from gsq/fix-bugs
Feb 27, 2026

Conversation

@gushiqiao
Contributor

No description provided.

@gushiqiao gushiqiao changed the title Gsq/fix bugs fix ci and bug Feb 27, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on addressing several minor bugs and improving code robustness and clarity across different modules. Key changes include refining device assignment practices, enhancing the handling of LoRA weights and biases in convolutional layers, and optimizing model initialization logic for device placement. These adjustments aim to prevent potential runtime errors and streamline the codebase.

Highlights

  • Device Assignment Refinement: Refactored device assignment logic in WeightModule to directly utilize AI_DEVICE, enhancing clarity and consistency by removing an intermediate target_device variable.
  • Robust LoRA Weight/Bias Handling: Implemented robust handling for LoRA weight and bias application in Conv3dWeightTemplate by introducing dedicated getter methods (_get_actual_weight, _get_actual_bias), ensuring correct behavior even when diffs are not explicitly initialized.
  • Optimized Model Initialization: Removed an unnecessary conditional CUDA device transfer during WanSFModel initialization, providing more flexible and controlled device management.
  • Dependency Cleanup: Streamlined imports in intel_xpu.py by removing an unused torch.distributed module, reducing unnecessary dependencies.
  • Minor Formatting Fixes: Applied minor formatting adjustments to configuration and Python files, including adding missing newlines for consistency.


Changelog
  • configs/z_image/z_image_turbo_t2i.json
    • Added a newline at the end of the file.
  • lightx2v/common/modules/weight_module.py
    • Added a blank line after the AI_DEVICE import.
    • Removed the target_device local variable, directly using AI_DEVICE for parameter transfers to CUDA.
  • lightx2v/common/ops/conv/conv3d.py
    • Removed the import of AI_DEVICE.
    • Removed the direct initialization of self.weight_diff and self.bias_diff to torch.tensor with AI_DEVICE.
    • Introduced _get_actual_weight and _get_actual_bias methods to safely retrieve the effective weight and bias, considering LoRA diffs.
    • Updated the apply method to utilize _get_actual_weight and _get_actual_bias for convolution operations.
  • lightx2v/models/networks/wan/sf_model.py
    • Removed the conditional self.to_cuda() call from the __init__ method.
  • lightx2v_platform/base/intel_xpu.py
    • Removed the torch.distributed import.
    • Added a newline at the end of the file.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gushiqiao gushiqiao merged commit 16ecaaa into main Feb 27, 2026
2 checks passed
@gushiqiao gushiqiao deleted the gsq/fix-bugs branch February 27, 2026 10:58
@gemini-code-assist bot left a comment

Code Review

This pull request introduces a series of fixes and refactorings across different components. The changes include fixing a bug in how LoRA-like diffs are applied in Conv3d, removing redundant variables and unused imports, and adjusting when models are moved to CUDA to improve flexibility. My review includes a suggestion to improve the readability of a newly introduced helper method in conv3d.py.

Comment on lines +43 to +53
def _get_actual_bias(self, bias=None):
    if bias is not None:
        if not hasattr(self, "bias_diff"):
            return bias
        return bias + self.bias_diff
    else:
        if not hasattr(self, "bias") or self.bias is None:
            return None
        if not hasattr(self, "bias_diff"):
            return self.bias
        return self.bias + self.bias_diff
Severity: medium

The logic in _get_actual_bias can be simplified to improve readability and reduce duplication. The current implementation repeats the check for bias_diff. Consider refactoring to first determine the base bias and then apply the diff if it exists.

Suggested change

Before:

def _get_actual_bias(self, bias=None):
    if bias is not None:
        if not hasattr(self, "bias_diff"):
            return bias
        return bias + self.bias_diff
    else:
        if not hasattr(self, "bias") or self.bias is None:
            return None
        if not hasattr(self, "bias_diff"):
            return self.bias
        return self.bias + self.bias_diff

After:

def _get_actual_bias(self, bias=None):
    if bias is None:
        if not hasattr(self, "bias") or self.bias is None:
            return None
        base_bias = self.bias
    else:
        base_bias = bias
    if hasattr(self, "bias_diff"):
        return base_bias + self.bias_diff
    return base_bias
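One way to sanity-check a refactor like this is to run both versions side by side on the same inputs. A quick torch-free harness (floats stand in for tensors; the `_Original`/`_Refactored` class names and `check` helper are made up for this comparison) covering the no-bias, no-diff, and diff cases:

```python
class _Original:
    def _get_actual_bias(self, bias=None):
        if bias is not None:
            if not hasattr(self, "bias_diff"):
                return bias
            return bias + self.bias_diff
        else:
            if not hasattr(self, "bias") or self.bias is None:
                return None
            if not hasattr(self, "bias_diff"):
                return self.bias
            return self.bias + self.bias_diff


class _Refactored:
    def _get_actual_bias(self, bias=None):
        if bias is None:
            if not hasattr(self, "bias") or self.bias is None:
                return None
            base_bias = self.bias
        else:
            base_bias = bias
        if hasattr(self, "bias_diff"):
            return base_bias + self.bias_diff
        return base_bias


def check(attrs, arg):
    """Both implementations must agree for the same attribute setup."""
    a, b = _Original(), _Refactored()
    for obj in (a, b):
        for name, value in attrs.items():
            setattr(obj, name, value)
    assert a._get_actual_bias(arg) == b._get_actual_bias(arg)
    return a._get_actual_bias(arg)


assert check({}, None) is None                            # no bias at all
assert check({"bias": 0.5}, None) == 0.5                  # bias, no diff
assert check({"bias": 0.5, "bias_diff": 0.25}, None) == 0.75
assert check({"bias_diff": 0.25}, 1.0) == 1.25            # explicit bias argument
```

Since both implementations agree on every branch, the suggestion is a pure readability change with no behavioral difference.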

helloyongyang pushed a commit that referenced this pull request Mar 6, 2026
Co-authored-by: gushiqiao <975033167>