
[feat]: support lora in qwen-image-edit #570

Merged
helloyongyang merged 2 commits into main from qwenimage_lora
Dec 4, 2025

Conversation


@Watebear Watebear commented Dec 4, 2025

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Watebear, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Qwen-Image-Edit model by integrating Low-Rank Adaptation (LoRA) capabilities. This allows users to apply lightweight, task-specific fine-tuned models without modifying the base model's full weights, leading to more efficient model customization and deployment. The changes include a new LoRA management module, updates to the model runner for dynamic LoRA application, and a new configuration file and inference script to demonstrate its usage. Additionally, several existing inference scripts have been cleaned up for better maintainability.

Highlights

  • LoRA Integration: Introduced comprehensive support for Low-Rank Adaptation (LoRA) within the Qwen-Image-Edit model, enabling efficient fine-tuning and application of specialized models.
  • Dedicated LoRA Adapter: Added a new lora_adapter.py module with a QwenImageLoraWrapper class to handle loading, managing, and applying LoRA weights to specific transformer layers of the QwenImage model.
  • Config-driven LoRA Application: The QwenImageRunner now dynamically loads and applies LoRA models based on lora_configs specified in the model's configuration, ensuring flexibility and ease of use.
  • Streamlined Inference Scripts: Refactored several existing shell scripts and added a new qwen_image_i2i_lora.sh script, centralizing environment variable setup and providing a clear example for LoRA-enabled inference.
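The config-driven flow described above can be sketched roughly as follows. The wrapper method names (`load_lora`, `apply_lora`) and the config keys (`name`, `strength`) are assumptions for illustration only, not the actual `QwenImageLoraWrapper` API from this PR:

```python
# Hypothetical sketch of how a runner might consume `lora_configs`.
# Method names and config keys below are assumed, not the PR's real API.

lora_configs = [
    {
        "name": "lightning_4steps",  # assumed key
        "path": "/path/to/Qwen-Image-Edit-Lightning-4steps-V1.0.safetensors",
        "strength": 1.0,             # assumed key
    },
]

def apply_loras(wrapper, configs):
    """Load each LoRA checkpoint, then fuse it with its configured strength."""
    for cfg in configs:
        wrapper.load_lora(cfg["path"], lora_name=cfg["name"])
        wrapper.apply_lora(cfg["name"], strength=cfg.get("strength", 1.0))
```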
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@helloyongyang helloyongyang merged commit a889bb7 into main Dec 4, 2025
2 checks passed
@gemini-code-assist bot left a comment


Code Review

This pull request introduces LoRA support for qwen-image-edit. The changes include a new LoRA adapter implementation, a configuration file for LoRA, and updates to the runner to apply LoRA weights. Additionally, several shell scripts have been refactored to use a common base script for environment setup, which is a good improvement for maintainability.

My review has identified a few issues:

  • In lora_adapter.py, there's a potential crash if a LoRA name is not found, an unused strength parameter, and a duplicate entry in a list.
  • The new LoRA config file uses a placeholder path that could be confusing for users.
  • There's a minor inconsistency in one of the updated shell scripts.

I've provided specific comments and suggestions to address these points.

Comment on lines +45 to +47
if lora_name not in self.lora_metadata:
    logger.info(f"LoRA {lora_name} not found. Please load it first.")


high

When lora_name is not found in self.lora_metadata, an info message is logged, but the function continues execution. This will lead to a KeyError on line 52. You should probably log an error and return False to prevent the crash.

Suggested change
-if lora_name not in self.lora_metadata:
-    logger.info(f"LoRA {lora_name} not found. Please load it first.")
+if lora_name not in self.lora_metadata:
+    logger.error(f"LoRA {lora_name} not found. Please load it first.")
+    return False

lora_down = lora_weights[f"{prefix_name}.lora_down.weight"]
lora_alpha = lora_weights[f"{prefix_name}.alpha"]
origin = weight_dict[f"{prefix_name}.weight"]
weight_dict[f"{prefix_name}.weight"] = fuse_lora_weights(origin, lora_down, lora_up, lora_alpha)

high

The alpha parameter, which represents the LoRA strength passed from the configuration, is currently unused in this method. The lora_alpha from the weights file is used instead. To apply the configured strength, you should multiply it with the lora_alpha from the file.

                weight_dict[f"{prefix_name}.weight"] = fuse_lora_weights(origin, lora_down, lora_up, lora_alpha * alpha)
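As a small numerical sketch of the fix (NumPy standing in for torch; this `fuse_lora_weights` is a generic reimplementation of the standard LoRA fusion formula, not the PR's actual function):

```python
import numpy as np

def fuse_lora_weights(origin, lora_down, lora_up, lora_alpha, strength=1.0):
    # Standard LoRA fusion: W' = W + strength * (alpha / rank) * (up @ down).
    # `strength` is the user-configured multiplier the review suggests applying
    # on top of the alpha stored in the weights file.
    rank = lora_down.shape[0]
    scale = strength * lora_alpha / rank
    return origin + scale * (lora_up @ lora_down)

origin = np.zeros((2, 3))    # base weight (out_features, in_features)
lora_down = np.ones((1, 3))  # (rank, in_features)
lora_up = np.ones((2, 1))    # (out_features, rank)

# With lora_alpha=1.0 and strength=0.5, every fused entry becomes 0.5.
fused = fuse_lora_weights(origin, lora_down, lora_up, lora_alpha=1.0, strength=0.5)
```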

"USE_IMAGE_ID_IN_PROMPT": false,
"lora_configs": [
{
"path": "/path/to/Qwen-Image-Edit-Lightning-4steps-V1.0.safetensors",

medium

The hardcoded path "/path/to/..." is a placeholder. This might be confusing for users and lead to runtime errors if not replaced with the actual path to the LoRA weights. Consider making it more explicit that this is a placeholder that needs to be changed, for example by using a more descriptive name like "/replace/with/your/path/to/lora.safetensors" or adding a _comment field explaining this, similar to _comment_attn on line 58.

Comment on lines +66 to +79
lora_prefixs = [
"attn.add_k_proj",
"attn.add_q_proj",
"attn.add_v_proj",
"attn.to_add_out",
"attn.to_k",
"attn.to_q",
"attn.to_v",
"attn.to_add_out",
"attn.to_out.0",
"img_mlp.net.0.proj",
"txt_mlp.net.0.proj",
"txt_mlp.net.2",
]

medium

The lora_prefixs list contains a duplicate entry "attn.to_add_out" on lines 70 and 74. While this is harmless, removing the duplicate would improve code clarity.

        lora_prefixs = [
            "attn.add_k_proj",
            "attn.add_q_proj",
            "attn.add_v_proj",
            "attn.to_add_out",
            "attn.to_k",
            "attn.to_q",
            "attn.to_v",
            "attn.to_out.0",
            "img_mlp.net.0.proj",
            "txt_mlp.net.0.proj",
            "txt_mlp.net.2",
        ]
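As a side note, an order-preserving way to keep such prefix lists duplicate-free is `dict.fromkeys` (a generic Python idiom, not part of this PR):

```python
lora_prefixs = [
    "attn.add_k_proj",
    "attn.to_add_out",
    "attn.to_k",
    "attn.to_add_out",  # accidental repeat
]

# dict keys are unique and preserve insertion order (Python 3.7+),
# so this drops the repeat while keeping first-occurrence order.
lora_prefixs = list(dict.fromkeys(lora_prefixs))
```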

export PYTHONPATH=${lightx2v_path}:$PYTHONPATH
export DTYPE=BF16
export PROFILING_DEBUG_LEVEL=2
export CUDA_VISIBLE_DEVICES=

medium

This script sets CUDA_VISIBLE_DEVICES to an empty string, which is inconsistent with the other updated scripts in this PR that set it to 0. For consistency, you might want to set it to 0 here as well.

Suggested change
-export CUDA_VISIBLE_DEVICES=
+export CUDA_VISIBLE_DEVICES=0

@helloyongyang helloyongyang deleted the qwenimage_lora branch January 7, 2026 10:09