
add wan2.2-S2V-14B training #1006

Merged
Artiprocher merged 3 commits into modelscope:main from lzws:main
Oct 22, 2025

Conversation

Collaborator

@lzws lzws commented Oct 21, 2025

add wan2.2-S2V-14B training

@gemini-code-assist
Contributor

Summary of Changes

Hello @lzws, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request integrates comprehensive training support for the Wan2.2-S2V-14B model, enabling both full model training and LoRA-based fine-tuning. It introduces necessary infrastructure for audio data handling during training, including a dedicated audio loading utility and a command-line argument for configuring audio processors. The changes also include example scripts for running these training processes and validating their outputs, along with updated documentation to guide users.

Highlights

  • Wan2.2-S2V-14B Training Support: Full and LoRA training capabilities have been added for the Wan2.2-S2V-14B model, including new shell scripts for initiating training and Python scripts for validation.
  • Audio Data Processing: A new LoadAudio class has been introduced in unified_dataset.py to handle loading and sampling of audio files using librosa.
  • Audio Processor Configuration: The training utility now supports an --audio_processor_config argument, allowing the specification of an audio processor model for the WanVideoPipeline.
  • Documentation Updates: Both English and Chinese README.md files have been updated to reflect the newly added training options for the Wan2.2-S2V-14B model.
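The LoadAudio step described in the highlights can be sketched roughly as follows. This is a reconstruction from the PR description only; the actual implementation in unified_dataset.py may differ, and the default sampling rate and return value here are assumptions.

```python
# Rough sketch of an audio-loading dataset step, as described in the PR
# highlights. The sr default and the return shape are assumptions, not
# the verified contents of unified_dataset.py.
class LoadAudio:
    def __init__(self, sr=16000):
        self.sr = sr  # target sampling rate expected by the audio encoder

    def __call__(self, path: str):
        import librosa  # imported lazily inside the call, as in the PR diff
        audio, _ = librosa.load(path, sr=self.sr)  # decode and resample
        return audio
```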
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP 8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds training support for the wan2.2-S2V-14B model, including new training scripts, validation scripts, and necessary code modifications to handle audio data. The changes are well-structured, covering documentation, code, and examples. I've provided a few suggestions to improve code robustness, style, and portability.

def __init__(self, sr=16000):
    self.sr = sr

def __call__(self, data: str):
    import librosa

medium

For better code organization and to avoid repeated import overhead, it's recommended to move the import librosa statement to the top of the file. This makes dependencies explicit and easier to manage. If librosa is an optional dependency, you can wrap the top-level import in a try...except ImportError block.
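The optional-dependency pattern the reviewer suggests might look like this (a generic sketch, not code from the PR): import once at module level and defer the failure until the dependency is actually used.

```python
# Optional-dependency import pattern: the import cost and the explicit
# dependency live at the top of the module; the error is raised only when
# the audio feature is actually exercised.
try:
    import librosa
except ImportError:  # librosa is an optional dependency
    librosa = None

def require_librosa():
    if librosa is None:
        raise ImportError("librosa is required for audio loading; pip install librosa")
    return librosa
```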

--remove_prefix_in_ckpt "pipe.dit." \
--output_path "./models/train/Wan2.2-S2V-14B_full" \
--extra_inputs "input_image,input_audio,s2v_pose_video" \
--use_gradient_checkpointing_offload

medium

It's a good practice to end files with a newline character. This file is missing a final newline.
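One way to detect and fix a missing final newline from the command line; the filename below is a placeholder, not the actual script path in this PR.

```shell
f=example_train.sh
printf '%s' '--use_gradient_checkpointing_offload' > "$f"  # file with no final newline
# tail -c 1 prints the last byte; command substitution strips a trailing
# newline, so a non-empty result means the final newline is missing.
if [ -n "$(tail -c 1 "$f")" ]; then
  echo >> "$f"  # append the missing newline
fi
```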

--lora_target_modules "q,k,v,o,ffn.0,ffn.2" \
--lora_rank 32 \
--extra_inputs "input_image,input_audio,s2v_pose_video" \
--use_gradient_checkpointing_offload

medium

It's a good practice to end files with a newline character. This file is missing a final newline.

model_configs = self.parse_model_configs(model_paths, model_id_with_origin_paths, enable_fp8_training=False)
self.pipe = WanVideoPipeline.from_pretrained(torch_dtype=torch.bfloat16, device="cpu", model_configs=model_configs)
if audio_processor_config is not None:
    audio_processor_config = ModelConfig(model_id=audio_processor_config.split(":")[0], origin_file_pattern=audio_processor_config.split(":")[1])

medium

The code audio_processor_config.split(":") assumes that the audio_processor_config string will always contain a colon. If it doesn't, this will raise an IndexError at runtime. It would be more robust to validate the format before splitting to prevent unexpected crashes.

Suggested change
audio_processor_config = ModelConfig(model_id=audio_processor_config.split(":")[0], origin_file_pattern=audio_processor_config.split(":")[1])
if ":" not in audio_processor_config:
    raise ValueError(f"Invalid audio_processor_config format: {audio_processor_config}")
model_id, origin_file_pattern = audio_processor_config.split(":", 1)
audio_processor_config = ModelConfig(model_id=model_id, origin_file_pattern=origin_file_pattern)
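Pulled out as a standalone helper, the reviewer's safer parsing could look like this. parse_audio_processor_config is a hypothetical name and the example argument value is illustrative only.

```python
from typing import Tuple

def parse_audio_processor_config(value: str) -> Tuple[str, str]:
    """Split a 'model_id:origin_file_pattern' string, failing loudly on bad input."""
    if ":" not in value:
        raise ValueError(f"Invalid audio_processor_config format: {value}")
    # maxsplit=1 keeps any further colons inside the file pattern intact.
    model_id, origin_file_pattern = value.split(":", 1)
    return model_id, origin_file_pattern

print(parse_audio_processor_config("some/model:models/*.safetensors"))
```

Validating up front turns a confusing IndexError at runtime into an actionable error message at startup.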

width = 832

prompt = "a person is singing"
negative_prompt = "画面模糊,最差质量,画面模糊,细节模糊不清,情绪激动剧烈,手快速抖动,字幕,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

medium

The negative prompt contains a duplicated phrase, '画面模糊' ("blurry image"). Removing the duplication would make it cleaner.

Suggested change
negative_prompt = "画面模糊,最差质量,画面模糊,细节模糊不清,情绪激动剧烈,手快速抖动,字幕,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"
negative_prompt = "最差质量,画面模糊,细节模糊不清,情绪激动剧烈,手快速抖动,字幕,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"


pipe = WanVideoPipeline.from_pretrained(
    torch_dtype=torch.bfloat16,
    device="cuda:0",

medium

Hardcoding the device to cuda:0 can make the script less portable. It's generally better to use cuda and let PyTorch select the default CUDA device. This avoids errors if cuda:0 is unavailable but other GPUs are present.

Suggested change
device="cuda:0",
device="cuda",
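A portable device-selection sketch along the lines of the reviewer's suggestion; it assumes torch is installed, and the ImportError fallback is purely illustrative.

```python
try:
    import torch
    # "cuda" lets PyTorch pick the default CUDA device when one exists,
    # instead of hard-requiring index 0.
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:  # torch not installed; fall back for illustration
    device = "cpu"
```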

width = 832

prompt = "a person is singing"
negative_prompt = "画面模糊,最差质量,画面模糊,细节模糊不清,情绪激动剧烈,手快速抖动,字幕,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

medium

The negative prompt contains a duplicated phrase, '画面模糊' ("blurry image"). Removing the duplication would make it cleaner.

Suggested change
negative_prompt = "画面模糊,最差质量,画面模糊,细节模糊不清,情绪激动剧烈,手快速抖动,字幕,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"
negative_prompt = "最差质量,画面模糊,细节模糊不清,情绪激动剧烈,手快速抖动,字幕,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

@Artiprocher Artiprocher merged commit 5380171 into modelscope:main Oct 22, 2025
LePao1 pushed a commit to LePao1/DiffSynth-Studio that referenced this pull request Feb 22, 2026