@@ -40,6 +40,7 @@ The following Wan models are supported in Diffusers:
 - [Wan 2.2 T2V 14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B-Diffusers)
 - [Wan 2.2 I2V 14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B-Diffusers)
 - [Wan 2.2 TI2V 5B](https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B-Diffusers)
+- [Wan 2.2 S2V 14B](https://huggingface.co/Wan-AI/Wan2.2-S2V-14B-Diffusers)

 > [!TIP]
 > Click on the Wan models in the right sidebar for more examples of video generation.
@@ -95,15 +96,15 @@ pipeline = WanPipeline.from_pretrained(
 pipeline.to("cuda")

 prompt = """
-The camera rushes from far to near in a low-angle shot,
-revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
-for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
-Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
+The camera rushes from far to near in a low-angle shot,
+revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
+for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
+Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
 shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
 """
 negative_prompt = """
-Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality,
-low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured,
+Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality,
+low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured,
 misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
 """

@@ -150,15 +151,15 @@ pipeline.transformer = torch.compile(
 )

 prompt = """
-The camera rushes from far to near in a low-angle shot,
-revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
-for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
-Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
+The camera rushes from far to near in a low-angle shot,
+revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
+for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
+Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
 shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
 """
 negative_prompt = """
-Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality,
-low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured,
+Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality,
+low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured,
 misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards
 """

@@ -236,6 +237,129 @@ export_to_video(output, "output.mp4", fps=16)
 </hfoption>
 </hfoptions>

+
+### Wan-S2V: Audio-Driven Cinematic Video Generation
+
+[Wan-S2V](https://huggingface.co/papers/2508.18621) by the Wan Team.
+
+*Current state-of-the-art (SOTA) methods for audio-driven character animation demonstrate promising performance for scenarios primarily involving speech and singing. However, they often fall short in more complex film and television productions, which demand sophisticated elements such as nuanced character interactions, realistic body movements, and dynamic camera work. To address this long-standing challenge of achieving film-level character animation, we propose an audio-driven model, which we refer to as Wan-S2V, built upon Wan. Our model achieves significantly enhanced expressiveness and fidelity in cinematic contexts compared to existing approaches. We conducted extensive experiments, benchmarking our method against cutting-edge models such as Hunyuan-Avatar and Omnihuman. The experimental results consistently demonstrate that our approach significantly outperforms these existing solutions. Additionally, we explore the versatility of our method through its applications in long-form video generation and precise video lip-sync editing.*
+
+Check out the [project page](https://humanaigc.github.io/wan-s2v-webpage/) for more information.
+
+This model was contributed by [M. Tolga Cangöz](https://github.com/tolgacangoz).
+
+The example below demonstrates how to use the speech-to-video pipeline to generate a video from a text prompt, a starting image, an audio clip, and, optionally, a pose video.
+
+<hfoptions id="S2V usage">
+<hfoption id="usage">
+
+```python
+import math
+
+import requests
+import torch
+from io import BytesIO
+from PIL import Image
+from transformers import Wav2Vec2ForCTC
+
+from diffusers import AutoencoderKLWan, WanSpeechToVideoPipeline
+from diffusers.utils import export_to_merged_video_audio, export_to_video, load_audio
+
+
+model_id = "Wan-AI/Wan2.2-S2V-14B-Diffusers"
+audio_encoder = Wav2Vec2ForCTC.from_pretrained(model_id, subfolder="audio_encoder", dtype=torch.float32)
+vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
+pipe = WanSpeechToVideoPipeline.from_pretrained(
+    model_id, vae=vae, audio_encoder=audio_encoder, torch_dtype=torch.bfloat16
+)
+pipe.to("cuda")
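+# Optional: if the 14B model does not fit in GPU memory, the standard Diffusers
+# offloading hook can be used instead of `pipe.to("cuda")`:
+# pipe.enable_model_cpu_offload()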
+
+# Wikimedia expects a browser-like User-Agent, so fetch the reference image manually
+headers = {"User-Agent": "Mozilla/5.0"}
+url = "https://upload.wikimedia.org/wikipedia/commons/4/46/Albert_Einstein_sticks_his_tongue.jpg"
+resp = requests.get(url, headers=headers, timeout=30)
+image = Image.open(BytesIO(resp.content))
+
+audio_url = "https://github.com/Wan-Video/Wan2.2/raw/refs/heads/main/examples/Five%20Hundred%20Miles.MP3"
+audio, sampling_rate = load_audio(audio_url)
+# pose_video_path_or_url = "https://github.com/Wan-Video/Wan2.2/raw/refs/heads/main/examples/pose.mp4"
+
+def get_size_less_than_area(height, width, target_area=1024 * 704, divisor=64):
+    if height * width <= target_area:
+        # If the original image area is already less than or equal to the target,
+        # no resizing is needed, only padding. Still ensure the padded area
+        # does not exceed the target.
+        max_upper_area = target_area
+        min_scale = 0.1
+        max_scale = 1.0
+    else:
+        # Resize to fit within the target area, then pad to multiples of `divisor`
+        max_upper_area = target_area  # Maximum allowed total pixel count after padding
+        d = divisor - 1
+        b = d * (height + width)
+        a = height * width
+        c = d**2 - max_upper_area
+
+        # Calculate scale boundaries using the quadratic equation
+        min_scale = (-b + math.sqrt(b**2 - 2 * a * c)) / (2 * a)  # Scale when maximum padding is applied
+        max_scale = math.sqrt(max_upper_area / (height * width))  # Scale without any padding
+
+    # Choose the largest possible scale such that the final padded area does not
+    # exceed max_upper_area, scanning downwards from max_scale
+    find_it = False
+    for i in range(100):
+        scale = max_scale - (max_scale - min_scale) * i / 100
+        new_height, new_width = int(height * scale), int(width * scale)
+
+        # Pad so that both dimensions are divisible by `divisor`
+        pad_height = (divisor - new_height % divisor) % divisor
+        pad_width = (divisor - new_width % divisor) % divisor
+
+        padded_height, padded_width = new_height + pad_height, new_width + pad_width
+
+        if padded_height * padded_width <= max_upper_area:
+            find_it = True
+            break
+
+    if find_it:
+        return padded_height, padded_width
+    else:
+        # Fallback: derive target dimensions from the aspect ratio, aligned to `divisor`
+        aspect_ratio = width / height
+        target_width = int((target_area * aspect_ratio) ** 0.5 // divisor * divisor)
+        target_height = int((target_area / aspect_ratio) ** 0.5 // divisor * divisor)
+
+        # Ensure the result is not larger than the original resolution
+        if target_width >= width or target_height >= height:
+            target_width = int(width // divisor * divisor)
+            target_height = int(height // divisor * divisor)
+
+        return target_height, target_width
+
+height, width = get_size_less_than_area(image.height, image.width, 480 * 832)
+
+prompt = "Einstein singing a song."
+
+output = pipe(
+    prompt=prompt, image=image, audio=audio, sampling_rate=sampling_rate,
+    height=height, width=width, num_frames_per_chunk=80,
+    # pose_video_path_or_url=pose_video_path_or_url,
+).frames[0]
+export_to_video(output, "output.mp4", fps=16)
+
+# Lastly, merge the video and audio into a single file; the merged clip lasts as long
+# as the shorter of the two streams and overwrites the original video file. Save the
+# source audio locally first so it can be passed as a file path.
+with open("audio.mp3", "wb") as f:
+    f.write(requests.get(audio_url, timeout=30).content)
+export_to_merged_video_audio("output.mp4", "audio.mp3")
+```
+
+</hfoption>
+</hfoptions>
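+
+To also condition the motion on a pose video, pass it to the pipeline. The following is a minimal sketch based on the commented-out lines in the example above, assuming the `pose_video_path_or_url` argument accepts a local path or URL:
+
+```python
+pose_video_path_or_url = "https://github.com/Wan-Video/Wan2.2/raw/refs/heads/main/examples/pose.mp4"
+
+output = pipe(
+    prompt=prompt, image=image, audio=audio, sampling_rate=sampling_rate,
+    height=height, width=width, num_frames_per_chunk=80,
+    pose_video_path_or_url=pose_video_path_or_url,
+).frames[0]
+```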
+
+
 ### Any-to-Video Controllable Generation

 Wan VACE supports various generation techniques which achieve controllable video generation. Some of the capabilities include:
@@ -281,10 +405,10 @@ The general rule of thumb to keep in mind when preparing inputs for the VACE pip

 # use "steamboat willie style" to trigger the LoRA
 prompt = """
-steamboat willie style, golden era animation, The camera rushes from far to near in a low-angle shot,
-revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
-for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
-Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
+steamboat willie style, golden era animation, The camera rushes from far to near in a low-angle shot,
+revealing a white ferret on a log. It plays, leaps into the water, and emerges, as the camera zooms in
+for a close-up. Water splashes berry bushes nearby, while moss, snow, and leaves blanket the ground.
+Birch trees and a light blue sky frame the scene, with ferns in the foreground. Side lighting casts dynamic
 shadows and warm highlights. Medium composition, front view, low angle, with depth of field.
 """

@@ -353,6 +477,12 @@ The general rule of thumb to keep in mind when preparing inputs for the VACE pip
   - all
   - __call__

+## WanSpeechToVideoPipeline
+
+[[autodoc]] WanSpeechToVideoPipeline
+  - all
+  - __call__
+
 ## WanVideoToVideoPipeline

 [[autodoc]] WanVideoToVideoPipeline
@@ -361,4 +491,4 @@ The general rule of thumb to keep in mind when preparing inputs for the VACE pip

 ## WanPipelineOutput

-[[autodoc]] pipelines.wan.pipeline_output.WanPipelineOutput
+[[autodoc]] pipelines.wan.pipeline_output.WanPipelineOutput