yiyixuxu (Collaborator) commented Nov 22, 2025

https://huggingface.co/collections/hunyuanvideo-community/hunyuanvideo-15

Example usage:

import torch
from diffusers import HunyuanVideo15Pipeline
from diffusers.utils import export_to_video

dtype = torch.bfloat16
device = "cuda:0"

repo_id = "hunyuanvideo-community/HunyuanVideo-1.5-Diffusers-480p_t2v"

pipeline = HunyuanVideo15Pipeline.from_pretrained(repo_id, torch_dtype=dtype)
# Offload model components to CPU between forward passes to reduce VRAM usage
pipeline.enable_model_cpu_offload()
# Decode video latents in tiles to keep VAE memory usage low
pipeline.vae.enable_tiling()

prompt = "A close-up shot captures a scene on a polished, light-colored granite kitchen counter, illuminated by soft natural light from an unseen window. Initially, the frame focuses on a tall, clear glass filled with golden, translucent apple juice standing next to a single, shiny red apple with a green leaf still attached to its stem. The camera moves horizontally to the right. As the shot progresses, a white ceramic plate smoothly enters the frame, revealing a fresh arrangement of about seven or eight more apples, a mix of vibrant reds and greens, piled neatly upon it. A shallow depth of field keeps the focus sharply on the fruit and glass, while the kitchen backsplash in the background remains softly blurred. The scene is in a realistic style."

# Seed the generator so results are reproducible across runs
generator = torch.Generator(device=device).manual_seed(1)

video = pipeline(
    prompt=prompt,
    generator=generator,
    num_inference_steps=50,
).frames[0]
export_to_video(video, "t2v_480_output.mp4", fps=24)
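The example above passes a manually seeded torch.Generator to the pipeline; a minimal sketch (independent of the pipeline itself) of why this makes generation reproducible — two generators with the same seed produce identical noise tensors:

```python
import torch

# Two generators seeded identically produce the same random draws,
# which is why passing a seeded generator to a diffusers pipeline
# makes the sampled video deterministic across runs.
g1 = torch.Generator(device="cpu").manual_seed(1)
g2 = torch.Generator(device="cpu").manual_seed(1)

noise_a = torch.randn(4, 4, generator=g1)
noise_b = torch.randn(4, 4, generator=g2)

print(torch.equal(noise_a, noise_b))  # same seed, identical tensors
```

The same holds on CUDA devices, which is why the snippet above constructs the generator with `device=device` to match where the latents are sampled.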

HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
