
StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation

[Paper]   [Project Page]   [Jittor Version]   [πŸ€— Comic Generation Demo]   [Replicate]   [Run Comics Demo in Colab]


Official implementation of StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation, accepted as a NeurIPS 2024 Spotlight paper.

Demo Video


Update History

You can visit here to view the update history.

🌠 Key Features:

StoryDiffusion can create magical stories by generating consistent images and videos. Our work has two main parts:

  1. Consistent self-attention for character-consistent image generation over long-range sequences. It is hot-pluggable and compatible with all SD1.5- and SDXL-based image diffusion models (see the sketch after this list). In the current implementation, the user needs to provide at least 3 text prompts for the consistent self-attention module; we recommend at least 5-6 text prompts for a better layout arrangement.
  2. Motion predictor for long-range video generation, which predicts motion between condition images in a compressed image semantic space, enabling larger-scale motion prediction.
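
To make part 1 concrete, here is a minimal PyTorch sketch of the consistent-self-attention idea (not the official implementation): each image in the batch attends to its own tokens plus tokens randomly sampled from the other images, which encourages a shared character identity across panels. All function and variable names are illustrative assumptions.

import torch
import torch.nn.functional as F

def consistent_self_attention(q, k, v, sample_ratio=0.5):
    # q, k, v: (batch, seq, dim) attention tokens for a batch of images.
    b, n, d = k.shape
    n_sample = int(n * sample_ratio)
    outs = []
    for i in range(b):
        # Start from the image's own tokens, then append tokens sampled
        # from every other image in the batch (same indices for K and V).
        ref_k, ref_v = [k[i]], [v[i]]
        for j in range(b):
            if j == i:
                continue
            idx = torch.randperm(n)[:n_sample]
            ref_k.append(k[j, idx])
            ref_v.append(v[j, idx])
        k_i, v_i = torch.cat(ref_k, dim=0), torch.cat(ref_v, dim=0)
        attn = F.softmax(q[i] @ k_i.T / d ** 0.5, dim=-1)
        outs.append(attn @ v_i)
    return torch.stack(outs)

# Example: 4 prompts (at least 3 are required), 77 tokens, 64-dim features.
q, k, v = (torch.randn(4, 77, 64) for _ in range(3))
print(consistent_self_attention(q, k, v).shape)  # torch.Size([4, 77, 64])

Because the module only changes which key/value tokens each image attends to, it can be hot-plugged into the self-attention layers of an existing SD1.5 or SDXL UNet without retraining.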

πŸ”₯ Examples

Comics generation


Image-to-Video generation (Results are HIGHLY compressed for speed)

Leveraging the images produced through our consistent self-attention mechanism, we can extend the process to create videos by seamlessly transitioning between these images. This can be considered a two-stage approach to long video generation.

Note: results are highly compressed for speed; you can visit our website for the high-quality versions.
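
As a rough illustration of the second stage, the sketch below shows only the interface of a transition predictor that operates on two condition-image embeddings in a compressed semantic space. The actual motion predictor is learned; plain linear interpolation serves here as a stand-in, and all names are assumptions.

import torch

def predict_transition(z_start, z_end, num_frames=16):
    # z_start, z_end: (dim,) embeddings of two condition images in a
    # compressed semantic space. The learned predictor replaces this lerp.
    ts = torch.linspace(0.0, 1.0, num_frames).unsqueeze(1)  # (num_frames, 1)
    return (1 - ts) * z_start + ts * z_end                  # (num_frames, dim)

frames = predict_transition(torch.randn(512), torch.randn(512))
print(frames.shape)  # torch.Size([16, 512])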

Two-stage Long Videos Generation (New Update)

Combining the two parts, we can generate very long and high-quality AIGC videos.

Video1 Video2 Video3

Long Video Results using Condition Images

Given a sequence of user-provided condition images, our Image-to-Video model can generate a video.

Video1 Video2 Video3
Video4 Video5 Video6

Short Videos

Video1 Video2 Video3
Video4 Video5 Video6

🚩 TODO/Updates

  • Comic Results of StoryDiffusion
  • Video Results of StoryDiffusion
  • Source code of Comic Generation
  • Source code of Gradio demo
  • Source code of Video Generation Model
  • Pretrained weights of Video Generation Model

πŸ”§ Dependencies and Installation

conda create --name storydiffusion python=3.10
conda activate storydiffusion
pip install -U pip

# Install requirements
pip install -r requirements.txt
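
After installing, a quick sanity check can confirm the environment, assuming requirements.txt pulls in torch (the demos below expect a CUDA-capable GPU):

# Run inside the storydiffusion environment.
import torch
print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())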

How to use

Currently, we provide two ways for you to generate comics.

Use the jupyter notebook

You can open Comic_Generation.ipynb and run the code.
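
For context, here is a minimal diffusers baseline of what the notebook builds on; without the consistent-self-attention plug-in, character identity may drift between prompts. The model ID and prompts below are assumptions, and this is not the repo's API.

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

prompts = [  # at least 3 prompts; 5-6 recommended for better layout
    "a photo of a young wizard reading a book in a library",
    "a photo of a young wizard casting a spell outdoors",
    "a photo of a young wizard flying on a broom at night",
]
images = pipe(prompts).images  # one image per prompt
for i, img in enumerate(images):
    img.save(f"panel_{i}.png")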

Start a local gradio demo

Run the following command:

(Recommended) We provide a low-GPU-memory version. It was tested on a machine with 24 GB of GPU memory (Tesla A10) and 30 GB of RAM, and it is expected to work well with more than 20 GB of GPU memory.

python gradio_app_sdxl_specific_id_low_vram.py

Contact

If you have any questions, feel free to email ypzhousdu@gmail.com and zhoudaquan21@gmail.com.

Disclaimer

This project strives to have a positive impact on the domain of AI-driven image and video generation. Users are granted the freedom to create images and videos with this tool, but they are expected to comply with local laws and use it responsibly. The developers do not assume any responsibility for potential misuse by users.

Related Resources

The following are some third-party implementations of StoryDiffusion.

API

BibTeX

If you find StoryDiffusion useful for your research and applications, please cite using this BibTeX:

@article{zhou2024storydiffusion,
  title={StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation},
  author={Zhou, Yupeng and Zhou, Daquan and Cheng, Ming-Ming and Feng, Jiashi and Hou, Qibin},
  journal={NeurIPS 2024},
  year={2024}
}
