InfinityStar ⭐️: Unified Spacetime AutoRegressive Modeling for Visual Generation
- Nov 7, 2025: 🔥 Paper, training and inference code, checkpoints, and demo released!
- Sep 18, 2025: 🎉 InfinityStar is accepted as NeurIPS 2025 Oral.
We provide a demo website for you to play with InfinityStar and generate videos. Enjoy the fun of bitwise video autoregressive modeling!
- Training Code
- Web Demo
- InfinityStar Inference Code
- InfinityStar Models Checkpoints
- InfinityStar-Interact Checkpoints & Inference Code
We introduce InfinityStar, a unified spacetime autoregressive framework for high-resolution image and dynamic video synthesis. Building on the recent success of autoregressive modeling in both vision and language, our purely discrete approach jointly captures spatial and temporal dependencies within a single architecture. This unified design naturally supports a variety of generation tasks such as text-to-image, text-to-video, image-to-video, and long-duration video synthesis via straightforward temporal autoregression. Through extensive experiments, InfinityStar scores 83.74 on VBench, outperforming all autoregressive models by large margins, even surpassing diffusion competitors like HunyuanVideo. Without extra optimizations, our model generates a 5s, 720p video approximately 10x faster than leading diffusion-based methods. To our knowledge, InfinityStar is the first discrete autoregressive video generator capable of producing industrial-level 720p videos. We release all code and models to foster further research in efficient, high-quality video generation.
- We use FlexAttention to speed up training, which requires `torch>=2.5.1`.
- Install other pip packages via `pip3 install -r requirements.txt`.
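Since FlexAttention only ships with recent PyTorch builds, a quick version check before launching training can save a confusing failure later. The helper below is an illustrative sketch (not part of the repo) that compares dotted version strings numerically, ignoring local build suffixes like `+cu121`:

```python
# Illustrative sanity check (not part of the official repo): FlexAttention
# needs torch>=2.5.1, so verify the installed version before training.
def meets_min_version(installed: str, required: str = "2.5.1") -> bool:
    """Numerically compare dotted version strings, ignoring local suffixes."""
    def parse(v: str) -> tuple:
        # "2.6.0+cu121" -> (2, 6, 0)
        return tuple(int(p) for p in v.split("+")[0].split(".")[:3])
    return parse(installed) >= parse(required)

print(meets_min_version("2.5.1"))  # True
print(meets_min_version("2.4.0"))  # False
```

In practice you would pass `torch.__version__` as the `installed` argument.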
We provide a comprehensive workflow for training and finetuning our model, covering data organization, feature extraction, and training scripts. For detailed instructions, please refer to data/README.md.
- 720p Video Generation: Use `tools/infer_video_720p.py` to generate 5-second videos at 720p resolution. Due to the high computational cost of training, our released 720p model is trained for 5-second video generation. This script also supports image-to-video generation by specifying an image path.

  `python3 tools/infer_video_720p.py`
- 480p Variable-Length Video Generation: We also provide an intermediate checkpoint for 480p resolution, capable of generating videos of 5 and 10 seconds. Since this model is not specifically optimized for Text-to-Video (T2V), we recommend using the experimental Image-to-Video (I2V) and Video-to-Video (V2V) modes for better results. To specify the video duration, edit the `generation_duration` variable in `tools/infer_video_480p.py` to either 5 or 10. This script also supports image-to-video and video continuation by providing a path to an image or a video.

  `python3 tools/infer_video_480p.py`
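The duration switch described above can be sketched as follows. The variable name `generation_duration` comes from `tools/infer_video_480p.py`; the validation guard is our own illustrative addition, not code from the repo:

```python
# Sketch of the 480p duration setting. The checkpoint supports only two
# clip lengths, so guard against other values before running inference.
SUPPORTED_DURATIONS = (5, 10)  # seconds supported by the 480p checkpoint

generation_duration = 10  # edit to 5 or 10 before running the script
if generation_duration not in SUPPORTED_DURATIONS:
    raise ValueError(
        f"generation_duration must be one of {SUPPORTED_DURATIONS}, "
        f"got {generation_duration}"
    )
```

Any other value would make the script request a clip length the checkpoint was never trained for, so failing early is the safer behavior.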
If our work assists your research, feel free to give us a star ⭐ or cite us using:
@article{InfinityStar,
title={InfinityStar: Unified Spacetime AutoRegressive Modeling for Visual Generation},
  author={Jinlai Liu and Jian Han and Bin Yan and Hui Wu and Fengda Zhu and Xing Wang and Yi Jiang and Bingyue Peng and Zehuan Yuan},
  journal={Advances in Neural Information Processing Systems},
year={2025},
}
This project is licensed under the MIT License - see the LICENSE file for details.