The official implementation of "Compositional Generative Model of Unbounded 4D Cities". (TPAMI 2026)

CityDreamer4D: Compositional Generative Model of Unbounded 4D Cities

Haozhe Xie, Zhaoxi Chen, Fangzhou Hong, Ziwei Liu

S-Lab, Nanyang Technological University

CityDreamer4D Forward Cam - Daytime

Changelog🔥

  • [2025/09/01] Added training and inference instructions.
  • [2025/08/27] Released source code.
  • [2025/08/24] CityDreamer4D accepted by TPAMI.
  • [2025/01/16] Released the CityTopia dataset.
  • [2025/01/15] Repository created.

Cite this work📝

@article{xie2025citydreamer4d,
  title     = {Compositional Generative Model of Unbounded 4{D} Cities},
  author    = {Xie, Haozhe and 
               Chen, Zhaoxi and 
               Hong, Fangzhou and 
               Liu, Ziwei},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume    = {48},
  number    = {1},
  pages     = {312-328},
  doi       = {10.1109/TPAMI.2025.3603078},
  year      = {2026}
}

Datasets📚

Pretrained Models🧠

  • GoogleEarth
  • CityTopia

Installation⚙️

We assume CUDA and PyTorch are already installed in your Python (or Anaconda) environment.

The CityDreamer4D source code is tested with PyTorch 2.4.1 and CUDA 11.8 on Python 3.10. You can install a PyTorch build for CUDA 11.8 with the following command.

pip install torch==2.4.1+cu118 torchvision==0.19.1+cu118 --extra-index-url https://download.pytorch.org/whl/cu118

After that, the Python dependencies can be installed as follows.

git clone https://github.com/hzxie/CityDreamer4D
cd CityDreamer4D
CITY_DREAMER_HOME=`pwd`
pip install -r requirements.txt
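Before compiling the extensions, you may want to confirm that the core dependencies resolved correctly. The snippet below is a convenience sketch, not part of the repository; the helper name check_deps is ours, and requirements.txt remains the authoritative dependency list.

```python
import importlib.util

def check_deps(packages):
    # Return a {package: importable} map for the given top-level module names.
    # find_spec() locates a module without importing it.
    return {p: importlib.util.find_spec(p) is not None for p in packages}

# The two packages installed explicitly above
print(check_deps(["torch", "torchvision"]))
```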

The CUDA extensions can be compiled and installed with the following commands.

cd $CITY_DREAMER_HOME/extensions
for e in */
do
  cd $CITY_DREAMER_HOME/extensions/$e
  pip install .
done

Inference🚀

For the GoogleEarth dataset, 24 GB of VRAM is sufficient (tested on an RTX 3090). For the CityTopia dataset, at least 48 GB of VRAM is required (tested on an A6000).
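To check whether your GPU meets these VRAM requirements, you can query the driver before launching inference. This is an illustrative helper (not part of the repository) that shells out to nvidia-smi when it is available:

```python
import shutil
import subprocess

def gpu_memory_report():
    # Query GPU name and total memory via nvidia-smi (ships with the NVIDIA driver).
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found; is the NVIDIA driver installed?"
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"],
        capture_output=True, text=True,
    )
    return result.stdout

print(gpu_memory_report())
```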

CityTopia-style Generation

To generate a CityTopia-style city, first download the CityTopia dataset (CityTopia-Annotations-1080p.zip). Then run:

python3 scripts/dataset_generator.py --data_dir /path/to/citytopia
python3 scripts/traffic_scenario_generator.py --city City01 --steps 120
python3 scripts/inference.py \
  --dataset CITY_SAMPLE \
  --city_sample_dir /path/to/citytopia/City01 \
  --bg_ckpt /path/to/bg-ckpt.pth \
  --bldg_ckpt /path/to/bldg-ckpt.pth \
  --car_ckpt /path/to/car-ckpt.pth

GoogleEarth-style Generation

The script also supports generating cities in GoogleEarth style. Make sure you have downloaded the OSM dataset before running:

python3 scripts/inference.py \
  --dataset GOOGLE_EARTH \
  --city_osm_dir /path/to/osm \
  --bg_ckpt /path/to/bg-ckpt.pth \
  --bldg_ckpt /path/to/bldg-ckpt.pth

The generated video will be saved at output/rendering.mp4.
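A minimal way to verify that inference produced the video (using the output path stated above; the helper name is illustrative):

```python
from pathlib import Path

def rendering_status(path="output/rendering.mp4"):
    # Report whether the inference output exists and how large it is.
    video = Path(path)
    if video.exists():
        return f"Rendered {video} ({video.stat().st_size / 1e6:.1f} MB)"
    return f"{video} not found; run scripts/inference.py first."

print(rendering_status())
```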

Training🏋️

This section provides instructions for training on the CityTopia dataset. For training with the GoogleEarth dataset, please refer to the CityDreamer README.

Dataset Preparation

To generate a CityTopia-style city, first download the CityTopia dataset (CityTopia-Annotations-1080p.zip). Then run:

python3 scripts/dataset_generator.py --data_dir /path/to/citytopia

Background Stuff Generator Training

Update config.py

Make sure the config matches the following lines.

cfg.CONST.DATASET                                = "CITY_SAMPLE"
cfg.NETWORK.GANCRAFT.SKY_ENABLED                 = True

Launch Training 🚀

torchrun --nnodes=1 --nproc_per_node=8 --standalone run.py

Building Instance Generator Training

Update config.py

Make sure the config matches the following lines.

cfg.CONST.DATASET                                = "CITY_SAMPLE"
cfg.NETWORK.GANCRAFT.STYLE_DIM                   = 256
cfg.NETWORK.GANCRAFT.ENCODER                     = "LOCAL"
cfg.NETWORK.GANCRAFT.ENCODER_OUT_DIM             = 64
cfg.NETWORK.GANCRAFT.POS_EMD                     = "SIN_COS"
cfg.NETWORK.GANCRAFT.POS_EMD_INCUDE_CORDS        = False
cfg.TRAIN.GANCRAFT.REC_LOSS_FACTOR               = 0
cfg.TRAIN.GANCRAFT.PERCEPTUAL_LOSS_FACTOR        = 0
cfg.TEST.GANCRAFT.CROP_SIZE                      = (360, 180)
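For intuition, POS_EMD = "SIN_COS" selects a sinusoidal positional embedding. A minimal sketch of that general technique follows; this is not the repository's actual implementation, and the function name and frequency count are illustrative only:

```python
import math

def sincos_embed(x, num_freqs=8):
    # Map a scalar coordinate to [sin(2^i * x), cos(2^i * x)] pairs,
    # one pair per frequency band -- the standard sinusoidal embedding.
    out = []
    for i in range(num_freqs):
        freq = 2.0 ** i
        out.extend([math.sin(freq * x), math.cos(freq * x)])
    return out

print(len(sincos_embed(0.5)))  # 2 values per frequency band
```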

Launch Training 🚀

torchrun --nnodes=1 --nproc_per_node=8 --standalone run.py

Vehicle Instance Generator Training

Update config.py

Make sure the config matches the following lines.

cfg.CONST.DATASET                                = "CITY_SAMPLE"
cfg.NETWORK.GANCRAFT.STYLE_DIM                   = 256
cfg.NETWORK.GANCRAFT.POS_EMD                     = "SIN_COS"
cfg.TEST.GANCRAFT.CROP_SIZE                      = (360, 180)

Launch Training 🚀

torchrun --nnodes=1 --nproc_per_node=8 --standalone run.py

License📄

This project is licensed under NTU S-Lab License 1.0. Redistribution and use should follow this license.
