
This is a PyTorch implementation of our paper:

Hierarchical Discrete Distribution Decomposition for Match Density Estimation (CVPR 2019)

Zhichao Yin, Trevor Darrell, Fisher Yu

We propose a framework suitable for learning probabilistic pixel correspondences. It has applications including but not limited to stereo matching and optical flow, with inherent uncertainty estimation. HD3 achieves state-of-the-art results for both tasks on established benchmarks (KITTI & MPI Sintel).

An arXiv preprint is available.


This code has been tested with Python 3.6, PyTorch 1.0, and CUDA 9.0 on Ubuntu 16.04.

Getting Started

  • Install PyTorch 1.0. We recommend using anaconda3 to manage the Python environment. You can then install all the dependencies with:
pip install -r requirements.txt

Model Training

To train a model on a specific dataset, simply run

bash scripts/

Note that the scripts contain several placeholders which you should replace with your own choices. For instance, you can specify the dataset type (e.g. FlyingChairs) via --dataset_name, change the network architecture via --encoder and --decoder, and switch between tasks (stereo or flow) via --task. You can also partially load the weights of a pretrained backbone network via --pretrain_base (download the ImageNet-pretrained DLA-34 here), or strictly initialize all weights from a pretrained model via --pretrain.
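As a rough sketch, a filled-in command might look like the following. The script name, the data path, and the encoder/decoder values are hypothetical placeholders; only the flag names above come from this repo:

```shell
# Hypothetical example: train an optical-flow model on FlyingChairs.
# The script name and option values below are illustrative placeholders,
# not verified names from this repository.
bash scripts/train_flow.sh \
    --task flow \
    --dataset_name FlyingChairs \
    --encoder your-encoder-choice \
    --decoder your-decoder-choice \
    --pretrain_base /path/to/dla34-imagenet.pth
```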

You can then start a TensorBoard session with

tensorboard --logdir=/path/to/log/files --port=8964

and visualize your training progress by accessing http://localhost:8964 in your browser.

  • We provide the learning rate schedules and augmentation configurations used in all of our experiments. For other detailed hyperparameters, please refer to our paper in order to reproduce our results.

Model Inference

To test a model on a folder of images, please run

bash scripts/

Please provide a list of image pair names and pass it to --data_list. The script will generate predictions for every pair of images and save them in --save_folder with the same folder hierarchy as the input images. You can choose the saved flow format (e.g. png or flo) via --flow_format. When the folder contains images of different sizes (e.g. KITTI), please make sure --batch_size is 1.
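If you save predictions in the flo format, they follow the standard Middlebury .flo layout (a float32 magic value 202021.25, then int32 width and height, then interleaved float32 u/v components). As a sketch, a pair of helpers along these lines can round-trip such files; the function names are our own, not part of this repo:

```python
import struct

import numpy as np

FLO_MAGIC = 202021.25  # Middlebury sanity-check value at the start of every .flo file


def write_flo(path, flow):
    """Write an (H, W, 2) float32 flow field to a Middlebury .flo file."""
    h, w, _ = flow.shape
    with open(path, "wb") as f:
        f.write(struct.pack("fii", FLO_MAGIC, w, h))
        flow.astype(np.float32).tofile(f)


def read_flo(path):
    """Read a Middlebury .flo file back into an (H, W, 2) float32 array."""
    with open(path, "rb") as f:
        magic, w, h = struct.unpack("fii", f.read(12))
        assert abs(magic - FLO_MAGIC) < 1e-3, "invalid .flo file"
        data = np.fromfile(f, dtype=np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)
```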

  • When ground truth is available, you can optionally pass --evaluate to calculate the End-Point-Error of your predictions. Please make sure each line of the list consists of img-name1 img-name2 gtruth-name.
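For reference, End-Point-Error is the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal NumPy version (our own helper, not the repo's implementation) looks like:

```python
import numpy as np


def end_point_error(pred, gt):
    """Mean End-Point-Error between two (H, W, 2) flow fields:
    the per-pixel Euclidean distance, averaged over all pixels."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())


# A prediction that is off by (3, 4) at every pixel has an EPE of exactly 5.0:
gt = np.zeros((2, 2, 2))
pred = gt + np.array([3.0, 4.0])
print(end_point_error(pred, gt))  # → 5.0
```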

Model Zoo

We provide pretrained models for all of our experiments. To download them, simply run

bash scripts/

Model names follow the format model-name_dataset-names. Models are named hd3f for optical flow and hd3s for stereo matching, with a suffix of c for models that include the context module. The dataset-names part indicates the dataset schedule used to train the model. You should be able to obtain similar results by running the test script we provide.


Citation

If you find our work or our repo useful in your research, please consider citing our paper:

@InProceedings{Yin_2019_CVPR,
  author = {Yin, Zhichao and Darrell, Trevor and Yu, Fisher},
  title = {Hierarchical Discrete Distribution Decomposition for Match Density Estimation},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2019}
}


FAQ

  • Why are the model outputs different across runs, even for the same input?

    Some PyTorch ops are non-deterministic (e.g. torch.Tensor.scatter_add_). If you fix all the random seeds for Python and PyTorch, you should get identical results.
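    For example, a seed-fixing helper along these lines (a sketch; adjust for your PyTorch version) pins down the controllable sources of randomness, though individual non-deterministic CUDA ops can still vary:

```python
import random

import numpy as np


def set_seed(seed):
    """Fix the Python, NumPy, and (if installed) PyTorch RNGs."""
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Trade speed for reproducibility in CuDNN:
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # PyTorch not installed; Python/NumPy seeding still applies
```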

  • Why does the model finetuned on the KITTI dataset exhibit artifacts in the sky regions?

    This is due to the limited amount of data available during the finetuning stage. Effective remedies include adding a smoothness loss term during finetuning and distilling knowledge from the model pretrained on the synthetic datasets.

  • Why does my evaluation metric look abnormal?

    Please confirm that the synthetic dataset you are using consists of the DispNet/FlowNet2.0 dataset subsets rather than the original complete version (the data formats differ subtly).


Acknowledgements

We thank Houning Hu for making the teaser image, Simon Niklaus for the correlation operator and Clément Pinard for the FlowNet implementation.

