
ScopeFlow: Dynamic Scene Scoping for Optical Flow

The official PyTorch code for ScopeFlow: Dynamic Scene Scoping for Optical Flow (A. Bar-Haim & L. Wolf, CVPR 2020).


Our model led the MPI Sintel benchmark from October 2019 to March 2020.

With this repository, we provide our multi-stage pipeline and configurations for training optical flow models. We encourage others to try this pipeline and test it on other optical flow model architectures.

Repository structure

  • Main files for execution: train.py, evaluate.py.
  • Config directory: common configuration files for training and evaluation.
  • Datasets directory: dataset loaders.
  • Lib directory: the main training functionality.
  • Utils directory: common Python, PyTorch, and optical flow helpers.
  • Models directory: the initially supported models.
  • Checkpoints directory: an example of a pre-trained model.

Installation

  1. This code was developed with:

    • Python 3.6
    • PyTorch 0.4.1 (Ubuntu 14+, CUDA 8.0)
  2. The Python packages (for CUDA 8.0) are specified in the requirements file. To install them:

    virtualenv <venv_path> --python=python3.6
    . <venv_path>/bin/activate
    pip3 install -r requirements_cuda8.txt
    
  3. The default model uses the CUDA 8.0 correlation package. To install it:

    # optional
    sudo apt-get install gcc-5 g++-5 -y
    bash -x scripts/install_correlation.sh
    
  4. Datasets used in this project:

    We place our datasets under a 'data' directory that sits alongside this repository; to change the paths, see the configuration files under the config directory (an example layout is sketched right after this list). For the KITTI datasets, we extract both under a single directory named 'kitticomb'.
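
For reference, a possible directory layout is sketched below; only the 'data' and 'kitticomb' names come from the setup above, while the other dataset directory names and paths are placeholders (the yaml files under the config directory remain the authoritative reference for the exact paths):

    # Hypothetical layout: the data directory sits next to this repository.
    #   <workspace>/
    #     ScopeFlow/       <- this repository
    #     data/
    #       sintel/        <- placeholder name for MPI Sintel
    #       kitticomb/     <- KITTI 2012 and KITTI 2015 extracted together
    mkdir -p ../data/kitticomb
    ln -s /path/to/MPI-Sintel ../data/sintel   # placeholder path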

Pretrained models

To download the pre-trained models, run:

bash scripts/download_models.sh

Training

To start training from scratch or from one of our checkpoints, use the training yamls provided under the 'config/training/' directory together with the command-line arguments. Please make sure that your data directory is properly configured.

For example, fine-tuning on Sintel can be done with:

python train.py -f config/training/sintel_ft.yaml

To visualize the augmentation process (stop it manually with Ctrl+C):

python train.py -f config/training/sintel_ft.yaml --show_aug true --crop_min_ratio 0.5 --num_workers 0 --batch_size 1

To select specific GPU devices, prefix the training command with CUDA_VISIBLE_DEVICES=<COMMA_SEP_GPU_NUMBERS>.
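
For example, to run the Sintel fine-tuning above on GPUs 0 and 1 (the device indices here are only an illustration):

CUDA_VISIBLE_DEVICES=0,1 python train.py -f config/training/sintel_ft.yaml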

To see all supported configurations:

python train.py --help

Inference

For evaluation, please use the evaluation yamls provided under the 'config/evaluation/' directory. Please make sure that your data directory is properly configured.

For example, evaluating the combined Sintel model can be done with:

python evaluate.py -f config/evaluation/eval_template_sintel.yaml

To save the flow and occlusion results under the output directory, use:

python evaluate.py -f config/evaluation/eval_template_sintel.yaml --save_result_png true --save_result_occ true

To see all supported configurations:

python evaluate.py --help

Citation

If you find this work useful, please cite our paper:

@InProceedings{Bar-Haim_2020_CVPR,
author = {Bar-Haim, Aviram and Wolf, Lior},
title = {ScopeFlow: Dynamic Scene Scoping for Optical Flow},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Credits

This repository reuses training functionality (e.g., the progress bar, general training flow, argument parser, logger, and common optical flow utilities) from the following great repositories:
