# DAVIS 2017 Semi-supervised and Unsupervised evaluation package

This package evaluates semi-supervised and unsupervised video multi-object segmentation models on the DAVIS 2017 dataset.

This tool is also used to evaluate the submissions on the Codalab site for the Semi-supervised DAVIS Challenge and the Unsupervised DAVIS Challenge.

## Installation

```bash
# Download the code
git clone https://github.com/davisvideochallenge/davis2017-evaluation.git && cd davis2017-evaluation
# Install it - Python 3.6 or higher required
python setup.py install
```
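
If you prefer pip over invoking `setup.py` directly, installing from the cloned repository root should work as well (a sketch, assuming a reasonably recent pip):

```bash
# Install the package with pip from the repository root
pip install .
```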

If you don't want to specify the DAVIS path every time, you can modify the default value of the variable `default_davis_path` in `evaluation_method.py` (the following examples assume that you have set it). Otherwise, you can specify the path on every call with the flag `--davis_path /path/to/DAVIS` when calling `evaluation_method.py`.
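
For example, to run the semi-supervised evaluation without editing the default, pass the dataset location explicitly (the paths below are placeholders):

```bash
# Same evaluation as below, with the DAVIS path given on the command line
python evaluation_method.py --task semi-supervised --results_path results/semi-supervised/osvos --davis_path /path/to/DAVIS
```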

Once the evaluation has finished, two different CSV files will be generated inside the folder with the results:

* `global_results-SUBSET.csv` contains the overall results for a certain SUBSET.
* `per-sequence_results-SUBSET.csv` contains the per-sequence results for a certain SUBSET.

If a folder that already contains these files is evaluated again, the results will be read from the CSV files instead of being recomputed.
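
For a quick look at the generated files, you can print them from the shell (a sketch; the `val` subset name and the results folder are assumptions for illustration):

```bash
# Overall metrics for the whole subset
cat results/semi-supervised/osvos/global_results-val.csv
# First rows of the per-sequence metrics
head results/semi-supervised/osvos/per-sequence_results-val.csv
```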

## Evaluate DAVIS 2017 Semi-supervised

To evaluate your semi-supervised method on DAVIS 2017, execute the following command, substituting `results/semi-supervised/osvos` with the path of the folder that contains your results:

```bash
python evaluation_method.py --task semi-supervised --results_path results/semi-supervised/osvos
```

The example semi-supervised results have been generated using OSVOS.

## Evaluate DAVIS 2017 Unsupervised

To evaluate your unsupervised method on DAVIS 2017, execute the following command, substituting `results/unsupervised/rvos` with the path of the folder that contains your results:

```bash
python evaluation_method.py --task unsupervised --results_path results/unsupervised/rvos
```

The example unsupervised results have been generated using RVOS.

## Evaluation running in Codalab

If you would like to know which evaluation script runs on the Codalab servers, check `evaluation_codalab.py`.

This package runs in the following Docker image: `scaelles/codalab:anaconda3-2018.12`.
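
To reproduce that environment locally, you could pull and enter the same image (a sketch, assuming Docker is installed; the exact command and mounts used by the Codalab workers are not specified here):

```bash
# Pull the image used on the Codalab servers and open a shell inside it
docker pull scaelles/codalab:anaconda3-2018.12
docker run -it --rm scaelles/codalab:anaconda3-2018.12 /bin/bash
```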

## Citation

Please cite both papers in your publications if DAVIS or this code helps your research.

```bibtex
@article{Caelles_arXiv_2019,
  author = {Sergi Caelles and Jordi Pont-Tuset and Federico Perazzi and Alberto Montes and Kevis-Kokitsi Maninis and Luc {Van Gool}},
  title = {The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation},
  journal = {arXiv},
  year = {2019}
}

@article{Pont-Tuset_arXiv_2017,
  author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year = {2017}
}
```