# Towards Segmenting Anything That Moves

[Pre-print] [Website]

Achal Dave, Pavel Tokmakov, Deva Ramanan

## Setup

1. Download models and extract them to `release/models`.
2. Install PyTorch 0.4.0.
3. Run `git submodule update --init`.
4. Set up detectron-pytorch.
5. Set up flownet2. If you just want to use the appearance stream, you can skip this step.
6. Install requirements with `pip install -r requirements.txt`.¹
7. Copy `./release/example_config.yaml` to `./release/config.yaml`, and edit fields marked with `***EDIT THIS***`.
8. Add the root directory to `PYTHONPATH`: `source ./env.sh activate`.
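
As a consolidated reference, steps 2–8 roughly correspond to the shell transcript below. This is a sketch, not a verified install script: the PyTorch wheel name and the detectron-pytorch/flownet2 steps depend on your environment.

```bash
# Step 1: download the models and extract them to release/models
# (manual step) before running anything below.

# Step 2: PyTorch 0.4.0 (wheel name is an assumption; pick the build
# matching your CUDA version from pytorch.org)
pip install torch==0.4.0

# Step 3: fetch submodules (detectron_pytorch, git-state)
git submodule update --init

# Steps 4-5: set up detectron-pytorch, and flownet2 if you need the
# motion/joint models, following their own instructions.

# Step 6: remaining python dependencies
pip install -r requirements.txt

# Step 7: create the config, then edit fields marked ***EDIT THIS***
cp ./release/example_config.yaml ./release/config.yaml

# Step 8: add the repository root to PYTHONPATH
source ./env.sh activate
```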

## Running models

All scripts needed to run our models on standard datasets, as well as on new videos, are provided in the `./release` directory. The rest of the repository contains a number of scripts that are not used for the final results; they can be safely ignored, but are provided in case anyone finds them useful.

### Run on your own video

1. **Extract frames:** To run the model on your own video, first dump the frames from your video. For a single video, you can just use:

   ```bash
   ffmpeg -i video.mp4 %04d.jpg
   ```

   Alternatively, you can use this script to extract frames in parallel on multiple videos (a stand-in sketch appears after this list).

2. **Run joint model:** To run the joint model, run the following commands:

   ```bash
   # Inputs
   FRAMES_DIR=/path/to/frames/dir
   # Outputs
   OUTPUT_DIR=/path/to/output/dir

   python release/custom/run.py \
       --model joint \
       --frames-dir ${FRAMES_DIR} \
       --output-dir ${OUTPUT_DIR}
   ```

3. **Run appearance-only model:** To run only the appearance model, you don't need to compute optical flow or set up flownet2:

   ```bash
   python release/custom/run.py \
       --model appearance \
       --frames-dir ${FRAMES_DIR} \
       --output-dir ${OUTPUT_DIR}
   ```
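
If you prefer not to use the linked helper script, a minimal stand-in for parallel extraction is a shell loop that launches one `ffmpeg` per video. The directory layout below is an assumption, not the script's actual interface:

```bash
# Extract frames from every .mp4 under VIDEO_DIR into one
# sub-directory of FRAME_ROOT per video, all videos in parallel.
VIDEO_DIR=/path/to/videos
FRAME_ROOT=/path/to/frames

for video in "${VIDEO_DIR}"/*.mp4; do
    name=$(basename "${video}" .mp4)
    mkdir -p "${FRAME_ROOT}/${name}"
    ffmpeg -loglevel error -i "${video}" "${FRAME_ROOT}/${name}/%04d.jpg" &
done
wait  # block until every background ffmpeg job has finished
```

For a large collection, consider bounding the parallelism (e.g., with GNU `parallel` or `xargs -P`) instead of backgrounding every job at once.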

## FBMS, DAVIS 2016/2017, YTVOS

The instructions for the FBMS, DAVIS 2016/2017, and YTVOS datasets are roughly the same. Once you have downloaded the dataset and edited the paths in `./release/config.yaml`, run the following scripts:

```bash
# or davis16, davis17, ytvos
dataset=fbms
python release/${dataset}/compute_flow.py
python release/${dataset}/infer.py
python release/${dataset}/track.py
# For evaluation:
python release/${dataset}/evaluate.py
```

Note that by default, we use our final model trained on COCO, FlyingThings3D, DAVIS, and YTVOS. For YTVOS, we provide the option to run a model that was trained without YTVOS, to evaluate generalization. To activate this, pass `--without-ytvos-train` to `release/ytvos/infer.py` and `release/ytvos/track.py`, as shown below.
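
For example, the generalization run would look like this (a sketch; the flag and the two scripts that accept it come from the note above):

```bash
# YTVOS inference and tracking with the model trained without YTVOS
python release/ytvos/infer.py --without-ytvos-train
python release/ytvos/track.py --without-ytvos-train
```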


¹ This should contain all the requirements, but the file was created manually, so some pip modules may be missing. If you run into an import error, try `pip install`-ing the missing module, and/or file an issue.
