Depth from Video in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras

This repository contains a preliminary release of code for the paper bearing the title above (https://arxiv.org/abs/1904.04998), to appear at ICCV 2019. The code is based on the Struct2depth repository (https://github.com/tensorflow/models/tree/master/research/struct2depth) and uses the same data format.

This release supports training a depth and motion prediction model with either learned or specified camera intrinsics. The motion model produces the 6 degrees of freedom of camera motion and a dense translation vector field for every pixel in the scene. As input, the code requires triplets of RGB frames in which possibly-moving objects are masked out.
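The data layout is that of Struct2depth. As an illustration only (not part of this release), the sketch below shows one way to decompose a training example, assuming each example is a single image with the three frames of the triplet concatenated horizontally, a companion "-fseg" image carrying the object masks, and a "_cam.txt" file holding a flattened, comma-separated 3x3 intrinsics matrix; verify the exact filenames and layout against the data_example folder.

import numpy as np
from PIL import Image

def load_triplet(image_path):
  """Splits a horizontally concatenated frame triplet into its three frames."""
  triplet = np.array(Image.open(image_path))
  frame_width = triplet.shape[1] // 3
  return [triplet[:, i * frame_width:(i + 1) * frame_width] for i in range(3)]

def load_intrinsics(cam_txt_path):
  """Reads a comma-separated, row-major 3x3 intrinsics matrix."""
  with open(cam_txt_path) as f:
    values = [float(v) for v in f.read().strip().split(',')]
  return np.array(values).reshape(3, 3)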

Sample command line:

python -m depth_from_video_in_the_wild.train \
   --checkpoint_dir=$MY_CHECKPOINT_DIR \
   --data_dir=$MY_DATA_DIR \
   --imagenet_ckpt=$MY_IMAGENET_CHECKPOINT

MY_CHECKPOINT_DIR is where the trained model checkpoints are to be saved.

MY_DATA_DIR is where the training data (in Struct2depth's format) is stored. The data_example folder contains a single training example expressed in this format.

MY_IMAGENET_CHECKPOINT is the path to a pretrained ImageNet checkpoint used to initialize the encoder of the depth prediction model.

On Cityscapes we used the default batch size (4); for KITTI we used a batch size of 16 (add --batch_size=16 to the training command).

A command line for running a single training step on the single example in data_example (for testing):

python -m depth_from_video_in_the_wild.train \
  --data_dir=depth_from_video_in_the_wild/data_example \
  --checkpoint_dir=/tmp/my_experiment --train_steps=1

To use the given intrinsics instead of learning them, add --nolearn_intrinsics to the command.

Pretrained checkpoints and respective depth metrics

The table below provides checkpoints trained on Cityscapes, KITTI, and their mixture, together with the respective Absolute Relative (Abs Rel) depth error metrics (a sketch of the metric computation follows the table). The metrics differ slightly from the results in Table A3 of the paper because the latter were averaged over multiple checkpoints, whereas the metrics below relate to a single specific checkpoint. All checkpoints were harvested after training on nearly 4M images (since the datasets are much smaller than 4M, this of course means multiple epochs).

| Trained on | Intrinsics | Abs Rel on Cityscapes | Abs Rel on KITTI | Checkpoint |
|---|---|---|---|---|
| Cityscapes | Learned | 0.1279 | 0.1729 | download |
| KITTI | Learned | 0.1679 | 0.1262 | download |
| Cityscapes + KITTI | Learned | 0.1196 | 0.1231 | download |
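The sketch below illustrates the Abs Rel metric as it is commonly computed for scale-ambiguous monocular predictions (median scaling, depth capped at 80m). It is an illustration of the metric, not the evaluation code used for the paper, and the cap and scaling convention are assumptions.

import numpy as np

def abs_rel(pred_depth, gt_depth, min_depth=1e-3, max_depth=80.0):
  """Absolute Relative depth error, with median scaling of the prediction."""
  mask = (gt_depth > min_depth) & (gt_depth < max_depth)
  pred, gt = pred_depth[mask], gt_depth[mask]
  pred = pred * (np.median(gt) / np.median(pred))  # resolve the unknown scale
  pred = np.clip(pred, min_depth, max_depth)
  return np.mean(np.abs(pred - gt) / gt)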

Pretrained checkpoints and respective odometry results

The command for generating a trajectory from a checkpoint given an odometry test set is:

python -m depth_from_video_in_the_wild.trajectory_inference \
  --checkpoint_path=$YOUR_CHECKPOINT_PATH \
  --odometry_test_set_dir=$DIRECTORY_WHERE_YOU_STORE_THE_ODOMETRY_TEST_SET \
  --output_dir=$DIRECTORY_WHERE_THE_TRAJECTORIES_WILL_BE_SAVED \
  --alsologtostderr
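For a quick sanity check of a generated trajectory, the sketch below assumes the output follows the KITTI odometry convention of one flattened 3x4 camera-to-world pose matrix per line; both the output format and the file name used here are assumptions to be verified against the files written to --output_dir.

import numpy as np

def trajectory_positions(path):
  """Returns the N x 3 camera positions from a pose file."""
  poses = np.loadtxt(path).reshape(-1, 3, 4)  # one flattened 3x4 matrix per row
  return poses[:, :, 3]  # translation column of each pose

positions = trajectory_positions('/tmp/trajectories/09.txt')  # hypothetical path
print('path length:', np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())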

We observed that odometry generally took longer to converge. The table below lists the checkpoints used for the odometry evaluation in the paper. All checkpoints were trained on KITTI with a batch size of 16; the learning rate and number of training steps are given in the table.

| Intrinsics | Learning rate | Training steps | Checkpoint | Seq. 09 | Seq. 10 |
|---|---|---|---|---|---|
| Given | 3e-5 | 480377 | download | trajectory | trajectory |
| Learned | 1e-4 | 413174 | download | trajectory | trajectory |
| Learned & corrected | same as above | same as above | same as above | trajectory | trajectory |

The code for generating the "Learned & corrected" results is not yet publicly available.

YouTube8M IDs of the videos used in the paper

1ofm 2Ffk 2Gc7 2hdG 4Kdy 4gbW 70eK 77cq 7We1 8Eff 8W2O 8bfg 9q4L A8cd AHdn Ai8q B8fJ BfeT C23C C4be CP6A EOdA Gu4d IdeB Ixfs Kndm L1fF M28T M92S NSbx NSfl NT57 Q33E Qu62 U4eP UCeG VRdE W0ch WU6A WWdu WY2M XUeS YLcc YkfI ZacY aW8r bRbL d79L d9bU eEei ePaw iOdz iXev j42G j97W k7fi kxe2 lIbd lWeZ mw3B nLd8 olfE qQ8k qS6J sFb2 si9H uofG yPeZ zger

The YouTube8M website provides instructions for mapping these IDs to YouTube video IDs. Two consecutive frames were sampled from each video every second.
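The sampling scheme described above can be approximated with OpenCV; the sketch below illustrates taking two consecutive frames from a video once per second, and is not the pipeline that was used for the paper.

import cv2

def sample_frame_pairs(video_path):
  """Collects pairs of consecutive frames, one pair per second of video."""
  capture = cv2.VideoCapture(video_path)
  frames_per_second = int(round(capture.get(cv2.CAP_PROP_FPS))) or 30  # fallback
  pairs, previous, index = [], None, 0
  while True:
    ok, frame = capture.read()
    if not ok:
      break
    remainder = index % frames_per_second
    if remainder == 0:
      previous = frame  # first frame of the pair for this second
    elif remainder == 1 and previous is not None:
      pairs.append((previous, frame))  # the immediately following frame
      previous = None
    index += 1
  capture.release()
  return pairs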
