
Custom Datasets

COLMAP Scenes

All of our sample scenes have been generated by COLMAP. To run ADOP on your COLMAP reconstruction use the colmap2adop executable.

Important! This expects a completed dense reconstruction! A sparse point cloud will give poor results, even if augmented!

Usage Example:

export SCENE_BASE=/local.home/Projects/TRIPS/additional_material/colmap/Playground/

build/bin/colmap2adop --sparse_dir $SCENE_BASE/sparse/0/ \
    --image_dir scenes/tt_playground/images/ \
    --point_cloud_file $SCENE_BASE/fused.ply \
    --output_path scenes/playground_test \
    --scale_intrinsics 1 --render_scale 1

You can also check out colmap2adop.sh.

render_scale

The render_scale (see command above) dictates how large the rendered image is relative to the ground truth images. For example, if the input images are 4000x3000 and the render_scale is 0.5, then the adop_viewer will render the images at 2000x1500. This of course impacts efficiency and memory consumption.

The render_scale can also be updated manually by modifying the dataset.ini in the corresponding scene folder.

We recommend setting render_scale before training, because training and inference should be done at the same scale.
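
For example, you could change the scale of the scene created above by editing its dataset.ini from the shell. This is only a sketch: it assumes dataset.ini contains a key literally named render_scale, which you should verify against a sample scene first.

# Sketch only: halve the render resolution of scenes/playground_test
# (assumes dataset.ini has a key named "render_scale")
sed -i 's/^render_scale.*/render_scale = 0.5/' scenes/playground_test/dataset.ini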

Point Cloud Augmentation

ADOP relies on the point cloud to store texture and geometry information. If the point cloud is too sparse compared to the image resolution, holes may appear or texture detail may be lost. A simple trick to improve render quality is therefore to duplicate the points and add a small offset to the copies. This can be done by running the preprocess_pointcloud executable. Usage:

cd ADOP
# Double the number of points of the tt_train scene
./build/bin/preprocess_pointcloud --scene_path scenes/tt_train --point_factor 2

The Tanks and Temples scenes included in the supplementary material have been processed with the following settings:

Scene           point_factor
tt_playground   2
tt_lighthouse   4
tt_m60          2
tt_train        3
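
If you have these scenes unpacked under scenes/, the table corresponds to the following commands (the folder names other than tt_train are assumptions based on the naming above):

./build/bin/preprocess_pointcloud --scene_path scenes/tt_playground --point_factor 2
./build/bin/preprocess_pointcloud --scene_path scenes/tt_lighthouse --point_factor 4
./build/bin/preprocess_pointcloud --scene_path scenes/tt_m60 --point_factor 2
./build/bin/preprocess_pointcloud --scene_path scenes/tt_train --point_factor 3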

Other Scenes

To train ADOP on your own scene, you must convert it to the following format. If you have reconstructed the scene with COLMAP, you can use the colmap2adop converter described above. A minimal sketch for generating the per-image text files follows the layout below.

+ scenes/my_dataset/
  - dataset.ini [required]
        Copy this from a sample scene and update the paths.
  - Camera files [required]
        One ini file for each camera. Check out the sample scenes.
        If all images are captured by the same camera, only one camera file is required!
        camera0.ini
        camera1.ini
        ...
  - point_cloud.ply [required]
        positions, normals
  - images.txt      [required]
        0000.jpg
        0001.jpg
        ...
  - camera_indices.txt [required]
        For each image, the index of the corresponding camera. Example:
        0
        0
        1
        0
        ...
  - masks.txt [optional]
        0000.png
        0001.png
        ...
  - poses.txt [required]
        Camera -> World Transformation
        qx qy qz qw tx ty tz
  - exposure.txt [optional]
        Initialization of the exposure value (EV), for example from the JPEG EXIF data.
        13.1
        14
        10.5
        ...
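
If all images were captured with a single camera, the per-image text files can be generated with a few shell commands. This is only a sketch: it assumes the images are stored in an images/ subfolder of the scene, and dataset.ini, camera0.ini, point_cloud.ply, and poses.txt still have to be provided as described above.

cd scenes/my_dataset
# one image filename per line, in a stable order
ls images | sort > images.txt
# every image uses camera0.ini (camera index 0)
sed 's/.*/0/' images.txt > camera_indices.txt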

COLMAP Usage

When using COLMAP, you need to run the complete sparse reconstruction as well as part of the dense reconstruction to obtain posed images and a dense point cloud.

COLMAP GUI

Using the GUI, we need to do the following steps:

  • File -> New Project: select the images and a file in which to store the database
  • Processing -> Feature Extraction:
    • Usually use RADIAL as the camera model
    • [optional] check shared for all images if all images were captured by the same camera (we support multiple cameras, however this impacts speed)
    • Extract
  • Processing -> Feature Matching:
  • Reconstruction -> Start Reconstruction

(this completes the sparse reconstruction)

  • Reconstruction -> Dense Reconstruction: Select a workspace (I used reco_name/dense/)
    • Undistort
    • Stereo
    • Fusion

COLMAP CLI

Using the command line interface, run the following colmap subcommands in order (a full example pipeline is sketched after the list):

  • feature_extractor

  • exhaustive_matcher (or other matcher)

  • mapper

  • image_undistorter

  • patch_match_stereo

  • stereo_fusion

(we don't need a mesher, as we use the dense point cloud instead of a triangle mesh)
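
The following is a sketch of that pipeline with typical flag names; the dataset path and camera model are placeholders, and options can differ between COLMAP versions, so verify them with colmap help <subcommand>:

cd /path/to/my_dataset            # contains an images/ folder

colmap feature_extractor \
    --database_path database.db \
    --image_path images \
    --ImageReader.camera_model RADIAL \
    --ImageReader.single_camera 1 # only if all images share one camera

colmap exhaustive_matcher --database_path database.db

mkdir -p sparse
colmap mapper \
    --database_path database.db \
    --image_path images \
    --output_path sparse

colmap image_undistorter \
    --image_path images \
    --input_path sparse/0 \
    --output_path dense

colmap patch_match_stereo --workspace_path dense

colmap stereo_fusion \
    --workspace_path dense \
    --output_path dense/fused.ply

The undistorted images end up in dense/images/ and the fused point cloud in dense/fused.ply, which are the inputs that colmap2adop needs.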