
Panoramic Depth Calibration

Official PyTorch implementation of Calibrating Panoramic Depth Estimation for Practical Localization and Mapping (ICCV 2023) [Paper] [Video].

Our method calibrates a pre-trained panoramic depth estimation network to new, unseen domains using test-time adaptation. The resulting network can be used for downstream tasks such as visual navigation or map-free localization. Below we show a qualitative sample in which our adaptation scheme substantially improves depth predictions amidst salt-and-pepper noise.

In this repository, we provide the implementation and instructions for running our calibration method. If you have any questions regarding the implementation, please leave an issue or contact 82magnolia@snu.ac.kr.

Installation

GPUs supporting CUDA < 11.3

First, set up a conda environment.

conda create -n pytorch3d python=3.9
conda activate pytorch3d

Then, follow the instructions from PyTorch3D to install the library. Note that PyTorch3D should be installed with the following command only after all the other dependencies are installed.

conda install pytorch3d -c pytorch3d

Then, install other dependencies with pip install -r requirements.txt.

GPUs supporting CUDA >= 11.3

For GPUs supporting CUDA 11.3 or greater (e.g., RTX 3090), installation is much more straightforward. Run the following sequence of commands.

conda create -n pytorch3d python=3.9
conda activate pytorch3d
conda install -c pytorch pytorch=1.11.0 torchvision cudatoolkit=11.3
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py39_cu113_pyt1110/download.html
pip install -r requirements.txt
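
To verify the installation, the quick check below (a minimal sketch, not part of the repository) confirms that PyTorch sees the GPU and that PyTorch3D imports cleanly.

# Sanity check: verify that PyTorch was built with CUDA support and that
# PyTorch3D imports without version mismatches.
import torch
import pytorch3d

print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
print(f"PyTorch3D {pytorch3d.__version__}")
assert torch.cuda.is_available(), "No CUDA device visible; check driver and cudatoolkit versions"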

Dataset Preparation (Stanford 2D-3D-S & OmniScenes)

First, download the panorama images (pano) and poses (pose) from the following link (download the one without XYZ), and the point cloud (pcd_not_aligned) from the following link. Also, download the 3D line segments through the following link. Then, place the data in the directory structure below. A layout-verification sketch follows the tree.

panoramic-depth-calibration/data
└── stanford (Stanford 2D-3D-S Dataset)
    ├── pano (panorama images)
    │   ├── area_1
    │   │  └── *.png
    │   ⋮
    │   │
    │   └── area_6
    │       └── *.png
    ├── pcd (point cloud data)
    │   ├── area_1
    │   │   └── *.txt
    │   ⋮
    │   │
    │   └── area_6
    │       └── *.txt
    └── pose (json files containing ground truth camera pose)
        ├── area_1
        │   └── *.json
        ⋮
        │
        └── area_6
            └── *.json
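
Before running calibration, one can verify that the data matches the structure above; the snippet below is a hypothetical helper (not part of the repository) that counts files per area.

# Hypothetical helper: count panoramas, point clouds, and pose files per
# area to confirm the Stanford 2D-3D-S directory layout shown above.
from pathlib import Path

root = Path("data/stanford")
for area in sorted((root / "pano").iterdir()):
    n_pano = len(list(area.glob("*.png")))
    n_pcd = len(list((root / "pcd" / area.name).glob("*.txt")))
    n_pose = len(list((root / "pose" / area.name).glob("*.json")))
    print(f"{area.name}: {n_pano} panos, {n_pcd} point clouds, {n_pose} poses")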

To obtain results on OmniScenes, please refer to the download instructions below. Note that we use the old version of OmniScenes in this repository. In addition, download the 3D line segments through the following link. Then, place the data in the directory structure below.

panoramic-depth-calibration/data
└── omniscenes (OmniScenes Dataset)
    ├── change_handheld_pano (panorama images)
    │   ├── handheld_pyebaekRoom_1_scene_2 (scene folder)
    │   │  └── *.jpg
    │   ⋮
    │   │
    │   └── handheld_weddingHall_1_scene_2 (scene folder)
    │       └── *.jpg
    ├── change_handheld_pose (json files containing ground truth camera pose)
    │   ├── handheld_pyebaekRoom_1_scene_2 (scene folder)
    │   │   └── *.json
    │   ⋮
    │   │
    │   └── handheld_weddingHall_1_scene_2 (scene folder)
    │       └── *.json
    ⋮
    └── pcd (point cloud data)
        ├── pyebaekRoom_1.txt
        │
        ⋮
        │
        └── weddingHall_1.txt
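
The point cloud .txt files can be inspected directly; the sketch below assumes each row stores whitespace-separated values beginning with XYZ coordinates, which is an assumption to verify against the downloaded files.

# Minimal sketch for inspecting an OmniScenes point cloud. Assumes one
# point per row with whitespace-separated values starting with XYZ
# (verify against the actual files before relying on this).
import numpy as np

pcd = np.loadtxt("data/omniscenes/pcd/pyebaekRoom_1.txt")
print(f"{pcd.shape[0]} points, {pcd.shape[1]} values per point")
print("XYZ min:", pcd[:, :3].min(axis=0), "max:", pcd[:, :3].max(axis=0))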

Running

Pre-trained model download

To get started, download the pre-trained depth estimation model from the following link. Then, run the following commands to place the model weights in the repository.

cd ~/Projects/panoramic-depth-calibration/  # Assume that the code repository is situated here
mkdir pretrained_depth_models
mv ~/Downloads/unet_release.pth pretrained_depth_models/  # Assume model weights are downloaded here
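
To confirm the download, one can peek at the checkpoint contents; whether unet_release.pth is a bare state dict or a wrapper dictionary is an assumption here, so adjust as needed.

# Rough sketch: load the checkpoint on CPU and list a few parameter names.
# Whether the file is a bare state_dict or a wrapper dict is an assumption.
import torch

ckpt = torch.load("pretrained_depth_models/unet_release.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for i, (name, param) in enumerate(state_dict.items()):
    print(name, tuple(param.shape))
    if i >= 4:
        break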

Online calibration

Run the following command for Stanford 2D-3D-S.

mkdir log
python main.py --config config/stanford.ini --log log/LOG_FOLDER

Similarly, run the following command for OmniScenes.

mkdir log
python main.py --config config/omniscenes.ini --log log/LOG_FOLDER

Checking logs

After calibration, the log folder will contain the config.ini file used to run the experiment, the calibrated depth estimation network weights (model.pth), and a log file containing performance metrics. To view the metrics, run the following command.

python process_logger.py log/LOG_FOLDER/result.pkl
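
Alternatively, the raw log can be opened directly with pickle; the internal structure of result.pkl is an assumption here, so inspect the object before indexing into it.

# Direct inspection of the raw metrics log. The internal structure of
# result.pkl is an assumption; inspect before indexing into it.
import pickle

with open("log/LOG_FOLDER/result.pkl", "rb") as f:
    result = pickle.load(f)
print(type(result))
if isinstance(result, dict):
    print(list(result.keys()))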

Running on single scenes

Note that one can also run calibration selectively on designated scenes. For example, to run calibration only on images in room_4 from OmniScenes, run the following command.

python main.py --config config/omniscenes.ini --override 'room_type=room_4' --log log/LOG_FOLDER
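
To sweep multiple scenes, the override can be scripted; the sketch below uses subprocess, and the room names are illustrative examples only.

# Minimal sketch: run calibration once per scene via the --override flag.
# Room names below are illustrative; substitute the scenes you need.
import subprocess

for room in ["room_4", "pyebaekRoom_1"]:
    subprocess.run(
        ["python", "main.py",
         "--config", "config/omniscenes.ini",
         "--override", f"room_type={room}",
         "--log", f"log/{room}"],
        check=True,
    )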

Citation

If you find this repository useful, please cite

@InProceedings{Kim_2023_ICCV,
    author    = {Kim, Junho and Lee, Eun Sun and Kim, Young Min},
    title     = {Calibrating Panoramic Depth Estimation for Practical Localization and Mapping},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {8830-8840}
}
