U-ARE-ME: Uncertainty-Aware Rotation Estimation in Manhattan Environments

Aalok Patwardhan*, Callum Rhodes*, Gwangbin Bae, Andrew J. Davison. (* indicates equal contribution.)

Dyson Robotics Lab, Imperial College London

This code accompanies the U-ARE-ME paper (2024).

Initial Setup

A CUDA-enabled graphics card is required to run the system out of the box. At least 4 GB of VRAM is required to run the surface normal model.

Python >= 3.9 is required.
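
To quickly check that your machine meets these requirements (optional):

# Check GPU model, driver and available VRAM
nvidia-smi
# Check the Python version (should be 3.9 or newer)
python3 --version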

We recommend using a Python venv to set up your environment. Run the following:

Clone the repository

git clone https://github.com/callum-rhodes/U-ARE-ME.git
cd U-ARE-ME

Create a Python virtual environment

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
. venv/bin/activate

# Install pytorch as per instructions at https://pytorch.org/get-started/locally/
pip3 install torch torchvision torchaudio

# Install the rest of the requirements
pip3 install -r requirements.txt
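
Once the requirements are installed, you can optionally confirm that PyTorch can see your GPU:

# Should print True if CUDA is available to PyTorch
python3 -c "import torch; print(torch.cuda.is_available())"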

We use a pretrained surface normal estimation network (see DSINE (CVPR 2024) for more info). Download the model weights from here. Once downloaded, create a 'checkpoints' directory in the main repository and move the dsine_v00.pt file into the checkpoints folder, e.g.

mkdir checkpoints
mv ~/Downloads/dsine_v00.pt checkpoints/

Run demo

Input can be a video file, a path to images, or a webcam (default). To save the rotation estimates, add the --save_trajectory argument.

You can edit further parameters in the config.yml file.

# Webcam input
python uareme.py
# Video file input
python uareme.py --input <myvideo.mp4>
# Images input (with wildcard pattern)
python uareme.py --input 'path/to/images/patterns/*_img.png' # Wildcard path must be in quotes
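
For example, to run on a video file and also save the estimated rotations:

python uareme.py --input <myvideo.mp4> --save_trajectory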

Faster inference

By default the PyTorch JIT model is used. To speed up inference, we provide utilities for running a precompiled TensorRT model. In testing this gives roughly a 1.5x inference speedup over PyTorch.

Since TensorRT models must be compiled for the specific target hardware, the model needs to be rebuilt on your own machine.

We use torch2trt to do this. Follow the setup instructions from that repository (prerequisites and step 1).
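
For reference, a torch2trt conversion typically follows the pattern below. This is only a minimal sketch with a stand-in torchvision network and an assumed input resolution; utils/create_trt.py performs the actual conversion for the normal model.

import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

# Stand-in network used purely to illustrate the conversion pattern
model = resnet18(weights=None).cuda().eval()
# Dummy input at the resolution the engine will be built for (assumed here)
x = torch.zeros((1, 3, 480, 640), device='cuda')

# Trace the model and build a TensorRT engine for this specific GPU
model_trt = torch2trt(model, [x], fp16_mode=True)

# Compiled engines are saved as state dicts and reloaded via torch2trt's TRTModule
torch.save(model_trt.state_dict(), 'checkpoints/example_trt.pth')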

To create the TensorRT model, run:

python utils/create_trt.py

This will create a new model in 'checkpoints' ending in '..._trt.pth'. To run using this model, change the use_trt parameter in config.yml to True.
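
For example, in config.yml (only the relevant key is shown; other parameters keep their defaults):

# config.yml
use_trt: True

Then run python uareme.py as normal and the TensorRT model will be used.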

Citation

If you find this code/work to be useful in your research, please consider citing the following:

U-ARE-ME:

@misc{patwardhan2024uareme,
    title={U-ARE-ME: Uncertainty-Aware Rotation Estimation in Manhattan Environments}, 
    author={Aalok Patwardhan and Callum Rhodes and Gwangbin Bae and Andrew J. Davison},
    year={2024},
    eprint={2403.15583},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

DSINE (CVPR 2024):

@inproceedings{bae2024dsine,
    title={Rethinking Inductive Biases for Surface Normal Estimation},
    author={Gwangbin Bae and Andrew J. Davison},
    booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2024}
}

Acknowledgement

This research has been supported by the EPSRC Prosperity Partnership Award with Dyson Technology Ltd.
