Welcome to the RoboDepth Competition! 🤖
- This is the first challenge on robust depth estimation under corruptions, held in conjunction with the 40th IEEE International Conference on Robotics and Automation (ICRA 2023).
- In this competition, we aim to probe the Out-of-Distribution (OoD) robustness of depth estimation models under common corruptions.
- There are 18 corruption types in total, spanning three perspectives: weather and lighting conditions, sensor failure and movement, and data processing issues.
- There are two tracks in this competition, including self-supervised depth estimation of outdoor scenes (track 1) and fully-supervised depth estimation of indoor scenes (track 2).
This competition is sponsored by Baidu Research, USA.
- 🌐 - Competition page: https://robodepth.github.io.
- 🔧 - Competition toolkit: https://github.com/ldkong1205/RoboDepth.
- 🚘 - Evaluation server (track 1): https://codalab.lisn.upsaclay.fr/competitions/9418.
- 🚖 - Evaluation server (track 2): https://codalab.lisn.upsaclay.fr/competitions/9821.
- Official GitHub account: https://github.com/RoboDepth.
- 📫 - Contact: robodepth@outlook.com.
- [2023.01.01] - Competition launches
- [2023.01.02] - Track 1 (self-supervised depth estimation) starts
- [2023.01.15] - Track 2 (fully-supervised depth estimation) starts
- [2023.05.25] - Competition ends
- [2023.05.29] - Workshop & discussion
- [2023.06.02] - Release of results @ ICRA 2023
In this track, participants are expected to use data from the raw KITTI dataset for model training. You can download this dataset by running:

```shell
wget -i splits/kitti_archives_to_download.txt -P kitti_data/
```

Then unzip with:

```shell
cd kitti_data/
unzip "*.zip"
cd ..
```

Please note that this dataset weighs about **175GB**, so make sure you have enough space to unzip it too!
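If you want to check the available space programmatically before unzipping, here is a minimal sketch (the `kitti_data/` path simply mirrors the download command above):

```python
# Minimal sketch: check free disk space before unzipping the KITTI archives.
# Note: you need room for both the ~175GB of archives and the extracted data.
import shutil

free_gb = shutil.disk_usage("kitti_data/").free / 1024 ** 3
print(f"free space under kitti_data/: {free_gb:.1f} GB")
```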
The training split of this dataset is defined in the `splits/` folder of this codebase. By default, we require all participants to train their depth estimation models on Zhou's subset of the standard Eigen split of KITTI, which is designed for self-supervised monocular training.
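For reference, here is a minimal sketch of loading such a split file; it assumes the MonoDepth2-style layout where `splits/eigen_zhou/train_files.txt` lists one `<folder> <frame_index> <side>` entry per line (the file name and format are assumptions based on that codebase, not guaranteed by this competition):

```python
# Minimal sketch: read a MonoDepth2-style split file (assumed layout).
from pathlib import Path

def load_split(split_file: str):
    """Return a list of (folder, frame_index, side) tuples, one per split entry."""
    entries = []
    for line in Path(split_file).read_text().splitlines():
        if not line.strip():
            continue
        folder, frame_index, side = line.split()
        entries.append((folder, int(frame_index), side))
    return entries

train_files = load_split("splits/eigen_zhou/train_files.txt")
print(f"{len(train_files)} training samples, e.g. {train_files[0]}")
```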
⚠️ Regarding the data augmentation to be adopted during the training phase, please refer to the Terms & Conditions section.
In this track, participants are expected to use our generated data for model evaluation. There are multiple ways to access this evaluation set. In particular, you can download the data from Google Drive via the following link:
🔗 https://drive.google.com/file/d/14Z0k2lhpk0D0pkyzIcHyk4Ce0wS3IcfF/view?usp=sharing.
Alternatively, you can download the data from this CodaLab page. Please note that you need to register for this track before you can access the download page.
This evaluation set weighs about **100MB**. It includes 500 corrupted images, generated under the 18 corruption types mentioned above. In this competition, we will evaluate the model performance using the ground-truth depth of these images. Participants are required to submit their prediction file to the evaluation server of this track. For more details on the submission, please refer to the Submission section.
In this track, participants are expected to use data from the NYU Depth Dataset V2 for model training. You can download this dataset from Google Drive via the following link:
🔗 https://drive.google.com/file/d/1wC-io-14RCIL4XTUrQLk6lBqU2AexLVp/view?usp=sharing.
Alternatively, you can download the data directly to your server by running:

```shell
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1wC-io-14RCIL4XTUrQLk6lBqU2AexLVp' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1wC-io-14RCIL4XTUrQLk6lBqU2AexLVp" -O nyu.zip && rm -rf /tmp/cookies.txt
```

Then unzip with:

```shell
unzip nyu.zip
```
⚠️ Regarding the data augmentation to be adopted during the training phase, please refer to the Terms & Conditions section.
In this track, participants are expected to use our generated data for model evaluation. There are multiple ways to access this evaluation set. In particular, you can download the data from Google Drive via the following link:
🔗 https://drive.google.com/file/d/1HIJxmNBFaHwSUABnkEgnFm9EdBBgvozP/view?usp=sharing.
Alternatively, you can download the data from this CodaLab page. Please note that you need to register for this track before you can access the download page.
This evaluation set weighs about **12MB**. It includes 200 corrupted images, generated under 15 of the mentioned corruption types (`fog`, `frost`, and `snow` are excluded). In this competition, we will evaluate the model performance using the ground-truth depth of these images. Participants are required to submit their prediction file to the evaluation server of this track. For more details on the submission, please refer to the Submission section.
In this track, the participants are expected to submit their predictions to the CodaLab server for model evaluation. Specifically, you can access the server of this track via the following link:
🔗 https://codalab.lisn.upsaclay.fr/competitions/9418.
In order to make a successful submission and evaluation, you need to follow these instructions:
[Registration]
You will need to register for this track on CodaLab before you can make a submission. To achieve this, first apply for a CodaLab account with your email if you do not have one. Then, go to the server page of this track and press `Participate`; you will see a `Sign In` button. Click it to register.
[File Preparation]
You will need to prepare the model prediction file for submission. Specifically, the evaluation server of this track accepts a `.zip` file of your model predictions in `numpy` array format. You can follow the example below, which is adapted from the evaluation code of MonoDepth2:
- Step 1: Generate your model predictions with:

```python
pred_disps = []
with torch.no_grad():
    for data in dataloader:
        input_color = data[("color", 0, -1)].to(device)
        output = depth_decoder(encoder(input_color))
        pred_disp, _ = disp_to_depth(output[("disp", 0)], opt.min_depth, opt.max_depth)
        pred_disp = pred_disp.cpu()[:, 0].numpy()
        pred_disps.append(pred_disp)
```

- Step 2: After evaluating every sample in the evaluation set, save the prediction file with:

```python
output_path = os.path.join(opt.save_pred_path, "disp.npy")
np.save(output_path, pred_disps)
```

- Step 3: Compress the saved `.npy` file with:

```shell
zip disp.zip disp.npy
```

- Step 4: Download `disp.zip` from your computing machine.
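Before uploading, you may want to run a quick sanity check on the saved file; the sketch below is illustrative (the count of 500 follows from the size of the Track 1 evaluation set described above):

```python
# Minimal sanity check for the Track 1 submission file (illustrative, not official tooling).
import numpy as np

pred_disps = np.load("disp.npy", allow_pickle=True)
# The Track 1 evaluation set contains 500 corrupted images, so 500 predictions are expected.
assert len(pred_disps) == 500, f"expected 500 predictions, got {len(pred_disps)}"
print("first prediction shape:", np.asarray(pred_disps[0]).shape)
```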
[Submission & Evaluation]
You will need to submit your `disp.zip` file manually to the evaluation server. To achieve this, go to the server page of this track and press `Participate`; you will see a `Submit / View Results` button. Click it for submission. You are encouraged to fill in the submission info with your team name, method name, and method description. Then, click the `Submit` button and select your `disp.zip` file. After the file is uploaded successfully, the server will automatically evaluate the performance of your submission and put the results on the leaderboard.
⚠️ Do not close the page when you are uploading the prediction file.
[View Result]
You can view your scores by pressing the `Results` button. Following the same configuration as MonoDepth2, we evaluate the model performance with 7 metrics: `abs_rel`, `sq_rel`, `rmse`, `rmse_log`, `a1`, `a2`, and `a3`.
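For reference, a minimal sketch of how these seven metrics are commonly computed is given below; it follows the standard definitions used in the MonoDepth2 evaluation and is illustrative rather than the server's exact code:

```python
import numpy as np

def compute_errors(gt, pred):
    """Standard monocular depth metrics (illustrative sketch, not the official server code)."""
    # threshold accuracies: fraction of pixels with max(gt/pred, pred/gt) below 1.25^k
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()
    a2 = (thresh < 1.25 ** 2).mean()
    a3 = (thresh < 1.25 ** 3).mean()

    # error metrics in linear and log space
    rmse = np.sqrt(((gt - pred) ** 2).mean())
    rmse_log = np.sqrt(((np.log(gt) - np.log(pred)) ** 2).mean())
    abs_rel = (np.abs(gt - pred) / gt).mean()
    sq_rel = (((gt - pred) ** 2) / gt).mean()

    return abs_rel, sq_rel, rmse, rmse_log, a1, a2, a3
```

For self-supervised methods, predictions are typically aligned to the ground truth via median scaling before these metrics are computed, as in the MonoDepth2 evaluation.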
In this track, the participants are expected to submit their predictions to the CodaLab server for model evaluation. Specifically, you can access the server of this track via the following link:
🔗 https://codalab.lisn.upsaclay.fr/competitions/9821.
In order to make a successful submission and evaluation, you need to follow these instructions:
[Registration]
You will need to register for this track on CodaLab before you can make a submission. To achieve this, first apply for a CodaLab account with your email if you do not have one. Then, go to the server page of this track and press `Participate`; you will see a `Sign In` button. Click it to register.
[File Preparation]
You will need to prepare the model prediction file for submission. Specifically, the evaluation server of this track accepts a `.zip` file of your model predictions in `numpy` array format. You can follow the example below, which is adapted from the evaluation code of the Monocular-Depth-Estimation-Toolbox:
- Step 1: Generate your model predictions with:

```python
pred_disps = []
for batch_indices, data in zip(loader_indices, data_loader):
    with torch.no_grad():
        result = model(return_loss=False, rescale=True, **data)
    pred_disps.append(result)
```

Please note that you will need to sort the file paths before inference. As pointed out by @Zhyever in this issue, you can achieve this via the following line of code:

```python
img_infos = sorted(img_infos, key=lambda x: x['filename'])
```

- Step 2: After evaluating every sample in the evaluation set, save the prediction file with:

```python
output_path = os.path.join(opt.save_pred_path, "disp.npz")
np.savez_compressed(output_path, data=pred_disps)
```

- Step 3: Compress the saved `.npz` file with:

```shell
zip disp.zip disp.npz
```

- Step 4: Download `disp.zip` from your computing machine.
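As with Track 1, a quick sanity check before uploading can catch formatting issues early; the sketch below is illustrative (the count of 200 follows from the size of the Track 2 evaluation set described above):

```python
# Minimal sanity check for the Track 2 submission file (illustrative, not official tooling).
import numpy as np

with np.load("disp.npz", allow_pickle=True) as f:
    pred_disps = f["data"]
# The Track 2 evaluation set contains 200 corrupted images, so 200 predictions are expected.
assert len(pred_disps) == 200, f"expected 200 predictions, got {len(pred_disps)}"
print("first prediction shape:", np.asarray(pred_disps[0]).shape)
```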
[Submission & Evaluation]
You will need to submit your `disp.zip` file manually to the evaluation server. To achieve this, go to the server page of this track and press `Participate`; you will see a `Submit / View Results` button. Click it for submission. You are encouraged to fill in the submission info with your team name, method name, and method description. Then, click the `Submit` button and select your `disp.zip` file. After the file is uploaded successfully, the server will automatically evaluate the performance of your submission and put the results on the leaderboard.
⚠️ Do not close the page when you are uploading the prediction file.
[View Result]
You can view your scores by pressing the `Results` button. Following the same configuration as the Monocular-Depth-Estimation-Toolbox, we evaluate the model performance with 9 metrics: `a1`, `a2`, `a3`, `abs_rel`, `sq_rel`, `rmse`, `rmse_log`, `log10`, and `silog`.
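The first seven of these metrics follow the same definitions as in Track 1. For the remaining two, a commonly used formulation is sketched below (illustrative, following widely used definitions rather than necessarily the server's exact code):

```python
import numpy as np

def log10_and_silog(gt, pred):
    """Illustrative sketch of the two additional Track 2 metrics (not the official server code)."""
    # mean absolute error in log10 space
    log10 = np.abs(np.log10(gt) - np.log10(pred)).mean()
    # scale-invariant logarithmic error, commonly reported scaled by 100
    err = np.log(pred) - np.log(gt)
    silog = np.sqrt((err ** 2).mean() - err.mean() ** 2) * 100
    return log10, silog
```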
This competition is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:
- That the data in this competition comes “AS IS”, without express or implied warranty. Although every effort has been made to ensure accuracy, we do not accept any responsibility for errors or omissions.
- That you may not use the data in this competition or any derivative work for commercial purposes such as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain.
- That you include a reference to RoboDepth (including the benchmark data and the specially generated data for academic challenges) in any work that makes use of the benchmark. For research papers, please cite our preferred publications as listed on our webpage.
To ensure a fair comparison among all participants, we require:
- All participants must follow the exact same data configuration when training and evaluating their algorithms. Please do not use any public or private datasets other than those specified for model training.
- The theme of this competition is to probe the out-of-distribution robustness of depth estimation models. Therefore, any use of the 18 corruption types designed in this benchmark during model training is strictly prohibited, including any atomic operation that forms part of any one of the mentioned corruptions.
- For Track 1: Please stick with the default data augmentations used in the MonoDepth2 codebase.
- For Track 2: Please stick with the default data augmentations used in the Monocular-Depth-Estimation-Toolbox codebase.
- To ensure the above two rules are followed, each participant is requested to submit the code with reproducible results before the final result is announced; the code is for examination purposes only and we will manually verify the training and evaluation of each participant's model.
😊 If you have any questions or concerns, please get in touch with us at robodepth@outlook.com.