
Introduction


Depth estimation from monocular and stereo images plays an essential role in real-world visual perception systems. Although promising results have been achieved, current learning-based depth estimation models are trained and tested on clean datasets, overlooking out-of-distribution (OoD) situations. Common corruptions, however, tend to occur in practical scenarios and are safety-critical for applications such as autonomous driving and robot navigation. To draw the community's attention to robust depth estimation, we propose the RoboDepth challenge.

Our RoboDepth is the first benchmark that probes the OoD robustness of depth estimation models under common corruptions. It covers 18 corruption types in total, spanning three perspectives (a simulation sketch follows the list):

     1. Weather and lighting conditions, such as bright sunlight, low light, fog, frost, snow, and contrast shifts.
     2. Sensor failure and movement, such as blurs (defocus, glass, motion, zoom) caused by sensor motion or defects.
     3. Data processing issues, such as noise (Gaussian, impulse, ISO) arising from hardware malfunctions.
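
For illustration, the snippet below shows how corruptions of this kind can be simulated with the public imagecorruptions Python package, which implements ImageNet-C style corruptions. This is a minimal sketch only: the official RoboDepth toolkit defines the exact corruption set and severity levels used in this challenge, so please rely on the toolkit for any challenge-related data generation.

    # A minimal sketch of simulating common corruptions, using the public
    # `imagecorruptions` package (pip install imagecorruptions). The exact
    # corruption set and severities of the RoboDepth toolkit may differ.
    import numpy as np
    from imagecorruptions import corrupt

    # A dummy RGB frame standing in for a driving-scene image (H x W x 3, uint8).
    image = np.random.randint(0, 256, size=(192, 640, 3), dtype=np.uint8)

    # One representative corruption per category listed above.
    for name in ["fog", "motion_blur", "gaussian_noise"]:
        for severity in range(1, 6):  # five severity levels, as in ImageNet-C
            corrupted = corrupt(image, corruption_name=name, severity=severity)
            print(name, severity, corrupted.shape)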

Toolkit: https://github.com/ldkong1205/RoboDepth.
Contact: robodepth@outlook.com.

This competition is sponsored by Baidu Research, USA.



Challenge Tracks


Track 1 

Self-Supervised Learning Track

  • Only monocular video sequences may be used for model training, without any supervision from depth maps (a common training objective is sketched below).
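
As a concrete illustration of this setting, the sketch below shows the photometric reconstruction error commonly used in self-supervised monocular methods (e.g., Monodepth2-style pipelines): adjacent frames are warped into the target view with the predicted depth and relative pose, and a weighted SSIM + L1 error on the reconstruction supervises the network. The warping step is omitted here; this function is a sketch of the idea, not part of any official baseline.

    # A sketch of the weighted SSIM + L1 photometric error used in
    # self-supervised monocular depth estimation. `pred` is a target frame
    # reconstructed by warping a neighboring frame with the predicted depth
    # and pose; `target` is the real target frame. Both are (B, 3, H, W)
    # tensors with values in [0, 1].
    import torch
    import torch.nn.functional as F

    def photometric_loss(pred, target, alpha=0.85):
        l1 = (pred - target).abs().mean(dim=1, keepdim=True)
        # 3x3 average-pooled SSIM, as popularized by Monodepth-style methods.
        mu_x = F.avg_pool2d(pred, 3, 1, 1)
        mu_y = F.avg_pool2d(target, 3, 1, 1)
        sigma_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
        sigma_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
        sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
        c1, c2 = 0.01 ** 2, 0.03 ** 2
        ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
        dssim = torch.clamp((1 - ssim) / 2, 0, 1).mean(dim=1, keepdim=True)
        return (alpha * dssim + (1 - alpha) * l1).mean()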

Track 2

Fully-Supervised Learning Track

  • Participants can make full use of all released depth maps to train their models (a common supervised objective is sketched below).
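
For reference, one widely used supervised objective is the scale-invariant log-depth error of Eigen et al. (2014), sketched below. The challenge does not prescribe a particular training loss; this is an illustration only.

    # A sketch of the scale-invariant log (SILog) loss of Eigen et al. (2014),
    # a common choice for fully-supervised depth training. Predictions and
    # ground truth are positive depth maps; pixels without ground truth
    # (gt == 0) are masked out.
    import torch

    def silog_loss(pred_depth, gt_depth, lam=0.85):
        mask = gt_depth > 0
        d = torch.log(pred_depth[mask]) - torch.log(gt_depth[mask])
        return torch.sqrt((d ** 2).mean() - lam * d.mean() ** 2)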


Submission & Evaluation


In both tracks, participants are expected to submit their predictions to the CodaLab server for model evaluation. To make a successful submission and evaluation, please follow the instructions listed here, including: (1) registration, (2) file preparation, (3) submission & evaluation, and (4) result viewing.
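
As a rough illustration of step (2), the sketch below packages per-image predictions into a zip archive for upload. The file names and array layout here are assumptions made for illustration only; the competition toolkit specifies the actual submission format expected by the CodaLab server.

    # A hypothetical sketch of packaging depth predictions for a CodaLab
    # upload. The archive layout (`pred.npz` inside `submission.zip`) is an
    # assumption for illustration; follow the toolkit instructions for the
    # actual format.
    import zipfile
    import numpy as np

    def package_submission(predictions, out_path="submission.zip"):
        # Stack the per-image predictions into one array and compress it.
        np.savez_compressed("pred.npz", pred=np.stack(predictions, axis=0))
        with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.write("pred.npz")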



Awards


Track 1 

Self-Supervised Learning Track

  • 1st prize: $500
  • 2nd prize: $300
  • 3rd prize: $200

Track 2

Fully-Supervised Learning Track

  • 1st prize: $500
  • 2nd prize: $300
  • 3rd prize: $200


Timeline


  • Release of Training and Evaluation Data

    Download the data from the competition toolkit. 

  • Competition Server Online @ CodaLab

  • Submission Deadline

    Don't forget to include the code link in your submissions. 

  • Award Decision Announcement

    Announced in conjunction with the ICRA 2023 conference.



Organizers


Lingdong Kong

NUS Computing
lingdong@comp.nus.edu.sg

Hanjiang Hu

CMU, Safe AI Lab
hanjianghu@cmu.edu

Shaoyuan Xie

HUST
shaoyuanxie@hust.edu.cn

Yaru Niu

CMU, Safe AI Lab
yarun@andrew.cmu.edu

Benoit Cottereau

CNRS
benoit.cottereau@cnrs.fr

Lai Xing Ng

A*STAR, I2R
ng_lai_xing@i2r.a-star.edu.sg

Ding Zhao

CMU, Safe AI Lab
dingzhao@cmu.edu

Hesheng Wang

SJTU
wanghesheng@sjtu.edu.cn

Wei Tsang Ooi

NUS Computing
ooiwt@comp.nus.edu.sg



Acknowledgements


Special thanks to Jiarun Wei and Shuai Wang for building the original template, adapted from the SeasonDepth website.



Sponsor


We thank Baidu Research for their support of this competition.

Affiliations


This project is supported by DesCartes, a CNRS@CREATE program on Intelligent Modeling for Decision-Making in Critical Urban Systems.

We are actively looking for more sponsors. If you are interested, please contact robodepth@outlook.com.




Terms & Conditions


This competition is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:

1. That the data in this competition comes “AS IS”, without express or implied warranty. Although every effort has been made to ensure accuracy, we do not accept any responsibility for errors or omissions.
2. That you may not use the data in this competition or any derivative work for commercial purposes, such as licensing or selling the data, or using the data with the intent of procuring commercial gain.
3. That you include a reference to RoboDepth (including the benchmark data and the specially generated data for academic challenges) in any work that makes use of the benchmark. For research papers, please cite our preferred publications as listed on our webpage.

To ensure a fair comparison among all participants, we require:

1. All participants must follow the exact same data configuration when training and evaluating their algorithms. Please do not use any public or private datasets other than those specified for model training.
2. The theme of this competition is to probe the out-of-distribution robustness of depth estimation models. Therefore, any use of the 18 corruption types designed in this benchmark is strictly prohibited, including any atomic operation that composes any of these corruptions.
3. To ensure the above two rules are followed, each participant is requested to submit code with reproducible results before the final results are announced; the code is for examination purposes only, and we will manually verify the training and evaluation of each participant's model.


© The RoboDepth Organizing Team. All Rights Reserved.