
Introduction


Depth estimation from monocular and stereo images plays an essential role in real-world visual perception systems. Although promising results have been achieved, current learning-based depth estimation models are trained and tested on clean datasets, ignoring out-of-distribution (OoD) situations. Common corruptions, however, do occur in practical scenarios, which is safety-critical for applications like autonomous driving and robot navigation. To raise the community's attention to robust depth estimation, we propose the RoboDepth challenge.

RoboDepth is the first benchmark that probes the OoD robustness of depth estimation models under common corruptions. It covers 18 corruption types in total, grouped into three categories:

     1. Weather and lighting conditions, such as sunny, low-light, fog, frost, snow, and contrast changes.
     2. Sensor failure and movement, such as defocus, glass, motion, and zoom blurs.
     3. Data processing issues, such as Gaussian, impulse, and ISO noise arising from hardware malfunctions.
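As an illustration of how such corruptions can be simulated, the sketch below applies Gaussian noise to a clean image at one of several severity levels, in the style of common corruption benchmarks. The function name, severity values, and value range are assumptions for demonstration only, not the official RoboDepth corruption code.

```python
import numpy as np

def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    """Simulate Gaussian sensor noise on an image in [0, 1].

    A hypothetical sketch of a severity-graded corruption; the sigma
    values here are illustrative, not the benchmark's actual settings.
    """
    # Noise standard deviation grows with the severity level (1-5).
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    noisy = image + np.random.normal(scale=sigma, size=image.shape)
    # Clip back to the valid intensity range.
    return np.clip(noisy, 0.0, 1.0)

# Example: corrupt a dummy 4x4 RGB image at severity 3.
clean = np.full((4, 4, 3), 0.5)
corrupted = gaussian_noise(clean, severity=3)
```

Other corruption families (blurs, weather effects) follow the same pattern: a deterministic or stochastic transform parameterized by a severity level, applied only at test time.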

Contact: robodepth@outlook.com.



Challenge Tracks


Track 1 

Self-Supervised Learning Track

  • Only monocular sequences may be used for model training; no ground-truth depth maps may be used as supervision

Track 2

Fully-Supervised Learning Track

  • Participants may make full use of all released depth maps to train their models


Evaluation


To be available soon.



Awards


Track 1 

Self-Supervised Learning Track

  • 1st prize: $ 500
  • 2nd prize: $ 300
  • 3rd prize: $ 200

Track 2

Fully-Supervised Learning Track

  • 1st prize: $ 500
  • 2nd prize: $ 300
  • 3rd prize: $ 200


Timeline


  • Competition Server Online @ CodaLab

    To be available soon.

  • Release of Training Sets

    To be available soon.

  • Release of Test Sets

    To be available soon.

  • Submission Deadline

    Please remember to include a link to your code in your submission.

  • Award Decision Announcement



Organizers


Lingdong Kong

NUS Computing
lingdong@comp.nus.edu.sg

Hanjiang Hu

CMU, Safe AI Lab
hanjianghu@cmu.edu

Shaoyuan Xie

HUST
shaoyuanxie@hust.edu.cn

Yaru Niu

CMU, Safe AI Lab
yarun@andrew.cmu.edu

Benoit Cottereau

CNRS
benoit.cottereau@cnrs.fr

Lai Xing Ng

A*STAR, I2R
ng_lai_xing@i2r.a-star.edu.sg

Wei Tsang Ooi

NUS Computing
ooiwt@comp.nus.edu.sg

Ding Zhao

CMU, Safe AI Lab
dingzhao@cmu.edu

Hesheng Wang

SJTU
wanghesheng@sjtu.edu.cn



Acknowledgements


Special thanks to Jiaru Wei and Shuai Wang for building up the original template from the SeasonDepth website.



Affiliations


This project is supported by DesCartes, a CNRS@CREATE program on Intelligent Modeling for Decision-Making in Critical Urban Systems.

We are actively looking for more sponsors. If you are interested, please contact robodepth@outlook.com.



© The RoboDepth Organizing Team. All Rights Reserved.