Learning-to-Drive (L2D) Challenge
This workshop hosts a challenge track on Learning to Drive with camera videos, navigation maps, and a vehicle's CAN bus signals. Specifically, given a set of sensor inputs, the driving model must learn to predict future driving maneuvers, consisting of the steering wheel angle and the vehicle speed at a point in the future. Participants are allowed and encouraged to explore the various sensor modalities supplied through our challenge dataset to achieve this goal. Submitted methods are evaluated on their Mean Squared Error (MSE) score and their novelty. We will also provide the results of several published models for comparison. Awards will be given to the challenge winners.
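As an illustration of the evaluation criterion, the sketch below computes the per-target mean squared error in plain NumPy. The function and array names are hypothetical, and the exact prediction horizon, aggregation, and any per-channel weighting are defined by the official challenge evaluation, not by this sketch.

```python
import numpy as np

def l2d_mse(pred_steer, gt_steer, pred_speed, gt_speed):
    """Mean squared error over the two predicted maneuver targets.

    All inputs are 1-D arrays aligned per future time step: predicted
    vs. ground-truth steering wheel angle and vehicle speed.  The
    official leaderboard script defines the exact horizon and how the
    two errors are combined; this only shows the basic metric.
    """
    mse_steer = np.mean((pred_steer - gt_steer) ** 2)
    mse_speed = np.mean((pred_speed - gt_speed) ** 2)
    return mse_steer, mse_speed
```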
Our challenge dataset offers a wide range of sensors (a hypothetical per-sample layout is sketched after this list):
- Multiple camera configurations, ranging from a single front-facing camera to multiple views providing full surround vision.
- A standard industrial map with over 21 common road attributes, e.g. distance to road furniture, road curvature, intersection geometry, and speed limits.
- Visual and numeric route planning modules.
- Odometry via a 3-axis accelerometer and gyroscope IMU, plus GPS coordinates.
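To make the modality list concrete, one training sample could be organized as a record like the following. All field names and values here are purely illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical layout of one training sample; field names and values
# are illustrative only and do not reflect the dataset's real schema.
sample = {
    "cameras": {                # one or more RGB views
        "front": "front_000123.jpg",
        # optional extra views for full surround vision
        "left": "left_000123.jpg",
        "right": "right_000123.jpg",
        "rear": "rear_000123.jpg",
    },
    "map_attributes": {         # a subset of the 21+ road attributes
        "road_curvature": 0.012,
        "speed_limit_kmh": 50.0,
        "dist_to_intersection_m": 84.3,
    },
    "route_planner": {          # visual and/or numeric planning input
        "rendered_map": "route_view_000123.png",
        "heading_deg": 12.5,
    },
    "imu": {"accel": (0.1, 0.0, 9.8), "gyro": (0.0, 0.01, 0.0)},
    "gps": (47.3769, 8.5417),
    "targets": {                # labels at a future time step
        "steering_wheel_angle_deg": -3.2,
        "vehicle_speed_kmh": 38.0,
    },
}
```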
The dataset and baseline driving models are described in the following papers:
- “End-to-End Learning of Driving Models with Surround-View Cameras and Route Planners”, Simon Hecker, Dengxin Dai, Luc Van Gool, ECCV 2018.
- “Learning Accurate, Comfortable and Human-like Driving”, Simon Hecker, Dengxin Dai, Luc Van Gool, arXiv 2019.
The L2D challenge has moved to AIcrowd. Please find all registration details here to participate! There will be awards for the challenge winners!
* Challenge participants are required to submit a document of at least 4 pages, using the ICCV 2019 paper template, describing their data processing steps, network architectures, other implementation details such as training hyper-parameters, and their results.
* Please submit the document through the CMT system at https://cmt3.research.microsoft.com/ADW2019 by 20-10-2019 [11:59 p.m. Pacific Standard Time] to be eligible for the challenge awards.
* Including the names of the authors and their affiliations is allowed.
* Please be aware that the novelty of the method is also evaluated. The winners will be invited to present their work at the workshop.