improvements

AndreaCensi committed Oct 12, 2018
1 parent 1f2fdca commit 50d80f9ffa82acb2c8bf82fcbcfb0752425224a7

Maintainer: Andrea Censi, Liam Paull



<p style='text-align: center'>
<img src="AIDO-768x512.png" width="60%"/>
</p>

<abbr>ML</abbr>, deep learning, and deep reinforcement learning have shown remarkable success on a variety of tasks in recent years. However, the ability of these methods to supersede classical approaches on physically embodied agents is still unclear. In particular, it remains to be seen whether learning-based approaches can be completely trusted to control safety-critical systems such as self-driving cars.

This live competition is designed to explore which approaches work best for what tasks and subtasks in a complex robotic system. The participants will need to design algorithms that implement either part or all of the management and navigation required for a fleet of self-driving miniature taxis.

We call this competition the "AI Driving Olympics" because there will be a set of different trials that correspond to progressively more sophisticated behaviors for the cars. These vary in complexity, from the reactive task of lane following to more complex and "cognitive" behaviors, such as obstacle avoidance, point-to-point navigation, and finally coordinating a vehicle fleet while adhering to the entire set of the "rules of the road". We will provide baseline solutions for the tasks based on conventional autonomy architectures; the participants will be free to replace any or all of the components with custom learning-based solutions.

The competition will be live at NIPS, but participants will not need to be physically present---they will just need to send their source code packaged as a Docker image.

There will be qualifying rounds in simulation, similar to the recent DARPA Robotics Challenge, and we will make available the use of "robotariums," which are facilities that allow remote experimentation in a reproducible setting.

**Keywords**: robotics, safety-critical AI, self-driving cars, autonomous mobility on demand, machine learning, artificial intelligence.

**AIDO 1** is in conjunction with NIPS 2018.

**AIDO 2** is in conjunction with ICRA 2019.


## Leaderboards {#book-leaderboard nonumber notoc}

See the leaderboards at the site [`https://challenges.duckietown.org/`](https://challenges.duckietown.org)
to check who is currently winning.

## Book organization {#book-org nonumber notoc}

[](#part:aido-introduction) provides a high-level overview of the scientific motivation and the various
tasks.

[](#part:aido-rules) describes the logistics.

[](#part:manual) is a reference manual for setting up your environment.

[](#part:embodied) describes the embodied tasks.

[](#part:task-amod) describes the AMOD tasks.

[](#part:developers) contains information for challenge organizers.



<!--
### LF: Lane following
<img style="width: 24em" src="https://challenges.duckietown.org/v3/humans/challenges/aido1_lf1-v3/leaderboard/image.png"/>
For more details, see [the online leaderboard](https://challenges.duckietown.org/v3/humans/challenges/aido1_lf1-v3/leaderboard).
### LF: Lane following + vehicles
Not online yet.
### NAV: Navigation
Not online yet.
### AMOD: Simulated Autonomous Mobility on Demand
<img style="width: 24em" src="https://challenges.duckietown.org/v3/humans/challenges/aido1_amod1-v3/leaderboard/image.png"/>
For more details, see [the online leaderboard](https://challenges.duckietown.org/v3/humans/challenges/aido1_amod1-v3/leaderboard).
-->
# Introduction {#part:aido-introduction}

Maintainer: Julian Zilly

# Overview of the competition {#aido-overview status=ready}

What are the *AI Driving Olympics*? The AI Driving Olympics (AIDO) are a set of robotics challenges designed to exemplify the unique characteristics of data science in the context of autonomous driving.

# Embodied individual robot tasks {#embodied_tasks status=ready}


There are three embodied individual robotic tasks.

The Duckietown robotariums will be built in three institutions:

1. At ETH Zürich. The projected size is sufficient to allocate 20 robots continuously running (20 robots on the road + 20 robots in charging stations).
2. At National Chiao Tung University, Taiwan. The size will be similar to the ETH Zürich installation.
3. At the University of Montréal. The size is to be determined; it will likely be smaller than Zürich and Taiwan.


These robotariums will remain available after the competition ends, for follow-up editions,
# Task: Lane following (LF) {#lf status=ready}


The first task of the *AI Driving Olympics* is "Lane following".
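A conventional baseline for lane following is a simple feedback controller acting on an estimated lane pose. The sketch below is illustrative only: it assumes some perception module already provides the lateral offset `d` and heading error `phi`, and the class, parameter names, and gains are ours, not part of the official AI-DO interfaces.

```python
class LaneFollower:
    """Minimal proportional lane-following controller (illustrative sketch).

    Assumes a perception module provides (d, phi): lateral offset from the
    lane center [m] and heading error [rad]. Not the official AI-DO API.
    """

    def __init__(self, k_d=-3.0, k_phi=-1.5, v_nominal=0.2):
        self.k_d = k_d        # gain on lateral offset
        self.k_phi = k_phi    # gain on heading error
        self.v = v_nominal    # constant forward speed [m/s]

    def compute_command(self, d, phi):
        """Return (linear velocity, angular velocity)."""
        omega = self.k_d * d + self.k_phi * phi
        return self.v, omega

follower = LaneFollower()
# Robot slightly left of center (d > 0) and yawed left (phi > 0):
v, omega = follower.compute_command(d=0.05, phi=0.1)
```

A learning-based submission could replace exactly this block, mapping camera images to `(v, omega)` directly.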
# Task: Lane following + Dynamic vehicles (LFV) {#lf_v status=ready}

The second task of the *AI Driving Olympics* is "Lane following with dynamic vehicles".
This task is an extension of Task LF to include additional rules of the road and other moving vehicles and static obstacles.
# Task: Navigation + Dynamic vehicles (NAVV) {#nav_v status=ready}

The third task of the *AI Driving Olympics* is "Navigation with dynamic vehicles".
This task is an extension of task LF and task LFV and now focuses on navigating from location "A" to location "B" within Duckietown. The task also includes a map of Duckietown as input.

# Fleet-level social tasks {#social_tasks status=ready}

This section provides a brief introduction to the fleet-level social task "Autonomous Mobility-on-Demand".

# Task: Autonomous Mobility-on-Demand on AMoDeus {#amod status=ready}

In this task, we zoom out of Duckietown and switch to a task so big that it is not yet accessible in Duckietown, but only in simulation. This is likely to change, as Duckietowns across the world experience tremendous growth in population, partly due to the surprising fertility rate of the Duckies themselves, and partly due to the significant immigration of Duckies with engineering backgrounds.
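A classic baseline for mobility-on-demand dispatching is greedy nearest-vehicle assignment: each open request is matched to the closest idle vehicle. The sketch below is such a baseline under our own simplified data layout; it is not the AMoDeus dispatcher or its API.

```python
import math

def dispatch_greedy(idle_vehicles, requests):
    """Assign each open request to the nearest idle vehicle.

    `idle_vehicles`: dict vehicle_id -> (x, y) position.
    `requests`: list of (request_id, (x, y)) pickup locations.
    Greedy in request order -- a simple, myopic baseline; global
    assignment (e.g. bipartite matching) generally does better.
    """
    assignments = {}
    free = dict(idle_vehicles)               # copy: vehicles still unassigned
    for req_id, pickup in requests:
        if not free:
            break                            # fleet exhausted
        best = min(free, key=lambda v: math.dist(free[v], pickup))
        assignments[req_id] = best
        del free[best]                       # each vehicle serves one request
    return assignments

fleet = {"v1": (0.0, 0.0), "v2": (5.0, 5.0)}
pickups = [("r1", (1.0, 0.0)), ("r2", (4.0, 5.0))]
plan = dispatch_greedy(fleet, pickups)
```

The competition objectives (e.g. wait times versus empty mileage) are then statistics of many such dispatch decisions over a full day of simulated demand.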

Note that these objectives are not merged into one single number.



For general rules about participation and what happens with submitted code, collected logs and evaluated scores, please see [general rules](#general_rules).


This concludes the exposition of the rules of the AI Driving Olympics. Rules and their evaluation are subject to changes to ensure practicability and fairness of scoring.
# General rules {#other-rules status=beta}


## Protocol

For validation of submitted code and evaluation for the finals at NIPS, a surprise environment will be used.

### Submission of entries

Upon [enrollment in the competition](https://www.duckietown.org/research/ai-driving-olympics/get-started), participants can submit their code in the form of a Docker container to a task or module of the AI-DO. Scripts will be provided for creating the container image in a conforming way.

The system will schedule the code to run on the cloud for the challenges selected by the user and, if the simulations pass, on the robotariums.
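Conceptually, the code inside the container is an agent that the evaluation harness drives in a sense-act loop. The skeleton below only illustrates that shape; every name in it is hypothetical, not the AI-DO submission interface, whose actual templates are provided by the competition scripts.

```python
class Agent:
    """Hypothetical sketch of a submission's sense-act loop.

    The evaluation harness repeatedly feeds the agent a camera frame
    and reads back wheel commands. All names here are illustrative
    placeholders, not the real AI-DO interface.
    """

    def step(self, observation):
        # observation: a camera frame, e.g. an HxWx3 grid of pixels.
        # Placeholder policy: drive straight at moderate speed; the
        # returned pair is left/right wheel duty in [-1, 1].
        left, right = 0.4, 0.4
        return left, right

agent = Agent()
command = agent.step(observation=[[0, 0, 0]])  # dummy one-pixel "frame"
```

Participants would replace the placeholder policy with their own perception and control stack, while the container boundary keeps the evaluation reproducible.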

Participants are required to open-source their solutions' source code.

Submitted code will be evaluated in simulation and if sufficient on physical robotariums. Scores and logs generated with submitted code will be made available.

How to get started: http://docs.duckietown.org/AIDO/out/aido_quickstart.html.

### Simulators

When an experiment is run in a validation robotarium, the only output to the user is the set of metrics statistics.
### Leaderboards

After each run in a robotarium, participants can see the metrics statistics on the competition website.

Leaderboards are reset at the beginning of October 2018.

## Eligibility

Employees and affiliates of nuTonomy and Amazon AWS are ineligible to participate in the competition. They may, however, submit baseline solutions, which will be reported in a special leaderboard.

Students of ETH Zürich, Georgia Tech, NCTU, Tsinghua, UCLA, and TTIC are eligible to participate in the competition as part of coursework, provided they are not involved in organizing the competition.

## Intellectual property

After the competition, at the beginning of 2019, we may ask you to contribute a paper to a forthcoming book detailing the lessons learned and the successful approaches in the AI Driving Olympics.