
Commit

cleaned up references, removed mentioning of module submissions
jzilly committed Aug 5, 2018
1 parent ea32522 commit 8f7d41d
Showing 5 changed files with 1,459 additions and 266 deletions.
50 changes: 25 additions & 25 deletions book/AIDO/10_motivation/20_motivation.md
@@ -114,20 +114,20 @@ Tasks within which code to control multiple robots or agents is submitted while

Participants may submit code to each challenge individually. Tasks proposed in the *AI Driving Olympics* are ordered first by type and second by increasing difficulty, in a way that encourages modular reuse of solutions to previous tasks.

## Submission types
## Submission

There are two ways of participating in the AI Driving Olympics.
<!-- There are two ways of participating in the AI Driving Olympics. -->

### End-to-end type
<!-- ### End-to-end type -->

You are evaluated on the [objectives](#part:aido-rules) defined for the task you are submitting to.

Either you can provide an end-to-end solution; or you can choose from a zoo of architectures, with interchangeable modules. You can write your own modules, or you can use those made available.
<!-- Either you can provide an end-to-end solution; or you can choose from a zoo of architectures, with interchangeable modules. You can write your own modules, or you can use those made available. -->


#### Learning protocol
### Learning protocol

We use this process:
There are different ways to learn from Duckietown data and simulator interactions. We use the following process:

**Learning**

@@ -141,9 +141,9 @@

After the evaluation in the robotarium, the sensorimotor logs as well as the violation-metric annotations are made available to everyone for use in off-policy learning.

In addition, the developer also gets the logs of intermediate signals produced by their agent. (assuming these logs are reasonably small.)
In addition, the developer also gets the logs of intermediate signals produced by their agent (assuming these logs are reasonably small).
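The off-policy learning from released logs described above can be sketched in a heavily simplified form. In this hypothetical example (the log format, array names, and the linear behavior-cloning step are all illustrative, not the actual AIDO interfaces), a policy is fit purely from logged observation/action pairs:

```python
import numpy as np

# Hypothetical log format: each record pairs a sensor observation vector
# with the action the evaluated agent took (names are illustrative only).
rng = np.random.default_rng(0)
observations = rng.normal(size=(500, 8))           # stand-in for sensorimotor logs
actions = observations @ rng.normal(size=(8, 2))   # stand-in for logged actions

# Off-policy learning in its simplest form: behavior cloning via
# least-squares regression from logged observations to logged actions.
W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# The resulting policy maps a new observation to an action using only
# information recovered from the logs.
new_obs = rng.normal(size=(1, 8))
predicted_action = new_obs @ W
```

Real submissions would of course use richer observations and learners; the point is only that the released logs suffice to train without further robotarium interaction.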

### Modules type
<!-- ### Modules type
Another mode of submission is that people can also compete in simpler tasks by creating modules for some well-defined tasks such as:
@@ -159,44 +159,44 @@ This metric is formalized using supervised learning from logged data and unsuperv
* **Supervised learning from logged data:** You are given both the input data and the output data, produced either by a baseline solution or by a ground-truth system.
* **Unsupervised learning from logged data:** You have access to other unlabeled logs.
* **Unsupervised learning from logged data:** You have access to other unlabeled logs. -->

**Evaluation:**
<!-- **Evaluation:**
We compute a set of metrics (using e.g. ground truth data) but these are not used for winning.
The module wins if it is used in an end-to-end entry that wins.
The module wins if it is used in an end-to-end entry that wins. -->




<!-- For a mathematical introduction to solving tasks in the context in robotics, please refer to [](#general_problem). -->


<cite id="bib:Singh">TODO: find paper Singh</cite>
<!-- <cite id="bib:Singh">TODO: find paper Singh</cite> -->

<cite id="bib:darpa_grand_challenge">TODO: find paper DARLA</cite>
<!-- <cite id="bib:darpa_grand_challenge">TODO: find paper DARLA</cite> -->

<cite id="bib:cmu_self_driving_original">TODO: find paper autonomous_cmu</cite>
<!-- <cite id="bib:cmu_self_driving_original">TODO: find paper autonomous_cmu</cite> -->

<cite id="bib:autonomous_germany">TODO: find paper autonomous_germany</cite>
<!-- <cite id="bib:autonomous_germany">TODO: find paper autonomous_germany</cite> -->

<cite id="bib:robotarium">TODO: find paper Robotarium</cite>
<!-- <cite id="bib:robotarium">TODO: find paper Robotarium</cite> -->

<cite id="bib:AprilTags">TODO: find paper AprilTags</cite>
<!-- <cite id="bib:AprilTags">TODO: find paper AprilTags</cite> -->

<cite id="bib:amodeus">TODO: find paper amodeus</cite>
<!-- <cite id="bib:amodeus">TODO: find paper amodeus</cite> -->


<cite id="bib:DARLA">TODO: find paper DARLA</cite>
<!-- <cite id="bib:DARLA">TODO: find paper DARLA</cite> -->

<cite id="bib:overview_autonomous_vision">TODO: find paper overview_autonomous_vision</cite>
<!-- <cite id="bib:overview_autonomous_vision">TODO: find paper overview_autonomous_vision</cite> -->

<cite id="bib:japan_self_driving">TODO: find paper japan_self_driving</cite>
<!-- <cite id="bib:japan_self_driving">TODO: find paper japan_self_driving</cite> -->

<cite id="bib:autonomous_nvidia">TODO: find paper autonomous_nvidia</cite>
<!-- <cite id="bib:autonomous_nvidia">TODO: find paper autonomous_nvidia</cite> -->

<cite id="bib:paull2017duckietown">TODO: find paper paull2017duckietown</cite>
<!-- <cite id="bib:paull2017duckietown">TODO: find paper paull2017duckietown</cite> -->

<cite id="bib:schwarting2018planning">TODO: find paper schwarting2018planning</cite>
<!-- <cite id="bib:schwarting2018planning">TODO: find paper schwarting2018planning</cite> -->

<cite id="bib:Pfeiffer2017FromRobots">TODO: find paper Pfeiffer2017FromRobots</cite>
<!-- <cite id="bib:Pfeiffer2017FromRobots">TODO: find paper Pfeiffer2017FromRobots</cite> -->
2 changes: 1 addition & 1 deletion book/AIDO/10_motivation/35_embodied_tasks.md
@@ -256,7 +256,7 @@ In 1979, Tsugawa et al. \cite{japan_self_driving} developed a first application

Already in 1987, Dickmanns and Zapp developed vision-based driving algorithms based on recursive estimation and were able to drive a car on structured roads at high speeds \cite{autonomous_germany}.

Using machine learning for lane following dates back to 1989, where a neural network was trained to drive the Carnegie Mellon autonomous navigation test vehicle \cite{cmu_self_driving_original}. Similarly, the company Nvidia demonstrated that also in modern time an approach similar to \cite{cmu_self_driving_original} can yield interesting results \cite{autonomous_nvidia}.
Using machine learning for lane following dates back to 1989, when a neural network was trained to drive the Carnegie Mellon autonomous navigation test vehicle \cite{cmu_self_driving_original}. Similarly, the company Nvidia demonstrated that an approach similar to \cite{cmu_self_driving_original} can still yield interesting results in modern times \cite{nvidia_autonomous}.

Driving within Duckietowns has likewise been explored by the creators of Duckietown \cite{paull2017duckietown}, who describe implemented model-based solutions for autonomous driving specifically in the Duckietown environment.

2 changes: 1 addition & 1 deletion book/AIDO/10_motivation/46_task_amod.md
@@ -3,7 +3,7 @@
In this task, we zoom out of Duckietown and switch to a task so big that it is not yet accessible in Duckietown but only in simulation. This is likely to change as Duckietowns across the world experience tremendous growth in population, partly due to the surprising fertility rate of the Duckies themselves and partly due to significant immigration of Duckies with engineering backgrounds.


In this task, we will therefore use the simulation platform AMoDEUs \cite{amodeus} to orchestrate thousands of robotic taxis in cities to pick up and deliver customers in this virtual autonomous mobility-on-demand system composed of **robotaxis**.
In this task, we will therefore use the simulation platform [AMoDEUs](https://github.com/idsc-frazzoli/amodeus) to orchestrate thousands of robotic taxis in cities to pick up and deliver customers in this virtual autonomous mobility-on-demand system composed of **robotaxis**.
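An operational policy for such a fleet can be sketched in a heavily simplified form. The greedy nearest-taxi strategy and all names below are illustrative assumptions for exposition; AMoDEUS defines its own dispatching interfaces:

```python
import math

def greedy_dispatch(taxis, requests):
    """Toy dispatch policy: assign each pending request to the closest idle taxi.

    taxis:    dict mapping taxi id -> (x, y) position
    requests: dict mapping request id -> (x, y) pickup location
    Returns a dict mapping request id -> assigned taxi id.
    """
    assignments = {}
    idle = dict(taxis)  # taxis not yet assigned in this round
    for req_id, pickup in requests.items():
        if not idle:
            break  # more requests than idle taxis; the rest must wait
        best = min(idle, key=lambda t: math.dist(idle[t], pickup))
        assignments[req_id] = best
        del idle[best]  # this taxi is now busy
    return assignments

taxis = {"t1": (0.0, 0.0), "t2": (5.0, 5.0)}
requests = {"r1": (4.0, 4.0), "r2": (1.0, 0.0)}
print(greedy_dispatch(taxis, requests))  # {'r1': 't2', 'r2': 't1'}
```

Even this toy version exposes the core tension: greedily serving the current request can strand later requests, which is exactly why learned policies that anticipate demand are interesting here.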

The design of operational policies for AMoD systems is a challenging task for various reasons, which also make it well suited as a machine learning problem:

