book

AndreaCensi committed Jun 22, 2018
1 parent 71dcad8 commit 474127f2c73ec167724f27af59d8366fc6c1c7c4
Showing with 3,003 additions and 0 deletions.
  1. +3 −0 book/learning_materials/00_title_learning_materials.md
  2. +3 −0 book/learning_materials/09_part_autonomy.md
  3. +46 −0 book/learning_materials/10_autonomy/19_autonomy_vehicles/autonomous_vehicles.md
  4. +75 −0 book/learning_materials/10_autonomy/19_autonomy_vehicles/autonomous_vehicles.slides.md
  5. +3 −0 book/learning_materials/10_autonomy/19_autonomy_vehicles/blind-person.png
  6. +3 −0 book/learning_materials/10_autonomy/19_autonomy_vehicles/image-20180521163335599.png
  7. +199 −0 book/learning_materials/10_autonomy/20_autonomy_overview/20_autonomy_overview.md
  8. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/autonomy_overview_block.jpg
  9. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/autonomy_overview_block.svg
  10. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/camera.svg
  11. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/gps.svg
  12. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/imu.svg
  13. BIN book/learning_materials/10_autonomy/20_autonomy_overview/assets/line_detections.pdf
  14. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/radar.svg
  15. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/robot_pose_2d.svg
  16. BIN book/learning_materials/10_autonomy/20_autonomy_overview/assets/stop_sign_detection.pdf
  17. +3 −0 book/learning_materials/10_autonomy/20_autonomy_overview/assets/velo.svg
  18. +8 −0 book/learning_materials/10_autonomy/21_amod/AMOD_intro.md
  19. +133 −0 book/learning_materials/10_autonomy/21_amod/AMOD_intro.slides.md
  20. +3 −0 book/learning_materials/10_autonomy/21_amod/assets/image-20180522140641324.png
  21. +3 −0 book/learning_materials/10_autonomy/21_amod/assets/image-20180522140658996.png
  22. +3 −0 book/learning_materials/10_autonomy/21_amod/assets/image-20180522140706254.png
  23. +3 −0 book/learning_materials/10_autonomy/21_amod/assets/image-20180522140718141.png
  24. +111 −0 book/learning_materials/10_autonomy/40_representations.md
  25. +17 −0 book/learning_materials/10_autonomy/autonomy.bib
  26. +3 −0 book/learning_materials/20_part_systems.md
  27. +3 −0 book/learning_materials/21_systems/30_modern_robotic_systems/example_cloud_architecture.png
  28. +193 −0 book/learning_materials/21_systems/30_modern_robotic_systems/modern_robotic_systems.md
  29. +137 −0 ...materials/21_systems/30_software_architectures/20_event_based/20_event_based_signal_processing.md
  30. +46 −0 book/learning_materials/21_systems/30_software_architectures/20_event_based/event_based.bib
  31. +60 −0 book/learning_materials/21_systems/30_software_architectures/35_middleware.md
  32. +47 −0 book/learning_materials/21_systems/30_software_architectures/50_contracts.md
  33. +186 −0 book/learning_materials/21_systems/30_software_architectures/60_configuration.md
  34. BIN book/learning_materials/21_systems/35_system_architectures/1881-1877-1-PB.pdf
  35. BIN book/learning_materials/21_systems/35_system_architectures/albus02rcs.pdf
  36. BIN book/learning_materials/21_systems/35_system_architectures/albus06learning.pdf
  37. +1 −0 book/learning_materials/21_systems/35_system_architectures/system_architectures.bib
  38. +125 −0 book/learning_materials/21_systems/35_system_architectures/system_architectures.md
  39. BIN book/learning_materials/21_systems/35_system_architectures/vehicles_chap1.pdf
  40. +65 −0 book/learning_materials/21_systems/36_autonomy_architectures/36_autonomy_architectures.md
  41. +3 −0 book/learning_materials/49_perception_basics.md
  42. +20 −0 book/learning_materials/50_perception_basics/10_CV_basics.md
  43. +18 −0 book/learning_materials/50_perception_basics/15_camera_geometry.md
  44. +15 −0 book/learning_materials/50_perception_basics/18_calibration.md
  45. +18 −0 book/learning_materials/50_perception_basics/19_image_filtering.md
  46. +3 −0 book/learning_materials/50_perception_basics/20_illumination.md
  47. +93 −0 book/learning_materials/50_perception_basics/27_anti_instagram/27_anti_instagram.md
  48. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/ff.png
  49. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/ff_closed.png
  50. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/ff_closed_ff.png
  51. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/ff_closed_ff_clipped.png
  52. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/grad.png
  53. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/grad_cnt.png
  54. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/grad_cnt_masked.png
  55. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/grad_cnt_masked_th.png
  56. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/grad_th.png
  57. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/grad_th_dilated.png
  58. +3 −0 book/learning_materials/50_perception_basics/27_anti_instagram/grad_th_dilated_zeroed.png
  59. +16 −0 book/learning_materials/50_perception_basics/30_line_detection.md
  60. +3 −0 book/learning_materials/50_perception_basics/40_feature_extraction.md
  61. +3 −0 book/learning_materials/50_perception_basics/50_place_recognition.md
  62. +26 −0 book/learning_materials/50_perception_basics/computer-vision-bib.bib
  63. +13 −0 book/learning_materials/60_filtering/10_filtering1.md
  64. +11 −0 book/learning_materials/60_filtering/20_filtering2.md
  65. +3 −0 book/learning_materials/69_planning.md
  66. +3 −0 book/learning_materials/70_modeling_control.md
  67. +668 −0 book/learning_materials/71_duckiebot_modeling/10_basic_car_modeling-kin-and-dyn.md
  68. +65 −0 book/learning_materials/71_duckiebot_modeling/30_odometry_calibration.md
  69. +27 −0 book/learning_materials/71_duckiebot_modeling/basic_dynamics_kinematics.bib
  70. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-dyn-dc-motor-electrical.png
  71. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-dyn.png
  72. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-fbd.png
  73. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-fig1-fixed.png
  74. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-kin-icc.png
  75. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-kin.png
  76. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-pure-rolling-simpler.png
  77. +3 −0 book/learning_materials/71_duckiebot_modeling/mod-pure-rolling.png
  78. +3 −0 book/learning_materials/72_planning/10_mission_planning.md
  79. +3 −0 book/learning_materials/72_planning/20_discrete_planning.md
  80. +3 −0 book/learning_materials/72_planning/30_motion_planning.md
  81. +3 −0 book/learning_materials/72_planning/40_RRT.md
  82. +3 −0 book/learning_materials/75_control/10_feedback.md
  83. +3 −0 book/learning_materials/75_control/20_PID.md
  84. +3 −0 book/learning_materials/75_control/30_MPC.md
  85. +3 −0 book/learning_materials/79_advanced_perception.md
  86. +3 −0 book/learning_materials/80_advanced_perception/10_object_detection.md
  87. +3 −0 book/learning_materials/80_advanced_perception/20_object_classification.md
  88. +3 −0 book/learning_materials/80_advanced_perception/30_object_tracking.md
  89. +3 −0 book/learning_materials/80_advanced_perception/40_reacting_to_obstacles.md
  90. +3 −0 book/learning_materials/80_advanced_perception/50_semantic_segmentation.md
  91. +3 −0 book/learning_materials/80_advanced_perception/65_text_recognition.md
  92. +3 −0 book/learning_materials/82_SLAM/10_SLAM_formulation.md
  93. +3 −0 book/learning_materials/82_SLAM/20_SLAM_broad_categories.md
  94. +3 −0 book/learning_materials/82_SLAM/30_VINS.md
  95. +3 −0 book/learning_materials/82_SLAM/40_advanced_place_recognition.md
  96. +3 −0 book/learning_materials/83_fleet_management.md
  97. +3 −0 book/learning_materials/84_fleet_level/10_fleet_level1.md
  98. +5 −0 book/learning_materials/84_fleet_level/20_fleet_level2.md
  99. +4 −0 book/learning_materials/85_bibliography.md
  100. +1 −0 book/learning_materials/90_templates/00_part_template.md
  101. +199 −0 book/learning_materials/90_templates/01_theory-chapter-template.md
  102. +149 −0 book/learning_materials/90_templates/10_symbols.md
  103. +17 −0 book/learning_materials/90_templates/theory-bib-template.bib
@@ -0,0 +1,3 @@
# Learning materials {#book:learning-materials status=ready}

Maintainer: Liam Paull
@@ -0,0 +1,3 @@
# Autonomy {#part:autonomy}


@@ -0,0 +1,46 @@
# Autonomous Vehicles {#autonomous-vehicles status=draft}

Maintainer: Andrea Censi

[Slides for this section.](#autonomous-vehicles-slides)

## Autonomous Vehicles in the News {#autonomous-vehicles-news}

These days it is hard to separate fact from fiction when it comes to autonomous vehicles, particularly self-driving cars. Virtually every major car manufacturer has pledged to deploy some form of self-driving technology in the next five years. In addition, many startups and software companies are also known to be developing self-driving car technology.

Here is a non-exhaustive list of companies that are actively developing autonomous cars:

* [Waymo](https://waymo.com/) (Alphabet/Google group)
* [Tesla Autopilot project](https://www.tesla.com/en_CA/autopilot?redirect=no)
* [Uber Advanced Technologies Group](https://www.uber.com/info/atg/)
* Cruise Automation
* [nuTonomy](http://nutonomy.com/)
* [Toyota Research Institute](http://www.tri.global/)
* [Aurora Innovation](https://aurora.tech/)
* [Zoox](http://zoox.com/)
* [Audi](https://techcrunch.com/2017/06/06/audi-is-the-first-to-test-autonomous-vehicles-in-new-york/)
* [Nissan's autonomous car](https://www.nissanusa.com/blog/autonomous-drive-car)
* [Baidu](http://usa.baidu.com/adu/)
* Apple "Project Titan" (no official details released)
* [Drive.ai](https://www.drive.ai/)

## Levels of Autonomy {#autonomy-levels}

Before even discussing any detailed notion of autonomy, we have to specify exactly what we are talking about. In the United States, the governing body is the National Highway Traffic Safety Administration (NHTSA), which recently (Oct 2016) redefined the so-called "levels of autonomy" for self-driving vehicles.

In broad terms, the levels are as follows:


* **Level 0**: the human driver does everything;
* **Level 1**: an automated system on the vehicle can sometimes assist the human driver in conducting some parts of the driving task;
* **Level 2**: an automated system on the vehicle can actually conduct some parts of the driving task, while the human continues to monitor the driving environment and performs the rest of the driving task;
* **Level 3**: an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the automated system requests;
* **Level 4**: an automated system can conduct the driving task and monitor the driving environment, and the human need not take back control, but the automated system can operate only in certain environments and under certain conditions; and
* **Level 5**: the automated system can perform all driving tasks, under all conditions that a human driver could perform them.


## The two paths to autonomy

TODO for Andrea Censi: insert slides of the two paths

@@ -0,0 +1,75 @@
# Autonomous Vehicles {#autonomous-vehicles-slides status=draft type=slides nonumber=1 label-name="🎦 Autonomous Vehicles"}

Maintainer: Andrea Censi

## Information

Provides:

* The state-of-the-art of AVs
* Levels of autonomy

Requires:

* None

Credits:

- Liam Paull (Université de Montréal) - Sept 6, 2017

## A booming industry

- [Waymo](https://waymo.com/) (Alphabet/Google group)
- [Tesla Autopilot project](https://www.tesla.com/en_CA/autopilot?redirect=no)
- [Uber Advanced Technologies Group](https://www.uber.com/info/atg/)
- [Cruise Automation](https://getcruise.com/)
- [nuTonomy](http://nutonomy.com/)
- [Toyota Research Institute](http://www.tri.global/) (Broader than just autonomous cars)
- [Aurora Innovation](https://aurora.tech/)
- [Zoox](http://zoox.com/)
- [Audi](https://techcrunch.com/2017/06/06/audi-is-the-first-to-test-autonomous-vehicles-in-new-york/)
- [Nissan's autonomous car](https://www.nissanusa.com/blog/autonomous-drive-car)
- [Baidu](http://usa.baidu.com/adu/)
- Apple "Project Titan" (no official details released)

## The levels of autonomy

- **Level 0**: the human driver does everything;
- **Level 1**: an automated system on the vehicle can sometimes assist the human driver in conducting some parts of the driving task;
- **Level 2**: an automated system on the vehicle can actually conduct some parts of the driving task, while the human continues to monitor the driving environment and performs the rest of the driving task;
- **Level 3**: an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when the automated system requests;
- **Level 4**: an automated system can conduct the driving task and monitor the driving environment, and the human need not take back control, but the automated system can operate only in certain environments and under certain conditions
- **Level 5**: the automated system can perform all driving tasks, under all conditions that a human driver could perform them.

## Level 2

TODO: Add video here

## Level 3 - the handover problem

TODO: add video here

## Level 3 - example failure

TODO: add video here

- This accident resulted in a fatality

## The potential - societal impact

- For some less fortunate people, a self-driving car would be more than a “convenience”.

<figure class="stretch">
<img src="assets/blind-person.png"/>
</figure>

## The Potential - Economic

TODO: add video

<figure class="stretch">
<img src="assets/image-20180521163335599.png"/>
</figure>

- Prediction: the trucking industry will look very different in 10 years
- What about all the jobs lost?
@@ -0,0 +1,199 @@
# Autonomy overview {#autonomy-overview status=ready}

Assigned: Liam Paull

This unit introduces some basic concepts ubiquitous in autonomous vehicle navigation.


## Basic Building Blocks of Autonomy {#basic-blocks}

The minimal backbone processing pipeline for autonomous vehicle navigation is shown in [](#fig:autonomy_block_diagram).

<div figure-id="fig:autonomy_block_diagram" figure-caption="The basic building blocks of any autonomous vehicle">
<img src="assets/autonomy_overview_block.jpg" style='width: 30em; height:auto'/>
</div>

For an autonomous vehicle to function, it must achieve some level of performance for all of these components. How well each component must perform depends on the *task* and the overall performance requirements. In the remainder of this section, we will discuss some of the most basic options. In [the next section](#advanced-blocks) we will briefly introduce some of the more advanced options that are used in state-of-the-art autonomous vehicles.

### Sensors {#sensors}


<div figure-id="fig:sensors" figure-class="flow-subfigures" figure-caption="Some common sensors used for autonomous navigation">
<div figure-id="subfig:velo" figure-caption="Velodyne 3D Laser Scanner">
<img src="assets/velo.pdf" style='width: 20ex; height: auto'/>
</div>
<div figure-id="subfig:camera" figure-caption="Camera">
<img src="assets/camera.pdf" style='width: 20ex; height: auto'/>
</div>
<div figure-id="subfig:radar" figure-caption="Automotive Radar">
<img src="assets/radar.pdf" style='width: 20ex; height: auto'/>
</div>
<div figure-id="subfig:gps" figure-caption="GPS Receiver">
<img src="assets/gps.pdf" style='width: 20ex; height: auto'/>
</div>
<div figure-id="subfig:imu" figure-caption="Inertial Measurement Unit">
<img src="assets/imu.pdf" style='width: 20ex; height: auto'/>
</div>
</div>

TODO for Liam Paull: Actually we can directly include the SVG files.

<style>

/*figure.flow-subfigures div.generated-figure-wrap { display: inline-block; }*/

</style>

\begin{definition}[Sensor]\label{def:sensor}
A *sensor* is a device or mechanism capable of generating a measurement of some external physical quantity.
\end{definition}

In general, sensors fall into two major categories. *Passive* sensors generate measurements without affecting the environment that they are measuring. Examples include inertial sensors, odometers, GPS receivers, and cameras. *Active* sensors emit some form of energy into the environment in order to make a measurement. Examples of this type of sensor include Light Detection And Ranging (LiDAR), Radio Detection And Ranging (RaDAR), and Sound Navigation and Ranging (SoNAR). All of these sensors emit energy (from different parts of the spectrum) into the environment and then detect some property of the energy that is reflected from the environment (e.g., the time of flight or the phase shift of the signal).
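
As a toy illustration (not part of the original material), the sketch below models an active range sensor in Python: a pulse is emitted, the round-trip time of flight is measured, and the distance is recovered from the known propagation speed, with additive Gaussian noise standing in for the imperfections of a real device. The constants and noise level are arbitrary assumptions made for the example.

```python
import random

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air at 20 degrees C


def sonar_range(time_of_flight, noise_std=0.01):
    """Convert a round-trip time-of-flight measurement into a noisy range estimate.

    The pulse travels to the obstacle and back, so the one-way distance is
    half of the total distance traveled. Gaussian noise models sensor error.
    """
    distance = SPEED_OF_SOUND * time_of_flight / 2.0
    return distance + random.gauss(0.0, noise_std)


# Example: a pulse returning after about 5.8 ms corresponds to roughly 1 m.
print(sonar_range(0.00583))
```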


### Raw Data Processing {#data-processing}

The raw data that comes from a sensor needs to be processed in order to become useful, and even understandable, to a human.

First, **calibration** is usually required to convert units, for example from a voltage to a physical quantity. As a simple example, consider a thermometer, which measures temperature via an expanding liquid (usually mercury). The calibration is the known mapping from the amount of expansion of the liquid to temperature. In this case it is a linear mapping, and it is used to place the markings on the thermometer that make it useful as a sensor.
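
To make the idea concrete, here is a minimal sketch of applying a linear intrinsic calibration in Python; the sensor, its gain, and its offset are hypothetical, and in practice these parameters would be identified by comparing raw readings against a trusted reference.

```python
def calibrate_linear(raw_voltage, gain, offset):
    """Map a raw reading (e.g. volts) to a physical quantity using
    the linear calibration: value = gain * raw + offset."""
    return gain * raw_voltage + offset


# Hypothetical calibration: 0.5 V corresponds to 0 degrees C,
# and every additional volt adds 100 degrees C.
temperature = calibrate_linear(raw_voltage=0.73, gain=100.0, offset=-50.0)
print(temperature)  # 23.0 degrees C
```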

We will distinguish between two fundamentally different types of calibration.

\begin{definition}[Intrinsic Calibration]\label{def:intrinsic-calibration}
An *intrinsic calibration* is required to determine the parameters that are internal to a specific sensor.
\end{definition}

\begin{definition}[Extrinsic Calibration]\label{def:extrinsic-calibration}
An *extrinsic calibration* is required to determine the external configuration of the sensor with respect to some reference frame.
\end{definition}

<div class="check" markdown="1">
For more information about reference frames, check out [](#reference_frames).
</div>

Calibration is a very important consideration in robotics. In the field, even the most advanced algorithms will fail if the sensors are not properly calibrated.


Once we have properly calibrated data in some meaningful units, we often do some preprocessing to reduce the overall size of the data. This is particularly true for sensors that generate a lot of data, such as cameras. Rather than deal with every pixel value generated by the camera, we process an image to extract feature points of interest. In "classical" computer vision, many different feature descriptors have been proposed (Harris, BRIEF, BRISK, SURF, SIFT, etc.), and more recently Convolutional Neural Networks (CNNs) have been used to learn these features.

<!-- dimensionality reduction -->

The important property of these features is that they should be as easy to associate as possible across frames. In order to achieve this, the feature descriptors should be invariant to nuisance parameters.
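
As a concrete (illustrative) example, the snippet below uses OpenCV's ORB detector to extract feature points and binary descriptors from an image; any of the descriptors mentioned above could be substituted, and the number of features is an arbitrary choice.

```python
import cv2  # OpenCV


def extract_features(image_path, n_features=500):
    """Detect ORB keypoints and compute their binary descriptors.

    ORB descriptors are designed to be robust to rotation and to changes
    in scale and illumination, which makes the same physical point easier
    to associate across frames.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return keypoints, descriptors


# Descriptors from two frames can then be associated with a brute-force
# Hamming-distance matcher:
#   matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
#   matches = matcher.match(descriptors_1, descriptors_2)
```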

<div figure-id="fig:line_detections" figure-caption="Top: A raw image with feature points indicated. Bottom: Lines projected onto ground plane using extrinsic calibration and ground projections">
<img src="line_detections.pdf" style='width: 20em; height: auto'/>
</div>
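
Such a ground projection can be implemented, for instance, as a planar homography obtained from the extrinsic (and intrinsic) calibration. The sketch below assumes a 3×3 image-to-ground homography `H` is already available; it is an illustration, not the exact routine used in the Duckietown pipeline.

```python
import numpy as np
import cv2


def project_to_ground(pixels, H):
    """Project (u, v) pixel coordinates onto the ground plane.

    `pixels` is an (N, 2) array of image points and `H` is the 3x3
    homography from image coordinates to ground-plane coordinates.
    Returns an (N, 2) array of (x, y) points on the ground plane.
    """
    pts = np.asarray(pixels, dtype=np.float32).reshape(-1, 1, 2)
    ground = cv2.perspectiveTransform(pts, H.astype(np.float32))
    return ground.reshape(-1, 2)
```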


### State Estimation {#state-estimation}

Now that we have used our sensors to generate a set of meaningful measurements, we need to combine these measurements to produce an estimate of the underlying hidden *state* of the robot and possibly of the environment.

\begin{definition}[State]
The state $\state_t \in \statesp$ is a *sufficient statistic* of the environment, i.e., it contains all the information required for the robot to carry out its task in that environment. This can (and usually does) include the *configuration* of the robot itself.
\end{definition}

Which variables are maintained in the state space $\statesp$ depends on the problem at hand. For example, we may only be interested in a single robot's configuration in the plane, in which case $\state_t \equiv \pose_t$. However, in other cases, such as simultaneous localization and mapping, we may also track the map in the state space.

According to Bayesian principles, any system parameters that are not fully known and deterministic should be maintained in the state space.

In general, we do not have direct access to the values in $\state$; instead, we rely on our (noisy) sensor measurements to tell us something about them, and then we *infer* the values.
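
As a minimal illustration of such inference, the sketch below fuses noisy measurements of a single scalar state, for instance the lateral offset of the robot in the lane, using the 1-D Kalman filter equations. The numbers are arbitrary, and the estimators used later in the book are more sophisticated.

```python
def kalman_predict_1d(mean, var, motion, motion_var):
    """Propagate a 1-D Gaussian belief through a noisy motion of known magnitude."""
    return mean + motion, var + motion_var


def kalman_update_1d(mean, var, measurement, meas_var):
    """Fuse a noisy measurement into a 1-D Gaussian belief."""
    gain = var / (var + meas_var)
    new_mean = mean + gain * (measurement - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var


# Hypothetical example: estimating the lateral offset d (in meters) in the lane.
d_mean, d_var = 0.0, 1.0  # vague initial belief
d_mean, d_var = kalman_predict_1d(d_mean, d_var, motion=0.02, motion_var=0.01)
d_mean, d_var = kalman_update_1d(d_mean, d_var, measurement=0.05, meas_var=0.04)
print(d_mean, d_var)
```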

<div figure-id="fig:lane_following">
<figcaption>Lane Following in Duckietown. *Top Right*: Raw image; *Bottom Right*: Line detections; *Top Left*: Line projections and estimate of robot position within the lane (green arrow); *Bottom Left*: Control signals sent to wheels.</figcaption>
<dtvideo src="vimeo:232324847"/>
</div>

The animation in [](#fig:lane_following) shows the lane-following procedure. The output of the state estimator is the **green arrow** in the top-left pane.



### Navigation and Planning {#navigation-planning}

<div figure-id="fig:nested_control" figure-caption="An example of nested control loops">
<img src="assets/keynote_figs.001.jpg" style='width: 30em; height: auto'/>
</div>

In general we decompose the task of controlling an autonomous vehicle into a series of **nested control loops**.



The loops are called nested since the output of the outer loop is used as the reference input to the inner loop. An example is shown in [](#fig:nested_control).

Recommended: If [](#fig:nested_control) is **VERY** mysterious to you, then you may want to have a quick look at a basic feedback control textbook, for example [](#bib:principles_robot_motion) or [](#bib:planning_algorithms).

In this case we show three loops. In the outer loop, some goal state is provided, and the actual state of the robot is used as the feedback. The controller is the block labeled `Navigation and Motion Planning`. The job of this block is to generate a **feasible path** from the current state to the goal state. This is done in **configuration space** rather than the state space (although these two spaces may happen to coincide, they are conceptually different).
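
The sketch below shows the structure of the three nested loops in Python. The function names and the trivial proportional laws are hypothetical placeholders; the point is only that the output of each loop becomes the reference input of the next (inner) one, and that the inner loops run at higher rates.

```python
def navigation_planner(goal_state, current_state):
    # Outer loop (slowest): produce a reference for the vehicle controller.
    # A real planner would return a feasible path; here we simply return the goal.
    return goal_state


def vehicle_controller(reference, configuration, k=0.5):
    # Middle loop: track the reference configuration with a proportional law.
    return k * (reference - configuration)


def motor_controller(commanded_speed, measured_speed, k=2.0):
    # Inner loop (fastest): track the commanded wheel speed with a proportional law.
    return k * (commanded_speed - measured_speed)


# One pass through the nested loops, for a 1-D toy problem:
reference = navigation_planner(goal_state=1.0, current_state=0.0)
speed_cmd = vehicle_controller(reference, configuration=0.0)
voltage = motor_controller(speed_cmd, measured_speed=0.0)
```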

<div figure-id="fig:navigation">
<figcaption>Navigation in Duckietown</figcaption>
<dtvideo src="vimeo:232333925"/>
</div>



### Control

The next inner loop of the nested controller in [](#fig:nested_control) is the `Vehicle Controller`, which takes as input the reference trajectory generated by the `Navigation and Motion Planning` block and the current configuration of the robot, and uses the error between the two to generate a control signal.

The most basic feedback control law (see [](#feedback-control)) is PID (proportional-integral-derivative) control, which will be discussed in [](#PID-control). For an excellent introduction to this control policy, see [](#fig:control).

<div figure-id="fig:control">
<figcaption>Controlling Self Driving Cars</figcaption>
<iframe style='width: 20em; height:auto' src="https://www.youtube.com/embed/4Y7zG48uHRo" frameborder="0" allowfullscreen="true"></iframe>
</div>
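
A minimal PID controller looks roughly like the sketch below; the gains and the lane-offset example are arbitrary, and [](#PID-control) discusses how to choose them.

```python
class PID:
    """A minimal PID (proportional-integral-derivative) controller."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """Return the control output for the current error and time step."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical example: command an angular velocity to reduce the lateral offset.
controller = PID(kp=2.0, ki=0.1, kd=0.2)
omega = controller.update(error=-0.05, dt=0.05)
```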

We will also investigate some more advanced non-linear control policies such as [Model Predictive Control](#MPC-control), which is an optimization based technique.

### Actuation {#actuation}

The very innermost control loop deals with actually producing the correct voltage to be sent to the motors. This is generally executed as close to the hardware level as possible. For example, we use a `Stepper Motor HAT` (see [the parts list](#acquiring-parts-c0)).
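
To illustrate what this lowest level does, the sketch below converts a desired average motor voltage into a PWM duty cycle; the supply voltage, the 8-bit resolution, and the driver call mentioned in the comment are all hypothetical assumptions.

```python
def voltage_to_duty_cycle(voltage, supply_voltage=5.0, max_duty=255):
    """Convert a desired average motor voltage into a signed PWM duty cycle.

    The motor driver switches the supply voltage on and off rapidly; the duty
    cycle sets the fraction of time it is on, so the average applied voltage
    is (duty / max_duty) * supply_voltage. The sign encodes the direction.
    """
    duty = int(round(voltage / supply_voltage * max_duty))
    return max(-max_duty, min(max_duty, duty))


# e.g. passing these to a hypothetical driver call such as set_motor_pwm(left, right)
left_duty = voltage_to_duty_cycle(2.5)    # about half speed forward
right_duty = voltage_to_duty_cycle(-1.0)  # slow reverse
```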



### Infrastructure and Prior Information {#overview-infrastructure}

In general, we can make the autonomous navigation problem simpler by exploiting existing structure, infrastructure, and contextual prior knowledge.

Infrastructure example: Maps or GPS satellites

Structure example: Known color and location of lane markings

Contextual prior knowledge example: Cars tend to follow the *Rules of the Road*


## Advanced Building Blocks of Autonomy {#advanced-blocks}

The basic building blocks enable static navigation in Duckietown. However, many other components are necessary for more realistic scenarios.

### Object Detection {#object-detection}

<div figure-id="fig:object_detection" figure-caption="Advanced Autonomy: Object Detection">
<img src="stop_sign_detection.pdf" style='width: 20em; height:auto'/>
</div>

One key requirement is the ability to detect objects in the world, such as signs, other robots, and people.


### SLAM {#slam}

The simultaneous localization and mapping (SLAM) problem involves estimating not only the robot state but also the **map** at the same time, and it is a fundamental capability for mobile robotics. In autonomous driving, the most common application of SLAM is actually the map-building task: once a map is built, it can be pre-loaded and used for pure localization. A demonstration of this in Duckietown is shown in [](#fig:localization).

<div figure-id="fig:localization">
<figcaption>Localization in Duckietown</figcaption>
<dtvideo src="vimeo:232333888"/>
</div>


### Other Advanced Topics

Other topics that will be covered include:

* Visual-inertial navigation (VINS)

* Fleet management and coordination

* Scene segmentation

* Deep perception

* Text recognition
@@ -0,0 +1,8 @@
# Autonomous Mobility on Demand: Market, Technology, and Society {#amodintro status=draft}

Assigned: Andrea Censi

[Slides for this section](#amodintro-slides)


