
New build system (#439)

New build system.

Published at http://docs.duckietown.org/
AndreaCensi committed Apr 29, 2018
1 parent 4f1e18d commit d220acbb1141a7aad9139f9b352c16f1c75e0fbd
Showing with 140 additions and 101 deletions.
  1. +1 −35 00-part-fall2017projects.md
  2. +35 −0 10_templates/00-instructions.md
  3. +2 −0 11_heroes_system_architecture/10-preliminary-design-document-heroes-system-architect.md
  4. +3 −1 12_smart_city/10-preliminary-design-document-smart-city.md
  5. +3 −0 12_smart_city/plug1.jpg
  6. +3 −1 13_sysid/10-preliminary-design-sysid.md
  7. +7 −8 13_sysid/12-final-project-report-sysid.md
  8. +2 −0 14_controllers/10-preliminary-design-document-controllers.md
  9. +1 −1 14_controllers/20-intermediate-design-document-controllers.md
  10. +3 −7 14_controllers/30-final-project-report-controllers.md
  11. +2 −0 15_saviors/10-preliminary-design-document-saviors.md
  12. +2 −0 16_navigators/10-preliminary-design-document-navigators.md
  13. +5 −5 16_navigators/20-intermediate-design-document-navigators.md
  14. +5 −5 16_navigators/20-navigators-final-report.md
  15. +2 −0 17_parking/10-preliminary-design-document-parking.md
  16. +2 −0 18_explicit-coord/10-preliminary-design-document-explicit-coord.md
  17. +3 −3 18_explicit-coord/20-intermediate-design-document-explicit-coord.md
  18. +1 −1 18_explicit-coord/30-explicit-coord-final-report.md
  19. +1 −1 19_impicit-coord/30-final-project-report-implicit-coord.md
  20. +2 −0 20_single_slam/20_preliminary_design_single_slam.md
  21. +2 −0 21_fleet_planning/10-fleet-level-planning-preliminary-design-doc.md
  22. +9 −10 21_fleet_planning/30-fleet-planning-final-report.md
  23. +1 −0 ...fer/75-preliminary-design-document-transfer-learning.md → 22_transfer_learning/00_transfer_pdd.md
  24. +12 −8 22_transfer_learning/transfer-learning-operation_manual.md
  25. +3 −3 22_transfer_learning/transfer_learning.md
  26. +2 −0 23_super_learning/10-preliminary-design-document-super-learning.md
  27. +2 −0 24_neural_slam/10_preliminary_design_document_neural_slam.md
  28. +2 −0 25_visual_odometry/10-preliminary-design-document-visual-odometry.md
  29. +1 −1 25_visual_odometry/20-writeup-visual-odometry.md
  30. +2 −0 27_anti_instagram/10-preliminary-design-document-anti-instagram.md
  31. +8 −8 27_anti_instagram/30-final-project-report-anti-instagram.md
  32. +3 −0 27_anti_instagram/98-images-anti-instagram/comparison_colorspace/original.jpg
  33. +3 −0 27_anti_instagram/98-images-anti-instagram/comparison_colorspace/original2.jpg
  34. +1 −1 27_anti_instagram/anti-instagram-readme-out-of-place.md
  35. +2 −0 29_distrubuted_est/10-preliminary-design-document-distributed-est-fleet-wireless-communication.md
  36. +2 −2 29_distrubuted_est/demoInstructions_fleetcommunication.md
@@ -1,37 +1,3 @@
# Fall 2017 projects {#part:fall-2017-projects status=beta}
# Fall 2017 projects {#book:fall-2017-projects status=ready}

Welcome to the Fall 2017 projects.

## Instructions for using the template {#fall-2017-projects-instructions}

1. Make a copy of the template `10_templates` folder and paste it inside `/atoms_85_fall2017_projects`.

2. Rename the folder to the next available integer followed by the short group name. E.g.: `10_templates` becomes `11_first_group_name` for the first group, then `12_second_group_name` for the second, and so forth.

3. Rename the `10-preliminary-design-document-template` file by substituting `template` with `group-name`.

4. Open the preliminary design document and personalize the template to your group.

Note: Each group has been assigned a unique ID number, and folders are renamed according to the following table. You are allowed and encouraged to use short names. Please merge from master. New pull requests that conflict with this table will be rejected.

## Group names and ID numbers

| ID | Group name | Short name |
|----|--------------------------------------|--------------------------|
| 11 | The Heroes | heroes |
| 12 | The Architects | smart-city |
| 13 | The Identifiers | sysid |
| 14 | The Controllers | controllers |
| 15 | The Saviors | saviors |
| 16 | The Navigators | navigators |
| 17 | Parking | parking |
| 18 | The Coordinators | explicit-coord |
| 19 | Formations and implicit coordination | implicit-coord |
| 20 | Distributed estimation | distributed-est |
| 21 | Fleet-level planning | fleet-planning |
| 22 | Transfer-learning | transfer-learning |
| 23 | Supervised learning | super-learning |
| 24 | Neural-slam | neural-slam |
| 25 | Visual-odometry | visual-odometry |
| 26 | Single-slam | single-slam |
| 27 | Anti-instagram | anti-instagram |
@@ -0,0 +1,35 @@
# Instructions {#part:fall-2017-projects-instructions status=ready}

# Instructions for using the template {status=ready}

1. Make a copy of the template `10_templates` folder and paste it inside `/atoms_85_fall2017_projects`.

2. Rename the folder to the next available integer followed by the short group name. E.g.: `10_templates` becomes `11_first_group_name` for the first group, then `12_second_group_name` for the second, and so forth.

3. Rename the `10-preliminary-design-document-template` file by substituting `template` with `group-name`.

4. Open the preliminary design document and personalize the template to your group.

Note: Each group has been assigned a unique ID number, and folders are renamed according to the following table. You are allowed and encouraged to use short names. Please merge from master. New pull requests that conflict with this table will be rejected.

## Group names and ID numbers

| ID | Group name | Short name |
|----|--------------------------------------|--------------------------|
| 11 | The Heroes | heroes |
| 12 | The Architects | smart-city |
| 13 | The Identifiers | sysid |
| 14 | The Controllers | controllers |
| 15 | The Saviors | saviors |
| 16 | The Navigators | navigators |
| 17 | Parking | parking |
| 18 | The Coordinators | explicit-coord |
| 19 | Formations and implicit coordination | implicit-coord |
| 20 | Distributed estimation | distributed-est |
| 21 | Fleet-level planning | fleet-planning |
| 22 | Transfer-learning | transfer-learning |
| 23 | Supervised learning | super-learning |
| 24 | Neural-slam | neural-slam |
| 25 | Visual-odometry | visual-odometry |
| 26 | Single-slam | single-slam |
| 27 | Anti-instagram | anti-instagram |
@@ -1,3 +1,5 @@
# The Heroes {#part:heroes}

# The Heroes quests: preliminary report {#heroes-pdd status=ready}

The “Heroes” team is a special task force responsible for making sure that “everything works” and for creating a smooth experience for the rest of the teams, in terms of developing their own projects, integrating with other teams, and documentation. Apart from that, each of the heroes will also have their own individual quest...
@@ -1,3 +1,5 @@
# Smart city {#part:smart-city}

# PDD - Smart City {#smartcity-pdd status=beta}


@@ -86,7 +88,7 @@ Since the most common PD in Duckietown is a Raspberry Pi, we can design a USB pl
Problem: The primary problem with this approach is its difficulty, especially given the project’s short timeframe. However, we could focus on designing and building a working prototype that could then be mass-produced and implemented for the whole Duckietown sometime in the future.


![Plug 1](plug1.png)
![Plug 1](plug1.jpg)
![Plug 2](plug2.png)
![Plug 3](plug3.png)

Git LFS file not shown
@@ -1,3 +1,5 @@
# System Identification {#part:fall2017-sysid-city status=ready}

# System Identification: preliminary report {#sysid-pdd status=beta}


@@ -101,7 +103,7 @@ The same procedure can be done to get $c_l$.

Using the assumption that we can measure $\dot \theta$, we will then get the semiaxis length $L$.

<div figure-id="fig:mod-kin" figure-caption="Relevant notations for modeling a differential drive robot">
<div figure-id="fig:mod-kin-modeling" figure-caption="Relevant notations for modeling a differential drive robot">
<img src="mod-kin.png" style='width: 30em; height:auto'/>
</div>
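
For reference, a minimal sketch of the relations this refers to, assuming the standard differential-drive kinematic model with wheel gains $c_r$, $c_l$, wheel commands $u_r$, $u_l$, and semiaxis length $L$ (the report's exact parametrization may differ):

$$
v = \frac{c_r u_r + c_l u_l}{2},
\qquad
\dot\theta = \frac{c_r u_r - c_l u_l}{2L}
\quad\Longrightarrow\quad
L = \frac{c_r u_r - c_l u_l}{2\dot\theta}.
$$

Once $c_r$ and $c_l$ are identified and $\dot\theta$ is measured, the last expression yields $L$ directly.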

@@ -1,11 +1,10 @@
# System identification: final report {#sysid-final-report status=beta}


TODO: JT: switch intermediate and first videos

## The final result {#sysid-final-result}

<div figure-id="fig:demo_succeeded-sysid">
<div figure-id="fig:demo_succeeded-sysid-2">
<figcaption>Demo of the calibration procedure
</figcaption>
<dtvideo src='vimeo:251027149'/>
@@ -87,18 +86,18 @@ Hereby, the Duckiebot is placed on a line (e.g. tape). Afterwards the joystick d
Now the human operator commands the Duckiebot to go straight for around 2m.

Observe the Duckiebot from the point where it started moving and annotate on which side of the tape
the Duckiebot drifted ([](#fig:wheel_calibration_lr_drift)).
the Duckiebot drifted ([](#fig:wheel_calibration_lr_drift-2)).


<div figure-id="fig:wheel_calibration_lr_drift" figure-caption="Left/Right drift">
<div figure-id="fig:wheel_calibration_lr_drift-2" figure-caption="Left/Right drift">
<img src="wheel_calibration_lr_drift.jpg" style='width: 30em'/>
</div>

If the Duckiebot drifted to the left side of the tape, decrease the value of $t$, for example:

duckiebot: $ rosservice call /${VEHICLE_NAME}/inverse_kinematics_node/set_trim -- 0.01

Or Changing the trim in a negative way, e.g. to -0.01:
Or changing the trim in a negative way, e.g. to -0.01:

duckiebot: $ rosservice call /${VEHICLE_NAME}/inverse_kinematics_node/set_trim -- -0.01

@@ -114,6 +113,7 @@ The parameters of the Duckiebot are saved in the file
### Opportunity {#sysid-final-opportunity}

#### Current shortcomings

* Human in the loop
* The car is not able to calibrate itself without human input
* The procedure is laborious and can be long
@@ -162,7 +162,7 @@ Hence, we first construct a theoretical model and then we try to fit the model t

The Duckiebot was modeled as a symmetric rigid body, according to the following figure.

<div figure-id="fig:mod-kin" figure-caption="Schematics of differential drive robot [](#bib:Modeling)">
<div figure-id="fig:mod-kin-another" figure-caption="Schematics of differential drive robot [](#bib:Modeling)">
<img src="mod-kin.png" style='width: 30em; height:auto'/>
</div>

@@ -304,15 +304,14 @@ To reproduce the results see the [operation manual](#demo-sysid) which includes

For recording the Rosbag, the Duckiebot has to be placed at a distance of slightly more than 1 meter in front of the chessboard (~2 duckie tiles), as shown in the image. The heading has to be set iteratively to maximize the time the Duckiebot sees the chessboard.

<div figure-id="fig:calibration_setup" figure-caption="The calibration setup">
<div figure-id="fig:calibration_setup-2" figure-caption="The calibration setup">
<img src="calibration_setup.jpg" style='width: 30em'/>
</div>

You then have to run the calibration procedure

duckiebot $ roslaunch calibration commands.launch veh:=robot_name


The program will publish the following commands at a frequency of 30 Hz on the topic `robot_name/wheels_driver_node/wheels_cmd`:

- A ramp (the same increasing voltage command to the right and left wheels), of the form
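
Illustratively, a minimal sketch of such a command publisher could look as follows, assuming `rospy`, the `WheelsCmdStamped` message from `duckietown_msgs`, and a simple linear ramp; the actual ramp shape and limits used by `commands.launch` are not specified here and may differ.

```python
#!/usr/bin/env python
# Illustrative sketch only: publish a linear ramp of identical commands to
# both wheels at 30 Hz. The actual ramp shape, step size and limits used by
# commands.launch are not specified here and may differ.
import rospy
from duckietown_msgs.msg import WheelsCmdStamped

def main():
    rospy.init_node("ramp_command_publisher")          # hypothetical node name
    veh = rospy.get_param("~veh", "robot_name")        # hypothetical parameter
    pub = rospy.Publisher("/%s/wheels_driver_node/wheels_cmd" % veh,
                          WheelsCmdStamped, queue_size=1)
    rate = rospy.Rate(30)                               # 30 Hz, as in the report
    duty, step, max_duty = 0.0, 0.01, 0.6               # assumed ramp parameters
    while not rospy.is_shutdown() and duty <= max_duty:
        msg = WheelsCmdStamped()
        msg.header.stamp = rospy.Time.now()
        msg.vel_left = duty                             # same command to both wheels
        msg.vel_right = duty
        pub.publish(msg)
        duty += step
        rate.sleep()

if __name__ == "__main__":
    main()
```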
@@ -1,3 +1,5 @@
# The Controllers {#part:controllers status=ready}

# The Controllers: preliminary report {#controllers-pdd status=ready}


@@ -15,7 +15,7 @@ This consists of 3 parts:



<col2 align='center' style="text-align:left" id='checkoff-people-intermediate-report' figure-id="tab:checkoff-people-intermediate-report" figure-caption="Intermediate Report Supervisors">
<col2 align='center' style="text-align:left" figure-id="tab:checkoff-people-intermediate-report-controllers" figure-caption="Intermediate Report Supervisors">
<span>System Architects</span> <span>Sonja Brits, Andrea Censi</span>
<span>Software Architects</span> <span>Breandan Considine, Liam Paull</span>
<span>Vice President of Safety</span> <span>Miguel de la Iglesia, Jacopo Tani</span>
@@ -10,13 +10,9 @@ _The objective of this report is to bring justice to your hard work during the s
-->

## The final result {#controllers-final-result}


<!--
_Let's start from a teaser._
* Post a video of your best results (e.g., your demo video)
-->

<div figure-id="fig:demo_video">
<div figure-id="fig:demo_video_controllers">
<figcaption>The Controllers Demo Video</figcaption>
<dtvideo src="vimeo:250511342"/>
</div>
@@ -47,7 +43,7 @@ The overall goal of our project is to stay in the lane while driving and stoppin
<!--
- Was there a baseline implementation in Duckietown which you improved upon, or did you implemented from scratch? Describe the "prior work"
-->
<center><img figure-id="fig:curve_plot" figure-caption="Pose of Duckiebot in a curve element." src="curve_plot.png" alt="Curve plot" style="width: 200px;"/></center>
<center><img figure-id="fig:curve_plot2" figure-caption="Pose of Duckiebot in a curve element." src="curve_plot.png" alt="Curve plot" style="width: 200px;"/></center>

From last year’s project, the baseline implementations of a pose estimator and a controller were provided to us for further improvement. The prior pose estimator was designed to deliver the pose for a Duckiebot on straight lanes only. If the Duckiebot was in or before a curve and in the middle of the lane, the estimated pose showed an offset **d** (see the definition of **d** in the figure below). The existing controller worked reasonably well on straight lines. However, due to the inputs from the pose estimator to the controller, the Duckiebot overshot in curves and crossed the left/right line during or after the curve.

@@ -1,3 +1,5 @@
# The Saviors {#part:saviors status=ready}

# The Saviors: preliminary report {#saviors-pdd status=beta}


@@ -1,3 +1,5 @@
# The Navigators {#part:navigators status=ready}

# Navigators: preliminary report {#navigators-pdd status=beta}


@@ -8,7 +8,7 @@ TODO: JT: fix intra-duckiebook links
### Logical architecture


<div figure-id="fig:1" figure-caption="Simple step diagram">
<div figure-id="fig:simple-step-diagram" figure-caption="Simple step diagram">
<img src="logicaldiagram2.png" style="width: 60%"/>
</div>

@@ -29,16 +29,16 @@ It is assumed that

* the Duckiebot stops between 0.10m and 0.16m in front of the center of the red stop line, i.e. $d_x \in \lbrack 0.1m,0.16m\rbrack$, has an error of no more than 0.03m with respect to the center of its lane, i.e. $d_y \in \lbrack-0.03m,0.03m\rbrack$, and that the orientation error is smaller than 0.17rad, i.e. $\theta\in\lbrack-0.17rad,0.17rad\rbrack$ (see Fig. 1.2 for details, all values are with respect to the origin of the Duckiebot’s axle-fixed coordinate frame).

* a lane following controller exists that takes as inputs the distance from the desired path $d$ and the orientation error with respect to the path tangent $\theta$ (see Fig. 1.3 for details, and the minimal controller sketch after the figures below).
* a lane following controller exists that takes as inputs the distance from the desired path $d$ and the orientation error with respect to the path tangent $\theta$ (see Fig. 1.3 for details, and the minimal controller sketch after the figures below). XXX


<div figure-id="fig:2" figure-caption="Pose of the duckiebot in front of an intersection">
<div figure-id="fig:duckiebot_red_line1" figure-caption="Pose of the duckiebot in front of an intersection">
<img src="duckiebot_red_line.png" style="width: 100%"/>
</div>



<div figure-id="fig:3" figure-caption="Pose of the duckiebot relative to the desired path">
<div figure-id="fig:duckiebot_path1" figure-caption="Pose of the duckiebot relative to the desired path">
<img src="duckiebot_path.png" style="width: 100%"/>
</div>
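
As a point of reference for the lane-following controller assumed above (inputs $d$ and $\theta$), here is a minimal sketch of such a controller; the gains and sign conventions are placeholders and not those of the actual Duckietown implementation.

```python
# Illustrative sketch of a lane-following controller of the kind assumed above:
# it maps the lateral offset d [m] and the heading error theta [rad] to an
# angular-velocity command. Gains and sign conventions are placeholders and do
# not correspond to the actual Duckietown controller.
def lane_following_control(d, theta, k_d=3.0, k_theta=1.0, omega_max=2.0):
    """Return a saturated angular velocity command [rad/s]."""
    omega = -k_d * d - k_theta * theta                  # drive both errors to zero
    return max(-omega_max, min(omega_max, omega))       # respect actuator limits

# Example: 2 cm off the lane center with a 0.1 rad heading error.
# omega = lane_following_control(0.02, 0.1)
```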

@@ -96,7 +96,7 @@ The *"intersection_localization"*-node publishes the following topic:
It is estimated that it will take approximately 15ms to estimate the Duckiebot's pose once the camera image is received, hence about 15ms of delay can be expected on the published topics. However, the delay will be compensated for by the *"intersection_navigation"*-node.


<div figure-id="fig:4" figure-caption="Pose of the duckiebot inside a four way intersection">
<div figure-id="fig:bot_in_intersection" figure-caption="Pose of the duckiebot inside a four way intersection">
<img src="bot_in_intersection.png" style="width: 100%"/>
</div>
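
One common way to compensate a fixed measurement delay of this kind is to propagate the last pose estimate forward by the delay using the commanded velocities; the following is only a sketch, and whether the *"intersection_navigation"*-node compensates the delay exactly this way is an assumption.

```python
import math

# Illustrative sketch: propagate a delayed pose estimate (x, y, theta) forward
# by dt seconds using unicycle kinematics with the commanded linear velocity v
# [m/s] and angular velocity omega [rad/s]. Whether intersection_navigation
# compensates the delay exactly this way is an assumption.
def compensate_delay(x, y, theta, v, omega, dt=0.015):
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```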

@@ -3,7 +3,7 @@

TODO: JT: add operation manual, fix bibliographic references, math formatting, various typos

## The final result {sec:navigators-final-result}
## The final result {#navigators-final-result}

Video of the final result:

@@ -74,7 +74,7 @@ We seek to find a method that allows a Duckiebot to safely navigate an intersect
* Fleet planning
* Coordinators

<div figure-id="fig:1" figure-caption="Stakeholders Diagramm">
<div figure-id="fig:stakeholders" figure-caption="Stakeholders Diagramm">
<img src="stakeholders_diagram.png" style="width: 60%"/>
</div>

@@ -86,7 +86,7 @@ Duration, i.e. the average time required for the Duckiebot to cross an intersect

## 4 Contribution / Added functionality {#navigators-final-contribution}

<div figure-id="fig:2" figure-caption="Logical architecture diagramm">
<div figure-id="fig:logical-architecture-diagram" figure-caption="Logical architecture diagramm">
<img src="logical_architecture_diagram.png" style="width: 100%"/>
</div>

@@ -103,11 +103,11 @@ It is assumed that:
This is done by the new lane following controller. However, we needed to slightly modify
the controller to account for the wheels’ speed thresholds.

<div figure-id="fig:3" figure-caption="Duckiebot's position relative to the red line.">
<div figure-id="fig:duckiebot_red_line" figure-caption="Duckiebot's position relative to the red line.">
<img src="duckiebot_red_line.png" style="width: 100%"/>
</div>

<div figure-id="fig:4" figure-caption="Duckiebot's pose relative to the desired path.">
<div figure-id="fig:duckiebot_path" figure-caption="Duckiebot's pose relative to the desired path.">
<img src="duckiebot_path.png" style="width: 100%"/>
</div>

@@ -1,3 +1,5 @@
# Parking {#part:parking status=ready}

# Parking: preliminary report {#parking-pdd status=ready}


@@ -1,3 +1,5 @@
# Explicit Coordination {#part:explicit-coordination status=ready}

# Explicit Coordination: preliminary report {#explicit-coordination-pdd status=beta}


@@ -74,10 +74,10 @@ Nodes:
* string message: LED_detected/ no_LED_detected with position and/or color and/or frequency


A diagram of our nodes is shown below.
A diagram of our nodes is shown in [](#fig:nodes-coord).


<div figure-id="fig:Nodes" figure-caption="Nodes">
<div figure-id="fig:nodes-coord" figure-caption="Nodes">
<img src="nodes.png" style='width: 80ex; height: auto'/>
</div>

@@ -125,7 +125,7 @@ A four way intersection tile (see image below, center), four three-way intersect

Performance will be evaluated with 3 tests:

<col5 figure-id="tab:Performance" figure-caption="Performance Evaluation">
<col5 figure-id="tab:Performance-eval" figure-caption="Performance Evaluation">

<span>What is evaluated </span>
<span>How</span>
@@ -9,7 +9,7 @@ General notes:

Video of the final result:

<div figure-id="fig:example-embed">
<div figure-id="fig:explicit-coord-video">
<figcaption> Explicit coordination's video </figcaption>
<dtvideo src="vimeo:257762830"/>
</div>
@@ -86,7 +86,7 @@ Our implicit coordination algorithm is inspired by the Carrier Sense Multiple A

Additionally, we have implemented a right-priority option in order to speed up traffic at the intersection. With right priority, a Duckiebot is not allowed to drive as long as another Duckiebot is standing to its right at the intersection.

<div figure-id="fig:DemoMap" figure-caption="Process Flow Chart Implicit Coordination">
<div figure-id="fig:FlowChartImplicit" figure-caption="Process Flow Chart Implicit Coordination">
<img src="FlowChartImplicit.png" style='width: 10em'/>
</div>
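
A minimal sketch of the right-priority rule described above is shown next; the function and argument names are hypothetical and do not correspond to the actual node's API.

```python
import random

# Illustrative sketch of the CSMA-inspired rule with right priority; the
# function and argument names are hypothetical and not the actual node's API.
def may_enter_intersection(intersection_busy, duckiebot_on_my_right):
    """Right priority: never enter while another Duckiebot stands to the right."""
    if duckiebot_on_my_right:
        return False          # yield to the Duckiebot on the right
    if intersection_busy:
        return False          # carrier sense: wait while the intersection is in use
    return True

def backoff_seconds(attempt, base=1.0):
    """Random back-off before retrying, in the spirit of CSMA (assumed)."""
    return random.uniform(0.0, base * (2 ** attempt))
```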

@@ -1,3 +1,5 @@
# Single SLAM {#part:single-SLAM}

# Single SLAM Project {status=beta}


@@ -1,3 +1,5 @@
# Fleet planning {#part:fleet-planning}

# Fleet Planning: Preliminary Report {#fleet-level-planning-pdd status=beta}

