From c876609a32cb661c342f13cae18be5a6ad9db137 Mon Sep 17 00:00:00 2001 From: Simon Muntwiler Date: Fri, 1 Dec 2017 20:43:24 +0100 Subject: [PATCH] Intermediate design document --- ...ntermediate-design-document-controllers.md | 284 ++++++++++++++++++ 1 file changed, 284 insertions(+) create mode 100644 14_controllers/20-intermediate-design-document-controllers.md diff --git a/14_controllers/20-intermediate-design-document-controllers.md b/14_controllers/20-intermediate-design-document-controllers.md new file mode 100644 index 0000000..a918e5c --- /dev/null +++ b/14_controllers/20-intermediate-design-document-controllers.md @@ -0,0 +1,284 @@ +# The Controllers: Intermediate Report {#template-int-report status=ready} + + + + +
+
+| Role | Members |
+| --- | --- |
+| System Architects | Sonja Brits, Andrea Censi |
+| Software Architects | Breandan Considine, Liam Paull |
+| Vice Presidents of Safety | Miguel de la Iglesia, Jacopo Tani |
+| Data Czars | Manfred Diaz, Jonathan Aresenault |
+
+ +## Part 1: System interfaces + + + +### Logical architecture + + +
+
+| Symbol | Description |
+| --- | --- |
+| $d_{ref}$ | Reference distance from the center of the lane |
+| $d_{act}$ | Actual distance from the center of the lane |
+| $d_{est}$ | Estimated distance from the center of the lane |
+| $\theta_{act}$ | Actual angle between the robot and the center of the lane |
+| $\theta_{est}$ | Estimated angle between the robot and the center of the lane |
+
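As a sketch, the pose quantities defined above could be grouped in a small structure. The names below are illustrative and do not correspond to the actual Duckietown message definitions.

```python
from dataclasses import dataclass

@dataclass
class LanePose:
    d: float      # signed distance from the center of the lane [m]
    theta: float  # angle between the robot heading and the lane direction [rad]

def tracking_error(ref: LanePose, est: LanePose) -> LanePose:
    """Error fed to the lane-controller: reference minus estimate."""
    return LanePose(d=ref.d - est.d, theta=ref.theta - est.theta)
```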
+
+We assume that the Duckiebot is initially placed in the right lane of the road, within a defined boundary around the starting pose. Once the lane-follower is started, the Duckiebot should follow the lane, whether straight or curved, until the lane-following module is stopped. The module consists of two parts, a pose-estimator and a lane-controller:
+
+**pose-estimator**: Estimates the distance $d_{act}$ from the center of the lane and the angle $\theta_{act}$ with respect to the center of the lane, yielding the estimates $d_{est}$ and $\theta_{est}$. In a curve, the angle is taken with respect to the tangent of the curve at the current position. Additionally, the estimator estimates the curvature of the lane.
+
+**lane-controller**: Given the pose and curvature estimates and a reference distance **d**, the lane-controller steers the Duckiebot along this reference.
+
+Special events:
+
+**detection of red line**: After the stop line detection node detects a stop line, the controller gradually reduces the Duckiebot's velocity and stops it between 0 and 6 cm from the line, at an angle within ±10 degrees (requirements given by the Explicit Coordination Team).
+
+**intersection**: The Duckiebot will stop between 0 and 6 cm (given by the Explicit Coordination Team) from the red line in order to enable the navigation through the intersection. The Navigators will give us the tracking deviation **d_err**, the tracking angle **$\theta_{err}$**, the curvature **c**, the reference distance **d** (usually 0), and the velocity **v**. The navigation through the intersection itself is out of scope. The lane-following module will start again after the intersection, triggered by the finite state machine.
+
+**obstacle avoidance**: We receive a d (distance to the middle of the right lane) from the Saviors Team in case there is an obstacle in the lane. The Saviors Team will compensate for the delay and send us the d 30-40 cm ahead of the obstacle. Controlled obstacle avoidance that involves leaving the right lane is out of scope.
This case is signaled by a flag sent from the Saviors module, and the Duckiebot will stop. There is also a stop flag received from the stop_line_filter_node (the at_stop_line flag defined in stop_line_filter_node.py). We will assign priorities to the received flags to decide how the Duckiebot should behave. We also need the velocity from the obstacle avoidance group.
+
+**parking**: At a stop line, if a parking-lot april tag is detected (by the parking team), the parking flag will be activated. As soon as we enter the parking lot, the parking team will take over control. We will get a pose_estimation_msg from the parking group, as well as a velocity, the distance from the middle of the "lane", and a theta. The middle of the lane will be a path calculated by RRT* by the parking group.
The parking group will use our lane controller to make the Duckiebot move and follow their trajectory. When we leave the parking lot, our controller takes over estimation and control again (when the Duckiebot reaches the red stop line of the intersection outside the parking lot). The parking group will give us a pose estimate (whenever it is calculated) and a d. While the parking flag is active, the controller node stops running; it resumes once the flag becomes inactive.
+
+The Duckiebot should run at a maximum velocity of x m/s for optimal controllability. The reason for the limited velocity is the low image update frequency, which limits the rate of our pose estimation updates; a lower velocity therefore improves the performance of the controlled lane following. Since not every Duckiebot has the same gain set, we will pass the desired velocity to the System Identification team. Their module will convert the demanded velocity into the corresponding input voltage for the motors.
+
+Our goal is to keep the deviation d from the middle of the lane smaller than ±2 cm.
+
+Summary of the special-event requirements:
+
+- Red line: whenever we detect a red line, we slow down the Duckiebot.
+- Intersection: stop within 0-6 cm of the red line and within ±10°; we can request that the red line still be visible when we stop.
+- Saviors: we do not leave the lane; if the obstacle flag is active, we receive a d and a v. The Saviors will set the obstacle-detected flag 20-30 cm in front of the obstacle, so that we have enough time to react. If the emergency stop flag is active, we slow down and stop.
+- Anti-Instagram: the image correction is well tuned and the resulting edges are very good. Idea: introduce a region of interest of the image to process, filtering out irrelevant image data to speed up the image processing pipeline. If Anti-Instagram could publish this region of interest, we could use it in the estimation part to remove visual clutter from the line segments.
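The priorities over the received flags, mentioned above, could be sketched as follows. The ordering and the function name are illustrative assumptions, not a decided design; only the flag names come from this document.

```python
# Hypothetical arbitration over the flags received from other teams:
# the highest-priority active flag determines the behavior.
FLAG_PRIORITY = [
    "flag_obstacle_emergency_stop",  # most urgent: stop immediately
    "flag_obstacle_detected",
    "flag_at_stop_line",
    "flag_at_intersection",
    "flag_at_parking_lot",
]

def active_behavior(flags):
    """Return the highest-priority active flag, or 'lane_following' if none is set."""
    for name in FLAG_PRIORITY:
        if flags.get(name, False):
            return name
    return "lane_following"
```

For example, if both the stop-line flag and the emergency-stop flag are set, the emergency stop wins.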
+
+
+
+
+### Software architecture
+
+- _Lane Filter Node_ :
+  - Published topics:
+    - lane_pose
+      - Maximum latency to be introduced: ??
+    - belief_img
+      - Maximum latency to be introduced: ??
+    - entropy
+      - Maximum latency to be introduced: ??
+    - in_lane
+      - Maximum latency to be introduced: ??
+    - switch
+      - Maximum latency to be introduced: ??
+  - Subscribed topics:
+    - segment_list
+      - Latency expected: 143 ms
+    - velocity
+      - Latency expected: 0 ms
+    - car_cmd (not yet)
+      - Latency expected: [runtime of Lane Controller node]
+    - flags_from_other_teams
+
+Total latency from the moment an image is taken, through Anti-Instagram and the rest of the processing pipeline, to our node: 70 ms (source?)
+
+- _Lane Controller Node_ :
+  - Published topic:
+    - car_cmd
+      - Maximum latency to be introduced: ??
+  - Subscribed topic:
+    - lane_pose
+      - Latency expected: [runtime of Lane Filter Node]
+
+
+
+## Part 2: Demo and evaluation plan
+
+_Please note that for this part it is necessary for the VPs for Safety to check off before you submit it. Also note that they are busy people, so it's up to you to coordinate to make sure you get this part right and in time._
+
+### Demo plan
+
+The demo is a short activity used to show the desired functionality, and in particular the difference between how the system worked (or did not work) before and how it works now, after our development.
+
+Setup and running of the demo should take a few minutes at most.
+
+Proposition 1:
+Use two Duckiebots, one running the old lane-following code and one running the new code, to show the performance improvement.
+Use a track with hard-to-follow curvature on which the current implementation struggles to stay in the lane.
+
+Proposition 2:
+Show that the Duckiebot is able to follow the lane even when the lane width changes or the white line is occasionally missing (is our estimator able to recover the correct position?).
+Maybe use special demo tiles?
+
+INPUT: Use rviz on a notebook to show the improved lane estimator, and maybe the outputs from the lane controller.
+Visualize on a notebook the average tracking error of the old and the new estimator and controller.
+Write our own tests.
+
+- How do you envision the demo?
+  - We want to make a demo in which two Duckiebots run simultaneously on a test track (closed-loop track), "chasing each other".
+  - Point out the faster decay of the transient error.
+  - Point out the smaller steady-state error.
+
+- What hardware components do you need?
+  - We need to build a small Duckietown test track: about 15 tiles, DUCKIEtape and the usual Duckietown decoration.
+
+### Plan for formal performance evaluation
+
+- How do you envision the performance evaluation? Is it experiments? Log analysis?
+
+In contrast with the demo, the formal performance evaluation can take more than a few minutes.
+
+Ideally it should be possible to do this without human intervention, or with minimal human intervention, for both running the demo and checking the results.
+
+The performance evaluation will consist of several benchmarking tests, still to be defined. Suggestions:
+
+Performance: average or median deviation from the middle of the lane (or from a given reference d). The accumulated error must be as small as possible → maybe add a graphic to show how we are going to measure it.
+
+Use the same estimator with the old and the new controller to see the performance difference.
+Use the same controller with different estimators to evaluate the performance gain.
+
+Count the number of times the Duckiebot hits the white or yellow lines (ideally 0).
+
+Count the number of times the Duckiebot ends up in the wrong lane.
+
+**Flags received from other teams**
+
+The following flags are received from other modules. While one of these flags is true, the Duckiebot behaves according to the descriptions in the system architecture.
+
+| Flag | Description |
+| --- | --- |
+| flag_at_stop_line | True when a red stop line is detected. |
+| flag_at_intersection | True when at an intersection. This flag is passed to us by the Navigators. |
+| flag_obstacle_detected | True when an obstacle is in the lane. This flag is passed to us by the Saviors. |
+| flag_obstacle_emergency_stop | True when it is not possible to avoid the obstacle and it would be necessary to leave the right lane. |
+| flag_at_parking_lot | True when stopping at an intersection and the april tag of the parking lot is detected. |
+| flag_parking_stop | True by default. If false, the lane-follower will move along the given trajectory in the parking lot. |
+
+## Part 3: Data collection, annotation, and analysis
+
+_Please note that for this part it is necessary for the Data Czars to check off before you submit it. Also note that they are busy people, so it's up to you to coordinate to make sure you get this part right and in time._
+
+### Collection
+
+- How much data do you need?
+For every feature step we need fixed logs and logs from driving. A baseline has been set, and logs have been taken with the current implementation of the code to evaluate the current performance.
+
+- How are the logs to be taken? (Manually, autonomously, etc.)
+Manually:
+Andy and Simon have already recorded some logs.
+Table with log specification: d, theta, curve on line.
+
+Autonomously:
+Logs should also be taken during the lane-following demo to evaluate the estimator and controller performance → see the performance evaluation.
+
+Describe any other special arrangements.
+
+- Do you need extra help in collecting the data from the other teams?
+
+### Annotation
+
+- Do you need to annotate the data?
+No, because we will receive the needed edges from the Anti-Instagram group.
+
+- At this point, you should have tried using [thehive.ai](https://thehive.ai/) to do it. Did you?
+Check out thehive.ai and write down why we don't need to use it.
+
+- Are you sure they can do the annotations that you want?
+Probably they could, but so can we with our estimator. There are no obstacles or duckies to annotate.
+
+### Analysis
+
+We don't need data annotation since we can do all the benchmarking on our own. We are not involved in any obstacle detection, so we do not need any obstacles annotated.
+- Do you need to write some software to analyze the annotations? + +- Are you planning for it? + + +
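If analysis software is needed, a minimal sketch of the deviation metrics described in the performance-evaluation plan could look like this. The function and field names are illustrative; the 2 cm threshold corresponds to the ±2 cm control goal stated in Part 1.

```python
import statistics

def lane_tracking_metrics(d_log, d_ref=0.0):
    """Summarize the deviation from the reference distance over a log of d estimates."""
    errors = [abs(d - d_ref) for d in d_log]
    return {
        "mean_abs_error": statistics.mean(errors),
        "median_abs_error": statistics.median(errors),
        "max_abs_error": max(errors),
        "fraction_within_2cm": sum(e <= 0.02 for e in errors) / len(errors),
    }
```

The same summary could be computed per log and compared between the old and the new estimator/controller combinations.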