
Workshop 10 ‐ Image Geometry


Preparations

  • Pull the latest Docker image or, if you work on your own installation of the software, pull the latest changes from the repo. Then start up the simulation.
  • Fork and clone the course's repository into your ROS2 workspace. The tutorial code is included in the uol_cmp9767_tutorial package. Build the package and source the workspace before you continue. If you struggle with these steps, check out this practical example on how to work with GitHub repositories.

Task 1: Camera-to-image projection

Run the image_projection_1 node from the uol_cmp9767_tutorial package (ros2 run ...). The node projects a 3D point from the (front) camera into image coordinates. Inspect the code and experiment with different values. Which camera coordinates (x,y) correspond to the image borders of the camera sensor, assuming we are operating on the plane 1 m ahead of the camera (z=1.0)?
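
If you want to check your answer by hand, the minimal sketch below reproduces the pinhole projection that the node relies on. The intrinsic values (fx, fy, cx, cy, width, height) are placeholder assumptions; the actual node reads the real ones from the camera's camera_info topic.

```python
# Minimal pinhole-projection sketch (not the tutorial node itself).
# fx, fy, cx, cy, width and height below are placeholder values; the real node
# obtains them from the camera_info topic of the simulated camera.

fx, fy = 530.0, 530.0      # focal lengths in pixels (assumed)
cx, cy = 320.0, 240.0      # principal point in pixels (assumed)
width, height = 640, 480   # image resolution (assumed)

def camera_to_pixel(x, y, z):
    """Project a 3D point in the camera frame (z pointing forward) to pixel coordinates."""
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# A point 1 m ahead of the camera and 0.2 m to the right of the optical axis.
print(camera_to_pixel(0.2, 0.0, 1.0))

# Camera coordinates that land on the image borders on the plane z = 1.0 m:
x_left,  x_right  = (0 - cx) / fx * 1.0, (width  - cx) / fx * 1.0
y_top,   y_bottom = (0 - cy) / fy * 1.0, (height - cy) / fy * 1.0
print(x_left, x_right, y_top, y_bottom)
```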

Task 2: World-to-image projection

Run the image_projection_2 node from the uol_cmp9767_tutorial package. The node visualizes a ground point 5 m in front of the robot, specified in the robot frame (base_link) and transformed into the camera frame (depth_frame). Place the robot outside the enclosure and find the distance in front of the camera that marks exactly the horizon line.
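
As a rough guide to what happens internally, here is a hedged sketch of the two steps the node performs: transform a point given in base_link into the camera frame with tf2, then project it onto the image with the pinhole model. The camera_info topic name and the exact ground-point coordinates are assumptions, so check them against the node's source and `ros2 topic list`.

```python
# Hedged sketch of the world-to-image projection (not the tutorial node itself).
# The camera_info topic name below is an assumption; adjust it to your simulation.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo
from geometry_msgs.msg import PointStamped
from image_geometry import PinholeCameraModel
import tf2_ros
import tf2_geometry_msgs  # noqa: F401 -- registers PointStamped with Buffer.transform()


class GroundPointProjector(Node):
    def __init__(self):
        super().__init__('ground_point_projector')
        self.model = PinholeCameraModel()
        self.tf_buffer = tf2_ros.Buffer()
        self.tf_listener = tf2_ros.TransformListener(self.tf_buffer, self)
        self.create_subscription(CameraInfo, '/camera/camera_info',  # assumed topic
                                 self.info_callback, 10)

    def info_callback(self, msg):
        self.model.fromCameraInfo(msg)
        # A point 5 m ahead of the robot, expressed in the robot frame.
        p = PointStamped()
        p.header.frame_id = 'base_link'
        p.point.x = 5.0  # the tutorial node may use a different ground offset
        try:
            # Transform into the camera frame named in the CameraInfo header
            # (assumed to be the optical frame the pinhole model expects).
            p_cam = self.tf_buffer.transform(p, msg.header.frame_id)
        except tf2_ros.TransformException as e:
            self.get_logger().warn(f'TF not ready yet: {e}')
            return
        u, v = self.model.project3dToPixel(
            (p_cam.point.x, p_cam.point.y, p_cam.point.z))
        self.get_logger().info(f'image coordinates: ({u:.1f}, {v:.1f})')


def main():
    rclpy.init()
    rclpy.spin(GroundPointProjector())


if __name__ == '__main__':
    main()
```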

Task 3: Image-to-world projection

Implement a node that performs simple colour thresholding of the distinctively coloured object and calculates the centre of the segmented object in image coordinates. You can re-use the functionality developed in the previous workshop. At the same time, the node should read the distance value from the depth image at these coordinates. This will require accommodating the different aspect ratios of the colour and depth cameras, as they have different fields of view and image resolutions. Then re-project the object's centre from image to camera coordinates by casting a 3D ray through the pixel and combining it with the distance value to form a 3D point in camera coordinates. Finally, transform the resulting point into the odometry frame and print out its value. You should now have a node that automatically calculates the object's position in world coordinates from the robot's sensors. Check your results against the object's position read from Gazebo. This should come in handy for locating the potholes!
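
The re-projection part of this pipeline could look roughly like the sketch below. It leaves out the colour thresholding and the ROS node scaffolding; the simple resolution-ratio mapping between the colour and depth images, the metre-valued depth image, and the 'odom' frame name are all assumptions to verify against your setup (and against the reference image_projection_3 node).

```python
# Hedged sketch of the image-to-world re-projection only (segmentation and the
# node scaffolding are omitted; see image_projection_3 for the full reference).
import numpy as np
from geometry_msgs.msg import PointStamped
from image_geometry import PinholeCameraModel
import tf2_geometry_msgs  # noqa: F401 -- registers PointStamped with Buffer.transform()


def pixel_to_world(u, v, colour_shape, depth_image, depth_model, tf_buffer,
                   depth_frame, target_frame='odom'):
    """Re-project the object centre (u, v) found in the colour image into world
    coordinates, given the colour image shape (height, width), the depth image
    (as a metre-valued numpy array), the depth camera's PinholeCameraModel and
    a tf2 buffer."""
    # 1. Map the colour-image pixel to depth-image coordinates. A plain resolution
    #    ratio is only an approximation when the fields of view differ; the
    #    reference node derives the exact mapping.
    du = int(u * depth_image.shape[1] / colour_shape[1])
    dv = int(v * depth_image.shape[0] / colour_shape[0])

    # 2. Distance along the optical axis at that pixel (assumed to be in metres).
    depth = float(depth_image[dv, du])

    # 3. Cast a 3D ray through the pixel and scale it by the depth to obtain a
    #    3D point in the depth camera frame.
    ray = np.array(depth_model.projectPixelTo3dRay((du, dv)))
    ray /= ray[2]               # normalise so the z (forward) component is 1
    x, y, z = ray * depth

    # 4. Transform the camera-frame point into the odometry frame with tf2.
    p = PointStamped()
    p.header.frame_id = depth_frame
    p.point.x, p.point.y, p.point.z = float(x), float(y), float(z)
    return tf_buffer.transform(p, target_frame)
```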

There are some technical aspects of this task that are not necessarily trivial. Therefore, if you struggle, have a look at the source file for the image_projection_3 node as a reference.