Workshop 3 ‐ TFs and Sensors
Riccardo Polvara edited this page Oct 16, 2023
Please refer to Workshop 1 if you don't remember what steps are required for starting your LIMO robot and all its sensors!
- Try the simulation environment (you will know how to start it, right?) or the real robot.
- Find Rviz.
- Explore the different sensors (e.g., camera, LiDAR, IMU) in greater detail. Find out what type of data they publish and on which topics.
- Find a way to visualise the `tf` tree (discuss with your student peers).
- Make the robot move again and watch the output of the different sensors.
- Display the tf tree of the LIMO robot (`ros2 run rqt_tf_tree rqt_tf_tree`) and explain what a frame is (http://wiki.ros.org/tf might help, as might this scientific paper). If `rqt_tf_tree` is not installed, install it with `sudo apt update && sudo apt install ros-humble-rqt-tf-tree`.
- Find a way to display the position of the robot's camera (which frame does it have?) in global (`/odom`) coordinates:
  - you may either implement Python code, following the TransformListener example or the given `tf_listener.py`, or
  - figure out how to use a command-line tool: `ros2 run tf2_ros tf2_echo`
- Based on the given `tf_listener.py` and on last week's `mover.py`, devise your own Python code. Create a publisher that publishes `geometry_msgs/PoseStamped` messages at the position of the closest laser scan reading and displays this pose in Rviz. Some useful pointers:
  - First, answer which frame the laser scans are in!
  - If you wonder how to get from `ranges` to coordinates, the community is never far away: https://answers.ros.org/question/304562/x-y-and-z-coordinates-in-laserscan/
- Basically, follow https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Creating-A-Workspace/Creating-A-Workspace.html
  - e.g. create a directory `cmp9767m_ws` as the root of your workspace

Optional (only if you feel confident enough in ROS, not essential):
- Discuss the dependencies you may need and consider them:
  - Complete the `package.xml` with your own information.
  - Look at useful resources if you don't know git workflows.
- Decide on a name for your repository and create it to keep all your work in it (e.g., `cmp9767_code`) - you may want to follow the official instructions.
- Add your own package(s) to the repository and keep track of all developments there. Only add your implementation to your source code repository (what is under `src/` in your workspace):
  - you may want to include here the `tf_listener.py` or the `mover.py` script, or even the script you wrote as part of Workshop 2.

Please always make sure you keep this implementation safe (i.e. commit it to GitHub).
THIS IS PURELY OPTIONAL. ONLY ENGAGE IF THE ABOVE ALL WORKS. IT'S TO GIVE YOU SOMETHING TO EXPLORE FURTHER.
- Create a ROS 2 package `my_opencv_test`, which should depend on `cv_bridge` and `rclpy` (remember how to do that?).
- Be inspired by the implementation of `opencv_test.py` and code some small piece of Python code that subscribes to the simulated cameras of your LIMO robot and, e.g., masks out any green stuff in the image.
- (optional) Also publish the result from the above operation.