Computer Vision Landing Page #530
What support do we have for computer vision and obstacle avoidance in the PX4/Dronecode platform? I'm after enough info that someone who wanted to add obstacle avoidance (for example) to their drone could understand what is involved. What docs do we have on these topics?
Docs on VIO:
Docs on obstacle avoidance:
What is Computer Vision for?
My understanding is that there are two main applications for computer vision: visual-inertial odometry (VIO) and obstacle avoidance.
How does VIO work on PX4? (Summary)
What I think happens is that VIO requires an external system that supplies position and pose information to PX4. PX4 can then be set up to use this information by telling the estimator to fuse the information from the external source.
My further understanding is that the external source of the messages can be "anything", i.e. a black box. However, the supported/documented mechanism is:
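For illustration only, here is a minimal sketch of what such an external source could look like, assuming pymavlink and the MAVLink VISION_POSITION_ESTIMATE message (the connection string and the get_pose_from_vio() stub are hypothetical; this is not necessarily the supported mechanism referred to above):

```python
# Illustrative sketch: stream an externally computed pose to PX4 over MAVLink.
import time
from pymavlink import mavutil

def get_pose_from_vio():
    # Hypothetical stand-in for a real VIO pipeline; returns a fixed pose:
    # x, y, z in metres (NED frame), roll/pitch/yaw in radians.
    return 0.0, 0.0, -1.0, 0.0, 0.0, 0.0

# Connection string is an assumption - adjust for your telemetry setup.
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

while True:
    x, y, z, roll, pitch, yaw = get_pose_from_vio()
    master.mav.vision_position_estimate_send(
        int(time.time() * 1e6),  # timestamp in microseconds
        x, y, z, roll, pitch, yaw)
    time.sleep(0.03)  # ~30 Hz; the estimator wants a low-latency stream
```

The key design point is that PX4 never sees the camera data itself, only the resulting pose stream.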
How does Obstacle avoidance work on PX4?
My guess is that it works much the same way as VIO: there is some stream of messages that you can send to the vehicle to tell it that it needs to move in a particular way, irrespective of the current navigation mode.
Hardware and Software required.
@LorenzMeier, @baumanta, @vilhjalmur89, @mrivi, @JonasVautherin I was wondering if you could help me understand our computer vision story so I can improve the docs/entry points in the user guide and dev guide. There are some documents already, but they all assume that you understand the architecture already. I want to assume a user who knows nothing and wants to be able to understand the integration points and what they need: how it works, hardware, software... All my questions are here: #530 (comment). If you can't answer, can you please point me to others who might be able to help?
@hamishwillee Would it make sense to draw a big picture with all the critical components of computer vision?
Individual pages can be dedicated to each algorithm. Even if it gets repetitive, it's better to have separate pages for each algo (unlike https://dev.px4.io/en/ros/external_position_estimation.html).
Lastly, a page on "Deep Learning for Computer Vision" could be added. Even if we do not have working examples, it can serve as a placeholder for the future.
@lbegani Thanks very much for responding. A diagram would probably help, but I won't be able to comment more on structure until someone answers my questions above. My gut feeling, though, is that right now I don't want to explain every possible component of the system and give a breakdown of all the possible paths. I want to explain what we have now, and how you can get up and running. Can you take a shot at answering any of my questions?
My shot below. I might be wrong; you would still need comments from the experts.
https://dev.px4.io/en/tutorials/optical_flow.html
Optical flow? It's not a part of VIO.
There is an ODOMETRY message declared in MAVLink, but it is not yet handled in PX4 (see the sketch after this set of answers).
I think the system will be set up to output only one of them.
OPTICAL_FLOW provides displacement info. Distance sensor provides altitude info.
Correct.
Not sure if there are any constraints other than correct values and low latency.
I think the algo running on the companion board will take input from the sensors and output that algo-specific MAVLink message. Can we have multiple algorithms running simultaneously, each giving position as output? I don't think so.
Correct. PX4 cannot take visual data as input and give pose estimation output.
Not sure.
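To make that ODOMETRY point concrete, a hedged pymavlink sketch of what publishing the message could look like (the endpoint, frames, and values are placeholder assumptions; as noted above, PX4 did not yet handle this message at the time):

```python
# Illustrative sketch: publish a MAVLink ODOMETRY message with pymavlink.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # assumed endpoint
master.wait_heartbeat()

master.mav.odometry_send(
    int(time.time() * 1e6),               # time_usec
    mavutil.mavlink.MAV_FRAME_LOCAL_FRD,  # frame of the pose estimate
    mavutil.mavlink.MAV_FRAME_BODY_FRD,   # frame of the velocity estimate
    0.0, 0.0, -1.0,                       # x, y, z position (m)
    [1.0, 0.0, 0.0, 0.0],                 # attitude quaternion (w, x, y, z)
    0.0, 0.0, 0.0,                        # vx, vy, vz (m/s)
    0.0, 0.0, 0.0,                        # roll/pitch/yaw rates (rad/s)
    [float('nan')] * 21,                  # pose covariance (NaN = unknown)
    [float('nan')] * 21)                  # velocity covariance (NaN = unknown)
```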
I think the overall focus should be on what we have robustly working today, and on documenting that well so people can reliably reproduce our results.
@hamishwillee For obstacle avoidance, right now the only supported communication interface is the offboard one. So the drone needs to be in Offboard mode, and the setpoints are sent from the obstacle avoidance module.
Message from FCU to obstacle avoidance: Firmware uORB topic vehicle_trajectory_waypoint_desired.
Message from avoidance to FCU: Firmware uORB topic vehicle_trajectory_waypoint.
This interface can theoretically be used in any mode. However, so far the above-mentioned PR restricts the usage to mission and RTL. To enable the interface, the parameter MPC_OBS_AVOID has to be set. I guess my description is quite messy; let me know where I need to clarify.
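For illustration, enabling that parameter programmatically could look like the sketch below, assuming pymavlink and an assumed connection endpoint (the QGroundControl parameter editor works just as well):

```python
# Illustrative sketch: enable the avoidance interface by setting MPC_OBS_AVOID.
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # assumed endpoint
master.wait_heartbeat()
master.mav.param_set_send(
    master.target_system, master.target_component,
    b'MPC_OBS_AVOID', 1,  # 1 = enabled
    mavutil.mavlink.MAV_PARAM_TYPE_INT32)
```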
@mrivi Thanks very much - that helps a hell of a lot - especially with the linked design docs. I'm sure I'll have a lot of questions. Here are just a few:
The obstacle avoidance module obviously needs to have a picture of the obstacles. Where does this come from, and is the avoidance software a ROS node?
At the moment the interface appears to be over MAVLink using the TRAJECTORY messages, with ROS then converting these into something else. You have told me the internal uORB messages that PX4 uses; I assume that the plan is for us to eventually use RTPS/ROS2 to share these directly with ROS?
Sorry, my questions in response are a bit random too. Essentially I'm trying to dig into the detail and work out how someone would set this up themselves from end to end, using the solution right now, as delivered by PX4/PX4-Autopilot#9270.
PS: Thanks @lbegani, I think I'll come back to the VIO bit later.
The input to both obstacle avoidance algorithms is a point cloud. Currently we are testing with the Intel RealSense. Intel provides a ROS node to access their librealsense API, so the planner needs only to listen to the provided topic. Yes, the obstacle avoidance is a ROS node.
Flow of information with the new interface:
There is also a
@hamishwillee Hope this clarifies some things. Feel free to keep asking questions :)
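As a sketch of that input side, a minimal ROS 1 (rospy) node listening to the camera point cloud might look like this; the topic name /camera/depth/points is an assumption and varies between RealSense driver versions:

```python
# Illustrative sketch: consume the camera point cloud that feeds the planner.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def cloud_cb(cloud):
    # A real planner would build an obstacle representation (e.g. a local map
    # or histogram) from these points; here we just log a few of them.
    for i, (x, y, z) in enumerate(
            pc2.read_points(cloud, field_names=('x', 'y', 'z'), skip_nans=True)):
        if i >= 5:
            break
        rospy.logdebug('obstacle point: %.2f %.2f %.2f', x, y, z)

rospy.init_node('cloud_listener_sketch')
rospy.Subscriber('/camera/depth/points', PointCloud2, cloud_cb)
rospy.spin()
```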
Thanks @mrivi - it does, and I will [evil snigger]. I'm mostly committed to MAVLink stuff and general external interfaces now, so I might not get back to this until Monday. Just a few questions now. I think (on scan) the above is enough to understand how things work, but not to set up a system that does this. Does the team have turnkey instructions for your current setup, or can you help create them?
Essentially this page was about explaining what we offer, with plans to link off to other docs for key information. It makes sense for the team doing the work to document their setup for that linked page. I can certainly help with review and structure once the information is created. Thoughts?
@hamishwillee @baumanta has documented the Aero setup here: https://docs.px4.io/en/flight_controller/intel_aero.html
@mrivi Thanks for that. I was aware of that doc, but did not remember that the setup covered this aspect. I'll try to get my head around all of this during the week and create an introductory doc you can review.
Hi @hamishwillee, I would like to help bring the obstacle avoidance interface into the documentation. How can I help?
Hi @mrivi, apologies. This fell off my priority list. Let's start by clarifying how the architecture has changed/how it is now. I see some churn :-) Previously I believe you said:
But I have seen a bit of churn on GitHub, so I suspect that has changed.
So basically we need to know how things work now, and further:
How we proceed depends on the answers to the above. But assuming things were as before, I would actually have started by documenting the MAVLink protocol for obstacle avoidance - i.e. "generically", similar to https://mavlink.io/en/services/mission.html.
Hi @hamishwillee,
@hamishwillee mavlink
Mission Mode - Obstacle Avoidance Interface
When a mission is uploaded from QGC and the parameter MPC_OBS_AVOID is set to true, the Firmware fills the waypoint array in the uORB message vehicle_trajectory_waypoint_desired:
Index 1:
Index 2:
The remaining indices are filled with NaN.
The message vehicle_trajectory_waypoint_desired is sent to the companion computer as a MAVLink TRAJECTORY message. MAVROS translates the MAVLink message into a ROS message called mavros_msgs::Trajectory and does the conversion from NED to ENU frames. Messages are published on the corresponding ROS topic.
On the avoidance side, the algorithm plans a path to the waypoint. The position or velocity setpoints generated by the obstacle avoidance to get collision-free to the waypoint can be sent to the Firmware with two ROS messages. MAVROS converts the setpoints from ENU to NED frames and translates the ROS messages into a MAVLink TRAJECTORY message. On the Firmware side, the incoming TRAJECTORY message is converted back into the uORB topic vehicle_trajectory_waypoint, and the setpoints are tracked by the multicopter position controller.
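To tie the loop together, here is a hedged ROS 1 sketch of the avoidance side, assuming the MAVROS trajectory topics are named /mavros/trajectory/desired and /mavros/trajectory/generated (verify against your MAVROS version). A real planner would substitute collision-free setpoints; this skeleton just echoes the desired waypoints back, so the vehicle would track the original mission:

```python
# Illustrative sketch of the avoidance-side loop described above.
import rospy
from mavros_msgs.msg import Trajectory

def desired_cb(msg):
    # A real planner would compute collision-free setpoints from the point
    # cloud here; this sketch returns the desired trajectory unchanged.
    msg.header.stamp = rospy.Time.now()
    pub.publish(msg)

rospy.init_node('avoidance_sketch')
pub = rospy.Publisher('/mavros/trajectory/generated', Trajectory, queue_size=1)
rospy.Subscriber('/mavros/trajectory/desired', Trajectory, desired_cb)
rospy.spin()
```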
Mission Progression
An easy-to-discover landing page for all things computer vision. The expectation is that you can go to the dev guide and have everything laid out about all the components that can be leveraged.
This should also be linked from the user guide as a concept.
Link or move the relevant docs into the Developer Guide.
Other resources: