
Computer Vision Landing Page #530

Closed
hamishwillee opened this issue Jun 3, 2018 · 20 comments

@hamishwillee
Collaborator

An easy-to-discover landing page for all things computer vision. The expectation is that you can go to the dev guide and have everything laid out about all the components that can be leveraged.

This should also be linked from the user guide as a concept.

Link or move docs into the Developer guide.

Other resources:

@hamishwillee
Collaborator Author

What support do we have for computer vision and obstacle avoidance in the PX4/Dronecode platform? I am after enough info so that someone who wanted to add obstacle avoidance (for example) to their drone could understand:
- What features do we support?
- What hardware is needed?
- How is the hardware attached?
- What software is required, and how is it set up?
- How do the solutions work?

What docs do we have on these topics?

Docs on VIO

Docs on obstacle avoidance:

  • PX4/avoidance - slightly confusing (at least on a scan), primarily covering simulation.
  1. What other docs on VIO/Obstacle avoidance do we have?

What is Computer Vision for?

My understanding is that there are two main applications for computer vision:

  • Get local position and pose from vision, a.k.a. Visual Inertial Odometry (VIO).
    • The main use case is navigation where GPS doesn't work - ie indoors.
  • Obstacle avoidance?
  1. Other than VIO/obstacle avoidance, what is computer vision useful for?

How does VIO work on PX4? (Summary)

What I think happens is that VIO requires an external system that supplies position and pose information to PX4. PX4 can then be set up to use this information by telling the estimator to fuse the information from the external source.

  1. Is that correct? If not, what am I missing?
  2. There seems to be overlap between ATT_POS_MOCAP and VISION_POSITION_ESTIMATE. What is the "difference"/when is one used and not the other? Which ones does PX4 use?
  3. OPTICAL_FLOW provides altitude and position info - is this also fused with the other information?
  4. I assume that the information from the messages will be fused irrespective of mode (ie you don't have to be running in offboard mode). Is that correct?
  5. If not running in offboard mode are there any constraints/requirements for the external system for supplying the information (ie data rates etc?)
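For concreteness, the "external system supplies position and pose" flow might look like the minimal pymavlink sketch below. This is an illustrative sketch only: the serial port, rate, and pose values are assumptions, and whether VISION_POSITION_ESTIMATE is the right message to send is exactly question 2 above.

```python
# Sketch: an external VIO system streaming pose estimates to PX4.
# Assumptions: pymavlink installed, FCU on /dev/ttyUSB0, poses in the
# NED frame and radians. A real system would stream at 30 Hz or more.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection("/dev/ttyUSB0", baud=921600)
master.wait_heartbeat()  # wait until the FCU is talking to us

while True:
    x, y, z = 1.0, 2.0, -1.5           # made-up local position (m, NED)
    roll, pitch, yaw = 0.0, 0.0, 0.3   # made-up attitude (rad)
    master.mav.vision_position_estimate_send(
        int(time.time() * 1e6),        # usec: timestamp of the estimate
        x, y, z, roll, pitch, yaw)
    time.sleep(1.0 / 30)               # ~30 Hz
```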

My further understanding is that the external source of the messages can be "anything" - ie a black box. However the supported/documented mechanism is:

  • a stereo camera of some kind connected to a companion computer.
  • the companion computer running ROS, along with a special ROS node (mocap_optitrack ?) to send the position information in the right format to the flight controller.
  1. Is that last point correct? As in 4 it isn't clear which message you would send if you wanted to write your own mavlink service for this.
  2. My understanding is that there is no PX4-only VIO integration - ie you can't connect a stereo camera to PX4 port and from then on have a reliable position/pose estimate. Is that correct?

How does Obstacle avoidance work on PX4?

My guess is that it works much the same way as VIO - there is some stream of messages that you can send to the vehicle to tell it that it needs to move in a particular way irrespective of current navigation mode.

  1. If this is correct, is there any documentation about the protocol?

Hardware and Software required.

  • What hardware do we support for VIO/Obstacle avoidance.
    • cameras, flight controllers, companions. By "support" I mean "tested" (known/shown to work) and "expected to work" (a general class that should work).
    • Can this stack integrate with Dronecode Camera Manager? (and should it)?

@hamishwillee
Collaborator Author

@LorenzMeier @baumanta , @vilhjalmur89, @mrivi , @JonasVautherin I was wondering if you could help me understand our computer vision story so I can improve the docs/entry points on the user and devguide.

There are some documents already, but they all assume that you understand the architecture already. I want to assume a user who knows nothing and wants to be able to understand the integration points and what they need - how it works, hardware, software ...

All my questions here: #530 (comment)

If you can't answer, can you please point me to others who might be able to help?

@lbegani
Contributor

lbegani commented Jul 3, 2018

@hamishwillee Would it make sense to draw a big picture with all the critical components of computer vision?

  1. Sensors (Monocular, Stereo, IMU, Mag, LIDAR etc)
  2. Sensor Drivers (ROS Nodes etc)
  3. Visual Algorithms (Optical Flow, VIO, Obstacle Avoidance etc)
  4. Messaging Channels (MAVROS, mavlink-router, UART etc)
  5. MAVLink Messages (V_P_E, O_F_R, O_D etc)
  6. PX4 (EKF, LPE etc)

Individual pages can be dedicated to each algorithm. Even if it gets repetitive, it's better to have a separate page for each algo (unlike https://dev.px4.io/en/ros/external_position_estimation.html):

  1. Explain the Algorithm
  2. Target platform
  3. Setup steps

Lastly, a page on "Deep Learning for Computer Vision" can be added. If we do not have working examples, it can serve as a placeholder for the future.

@hamishwillee
Collaborator Author

@lbegani Thanks very much for responding.

A diagram would probably help, but I won't be able to comment more on structure until someone answers my questions above.

My gut feeling though is that right now I don't want to explain every possible component of the system and have a breakdown of the possible paths. I want to explain what we have now, and how you can get up and running. Can you take a shot at answering any of my questions?

@lbegani
Contributor

lbegani commented Jul 3, 2018

My shot - I might be wrong, and you would still need comments from experts:

  1. What other docs on VIO/Obstacle avoidance do we have?

https://dev.px4.io/en/tutorials/optical_flow.html
https://docs.px4.io/en/flight_controller/intel_aero.html

  1. Other than VIO/obstacle avoidance, what is computer vision useful for?

Optical flow? It's not part of VIO.

  1. Is that correct? If not, what am I missing?

There is an ODOMETRY message declared in MAVLink, but it is yet to be handled in PX4.

  1. There seems to be overlap between ATT_POS_MOCAP and VISION_POSITION_ESTIMATE. What is the "difference"/when is one used and not the other? Which ones does PX4 use?

I think the system will be set up to output only one of them.

  1. OPTICAL_FLOW provides altitude and position info - is this also fused with the other information?

OPTICAL_FLOW provides displacement info. Distance sensor provides altitude info.

  1. I assume that the information from the messages will be fused irrespective of mode (ie you don't have to be running in offboard mode). Is that correct?

Correct.

  1. If not running in offboard mode are there any constraints/requirements for the external system for supplying the information (ie data rates etc?)

Not sure if there are any constraints other than correct values and low latency.

  1. Is that last point correct? As in 4 it isn't clear which message you would send if you wanted to write your own mavlink service for this.

I think the algo running on the companion board will take input from sensors and output the algo-specific MAVLink message. Can we have multiple algorithms running simultaneously, giving position as output? I don't think so.

  1. My understanding is that there is no PX4-only VIO integration - ie you can't connect a stereo camera to PX4 port and from then on have a reliable position/pose estimate. Is that correct?

Correct. PX4 cannot take visual data as input and give pose estimation output.

  1. If this is correct, is there any documentation about the protocol?

Not sure.

@LorenzMeier
Member

I think the overall focus should be on what we have robustly working today and document that well so people can reliably reproduce our results.

@mrivi
Contributor

mrivi commented Jul 3, 2018

@hamishwillee For obstacle avoidance, right now the only supported communication interface is the offboard one. So the drone needs to be in Offboard mode, and the obstacle avoidance module sends its setpoints via the SET_POSITION_TARGET_LOCAL_NED MAVLink message. The translation between the ROS messages and the MAVLink message is done by the MAVROS local position plugin.
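(For illustration, a minimal rospy sketch of this offboard path; the topic name is the standard MAVROS one, while the rate and setpoint values are made-up assumptions:)

```python
#!/usr/bin/env python
# Sketch: drive the vehicle in Offboard mode through MAVROS.
# Publishing PoseStamped on /mavros/setpoint_position/local makes the
# MAVROS local position plugin emit SET_POSITION_TARGET_LOCAL_NED.
# Assumes ROS1 + MAVROS running and the vehicle armed in Offboard mode.
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("offboard_setpoint_sketch")
pub = rospy.Publisher("/mavros/setpoint_position/local",
                      PoseStamped, queue_size=10)

rate = rospy.Rate(20)  # offboard setpoints must be streamed continuously
sp = PoseStamped()
sp.pose.position.z = 2.0    # ENU coordinates; MAVROS converts to NED
sp.pose.orientation.w = 1.0

while not rospy.is_shutdown():
    sp.header.stamp = rospy.Time.now()
    pub.publish(sp)
    rate.sleep()
```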
As soon as this Firmware PR (PX4/PX4-Autopilot#9270) gets merged there will be a new interface alongside offboard.
The FCU can send goals to the obstacle avoidance through the TRAJECTORY MAVLink message and the trajectory MAVROS plugin, and the obstacle avoidance sends back the collision-free waypoints through the same MAVLink message and MAVROS plugin.
The TRAJECTORY MAVLink message can describe both waypoints and trajectories. Currently the Firmware supports only waypoints. The message can contain up to 5 waypoints, but currently they aren't all used. Each waypoint is described by position, velocity, acceleration, yaw and yaw_speed (not all the fields need to be filled).

Message from FCU to obstacle avoidance (Firmware uORB topic vehicle_trajectory_waypoint_desired)

  • point1: current vehicle position, desired velocity
  • point2: current triplet if in auto mode, otherwise point2 is not used

Message from avoidance to FCU (Firmware uORB topic vehicle_trajectory_waypoint)

  • point1: position waypoint, velocity waypoint, yaw
    Both of the avoidance algorithms that PX4 has (local and global planner) send only a position waypoint and yaw, which is tracked by the position control.

This interface can theoretically be used in any mode. However, so far the above-mentioned PR restricts its usage to Mission and RTL. To enable the interface, the parameter MPC_OBS_AVOID needs to be set to true in QGC.

I guess my description is quite messy, let me know where I need to clarify.

@hamishwillee
Collaborator Author

@mrivi Thanks very much - that helps a hell of a lot - especially with the linked design docs. I'm sure I'll have a lot of questions. Here are just a few:

For obstacle avoidance, right now the only supported communication interface is the offboard one. So the drone needs to be in Offboard mode, and the obstacle avoidance module sends its setpoints via the SET_POSITION_TARGET_LOCAL_NED MAVLink message. The translation between the ROS messages and the MAVLink message is done by the MAVROS local position plugin.

  1. When is the new solution likely to deliver?
  2. Is the current offboard mode solution really "alongside", or will there be a move to make it work with the same interface as the other modes? If so we have to document this.
  3. There really isn't enough here to completely understand how the offboard solution works.
    • What is defining the original path in this case, and how does that information get to the obstacle avoidance system? Or to put it another way, say I write a Dronecode SDK app to drive my vehicle in offboard mode, how does it integrate with the obstacle avoidance system?
    • I envisage that ROS gets the vehicle pose and movement, and sends this along with the planned path to the obstacle avoidance module. The obstacle avoidance module works out anchor points to avoid obstacles and sends them to the trajectory library; the trajectory library sends out SET_POSITION_TARGET_LOCAL_NED for the new path.
  4. Where it says "The translation between the ROS messages and the MAVLink message is done by the MAVROS local position plugin":
    • At what point do you need to do this translation?

For both the new solution and the old solution:

The obstacle avoidance module obviously needs to have a picture of obstacles.

  • How does it get that picture?
  • What hardware do you need for that?
  • Where is this library?
  • How is it installed?
  • How is it "glued" in, to get the trajectory messages from PX4 and send anchor points to the trajectory library?
  • I think the glue in all this might be ROS and various ROS modules - ie the obstacle avoidance module is a ROS node of some kind?

At the moment the interface appears to be over MAVLink using the TRAJECTORY messages, with ROS then converting these into something else. You have told me the internal uORB messages that PX4 uses - I assume that the plan in future is that we might use RTPS/ROS2 to directly share these with ROS?

Sorry, my questions in response are a bit random too. Essentially I'm trying to dig into the detail and work out how someone would set this up themselves from end to end, using the solution right now, and as delivered by PX4/PX4-Autopilot#9270.

@hamishwillee
Collaborator Author

PS Thanks @lbegani, I think I'll come back to the VIO bit later.

@mrivi
Contributor

mrivi commented Jul 4, 2018

  1. Unfortunately, no clue. I wouldn't expect it any time soon at the speed things are evolving.
  2. For now there is no plan to change offboard.
  3. a) In offboard there are two ways of setting the goal: setting the parameters goal_x_param, goal_y_param, goal_z_param in the launch script of the local/global planner, or setting goal_z_param in the launch script and then setting the x, y in Rviz by clicking where you want to go in the environment representation (for the local planner this step is described in the README).
    b) Yes, the avoidance gets the drone position through the mavros topic /mavros/local_position/pose and sends the waypoints through /mavros/setpoint_position/local (see the sketch after this list). The mavros node maps the geometry_msgs::PoseStamped ROS message that has been sent on the topic /mavros/setpoint_position/local to the MAVLink message SET_POSITION_TARGET_LOCAL_NED.
  4. The user doesn't have to do the translation. They have to launch the MAVROS node and use the defined topics to send messages to and from this node. I don't think the mavros plugins are documented anywhere; a user has to go through the code to understand how the messages are mapped.
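(A minimal sketch of point 3 b), assuming ROS1 + MAVROS; the callback body is illustrative only:)

```python
#!/usr/bin/env python
# Sketch: how an avoidance node receives the drone pose from MAVROS.
# /mavros/local_position/pose carries geometry_msgs/PoseStamped in ENU.
import rospy
from geometry_msgs.msg import PoseStamped

def pose_cb(msg):
    # A real avoidance node would feed this pose into its planner state.
    rospy.loginfo("pose: %.2f %.2f %.2f",
                  msg.pose.position.x,
                  msg.pose.position.y,
                  msg.pose.position.z)

rospy.init_node("avoidance_pose_listener")
rospy.Subscriber("/mavros/local_position/pose", PoseStamped, pose_cb)
rospy.spin()
```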

The input to both obstacle avoidance algorithms is a point cloud. Currently we are testing with the Intel RealSense. Intel provides a ROS node to access their librealsense API, so the planner only needs to listen to the provided topic.

Yes, the obstacle avoidance is a ROS node.

Flow of information with the new interface (sketched in code after this list):
PX4 Firmware: the drone's current state and the desired goal are published as the uORB message vehicle_trajectory_waypoint_desired
-> the Firmware's mavlink module maps the uORB message to the MAVLink TRAJECTORY message
-> MAVROS: the trajectory plugin converts it to the ROS message mavros_msgs::Trajectory
-> the avoidance ROS node subscribes to /mavros/trajectory/desired
-> the avoidance plans a collision-free path
-> the avoidance publishes messages of type mavros_msgs::Trajectory on /mavros/trajectory/generated, or messages of type nav_msgs::Path on /mavros/trajectory/path
-> MAVROS: the trajectory plugin transforms the mavros_msgs::Trajectory or nav_msgs::Path into the MAVLink TRAJECTORY message
-> PX4 Firmware: the mavlink receiver maps the TRAJECTORY message to vehicle_trajectory_waypoint
-> the position controller tracks the waypoints
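(A skeleton of this round trip, assuming ROS1 + MAVROS with the trajectory plugin; the field names follow the mavros_msgs/Trajectory definition as I recall it, and the pass-through "planning" is a placeholder a real planner would replace:)

```python
#!/usr/bin/env python
# Skeleton avoidance node on the new interface: receive the desired
# trajectory from the FCU, "plan", and send waypoints back.
import rospy
from mavros_msgs.msg import Trajectory

def desired_cb(msg):
    # Placeholder planning step: echo the desired waypoint back.
    # A real planner would replace point_1 with a collision-free setpoint.
    out = Trajectory()
    out.header.stamp = rospy.Time.now()
    out.type = msg.type              # waypoint representation
    out.point_1 = msg.point_1
    out.point_valid = msg.point_valid
    pub.publish(out)

rospy.init_node("avoidance_skeleton")
pub = rospy.Publisher("/mavros/trajectory/generated", Trajectory,
                      queue_size=1)
rospy.Subscriber("/mavros/trajectory/desired", Trajectory, desired_cb)
rospy.spin()
```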

There is also an OBSTACLE_DISTANCE MAVLink message that sends information on the distance of obstacles 360° around the drone, with a maximum resolution of 5° on the azimuth angle. Elevation is all squished into one bucket. This message is not used in the Firmware yet; the plan is to use it for a basic sense-and-stop feature in the Firmware.
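(For reference, a companion could feed such data with pymavlink; a minimal sketch, where the serial port and distances are made-up assumptions:)

```python
# Sketch: send OBSTACLE_DISTANCE from a companion computer.
# 72 bins x 5 deg = 360 deg coverage; distances are in cm,
# with 65535 meaning "no obstacle in this bin".
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection("/dev/ttyUSB0", baud=921600)
master.wait_heartbeat()

distances = [65535] * 72
distances[0] = 250  # e.g. an obstacle 2.5 m straight ahead

master.mav.obstacle_distance_send(
    int(time.time() * 1e6),                     # time_usec
    mavutil.mavlink.MAV_DISTANCE_SENSOR_LASER,  # sensor_type
    distances,                                  # distances[72], cm
    5,                                          # increment, deg
    20,                                         # min_distance, cm
    2500)                                       # max_distance, cm
```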

@hamishwillee Hope this clarifies some things. Feel free to keep asking questions :)

@hamishwillee
Collaborator Author

@hamishwillee Hope this clarifies some things. Feel free to keep asking questions :)

Thanks @mrivi - it does, and I will [evil snigger]. I'm mostly committed to MAVLink stuff and general external interfaces now, so might not get back to this until Monday.

Just a few for now. I think (on a scan) the above is enough to understand how things work, but not to set up a system to do this. Does the team have turnkey instructions for your current setup, or can you help create them?

  1. Hardware - vehicle and required peripherals (I am guessing Intel Aero with RealSense camera?)
  2. Software - standard ROS installation on the companion computer. How do we get the nodes, and how do we start them up?

Essentially this page was about explaining what we offer, with plans to link off to other docs for key information. It makes sense for the team doing the work to document their setup for that linked page. I can certainly help with review and structure once the information is created. Thoughts?

@mrivi
Contributor

mrivi commented Jul 6, 2018

@hamishwillee
Yes, we're currently testing on the Aero with RealSense.
OK, I'll discuss with Tanja how to start documenting the HW setup.

@mrivi
Contributor

mrivi commented Jul 6, 2018

@baumanta has documented the Aero setup here https://docs.px4.io/en/flight_controller/intel_aero.html

@hamishwillee
Collaborator Author

@mrivi Thanks for that. I was aware of that doc, but did not remember that the setup covered this aspect. I'll try to get my head around all of this during the week and create an introductory doc you can review.

@mrivi
Contributor

mrivi commented Oct 31, 2018

Hi @hamishwillee , I would like to help bring the obstacle avoidance interface into the documentation. How can I help?

@hamishwillee
Collaborator Author

Hi @mrivi ,

Apologies. This fell off my priority list. Let's start by clarifying how the architecture has changed/how it is now. I see some churn :-)

Previously I believe you said:

  • Offboard mode - a system (ROS) sends movement commands to the obstacle avoidance module, which works out good paths to achieve the desired movement and sends these paths as SET_POSITION_TARGET_LOCAL_NED to the vehicle.
  • For other modes ... the firmware sends a desired path using TRAJECTORY to a companion; a ROS node processes the trajectory against a vision-generated map and sends the vehicle the actual avoidance path to take in another TRAJECTORY. The firmware makes the move in some set of modes.
  • OBSTACLE_DISTANCE exists but is not used yet.

But I have seen a bit of churn on GitHub, so I suspect that has changed:

  • new messages TRAJECTORY_REPRESENTATION_WAYPOINTS and TRAJECTORY_REPRESENTATION_BEZIER
  • Some discussion of having the avoidance module in the firmware?

So basically we need to know how things work now, and further:

  • What modes the system works in
  • Whether there are generic instructions for hardware setup, or whether Intel is still the only platform
  • How stable all this is.

How we proceed depends on the answers to the above. But assuming things are as previously described, I would actually have started by documenting the MAVLink protocol for object avoidance - ie "generically", similar to https://mavlink.io/en/services/mission.html

@baumanta

baumanta commented Nov 1, 2018

Hi @hamishwillee,
I'll try to answer to the best of my knowledge:

  • I do not know much about the trajectory representations; that is more @mrivi's area of expertise. But the general use has stayed the same. All you said about offboard and the TRAJECTORY message is still true.
  • There is an open PR for a collision avoidance library which lives in the Firmware but is completely independent of the avoidance on the companion, PR #10785. This library uses LaserScan data, which can come either from a laser scanner or from the obstacle avoidance on the companion, which is also able to provide it. This collision avoidance is not exactly obstacle avoidance: it does not aim at finding a path around obstacles, but rather brakes if something is too close. It is usable right now in Manual Position control only.

@hamishwillee
Collaborator Author

@baumanta Thank you. I'd better wait for @mrivi, because the TRAJECTORY message no longer exists, which implies that lots of other things might have changed.

We should document the new collision avoidance behaviour too. I will discuss that on the PR.

@mrivi
Contributor

mrivi commented Nov 2, 2018

@hamishwillee MAVLink TRAJECTORY had two different types, bezier or waypoint. We have restructured it into TRAJECTORY_REPRESENTATION_WAYPOINTS and TRAJECTORY_REPRESENTATION_BEZIER. The fields are the same as in the old TRAJECTORY message.
I am preparing a description of what is implemented. I'll post it here as soon as it's ready.

@mrivi
Contributor

mrivi commented Nov 5, 2018

@hamishwillee

Mission Mode - Obstacle Avoidance Interface

When a mission is uploaded from QGC and the parameter MPC_OBS_AVOID is set to True, the Firmware fills the uORB message vehicle_trajectory_waypoint_desired in the following way.

Array waypoints:

index 0:

  • position: x-y-z NED vehicle local position
  • velocity: x-y-z NED velocity setpoint generated by the active FlightTask
  • acceleration: vehicle acceleration
  • yaw: vehicle yaw
  • yaw_speed: NaN

index 1:

  • position: x-y-z NED local coordinates of the current mission waypoint
  • velocity: NaN
  • acceleration: NaN
  • yaw: yaw setpoint
  • yaw_speed: yaw speed setpoint

index 2:

  • position: x-y-z NED local coordinates of the next mission waypoint
  • velocity: NaN
  • acceleration: NaN
  • yaw: yaw setpoint
  • yaw_speed: yaw speed setpoint

The remaining indices are filled with NaN.

The message vehicle_trajectory_waypoint_desired is mapped into the MAVLink message TRAJECTORY_REPRESENTATION_WAYPOINTS. The messages are sent at 5 Hz.
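(For reference, the stream can be watched from the companion side with pymavlink; a minimal sketch, where the connection string is an assumption:)

```python
# Sketch: observe the TRAJECTORY_REPRESENTATION_WAYPOINTS stream (~5 Hz).
from pymavlink import mavutil

master = mavutil.mavlink_connection("udpin:0.0.0.0:14540")
while True:
    msg = master.recv_match(type="TRAJECTORY_REPRESENTATION_WAYPOINTS",
                            blocking=True)
    # valid_points says how many of the 5 array slots are filled;
    # pos_x/pos_y/pos_z are NED coordinates, unused slots are NaN.
    print(msg.valid_points, msg.pos_x[0], msg.pos_y[0], msg.pos_z[0])
```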

MAVROS translates the MAVLink message into a ROS message called mavros_msgs::Trajectory and does the conversion from NED to ENU frames. Messages are published on the ROS topic /mavros/trajectory/desired.
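(For the position part, the NED-to-ENU conversion is just an axis swap; a one-function sketch:)

```python
def ned_to_enu(x_ned, y_ned, z_ned):
    # ENU east = NED y, ENU north = NED x, ENU up = -NED z (down).
    # The same swap also works in the ENU-to-NED direction.
    return (y_ned, x_ned, -z_ned)

assert ned_to_enu(1.0, 2.0, -3.0) == (2.0, 1.0, 3.0)
```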

On the avoidance side, the algorithm plans a path to the waypoint.

The position or velocity setpoints generated by the obstacle avoidance to reach the waypoint collision-free can be sent to the Firmware with two ROS messages:
mavros_msgs::Trajectory (both velocity and position setpoints) on the ROS topic /mavros/trajectory/generated
nav_msgs::Path (only position setpoints) on the ROS topic /mavros/trajectory/path

MAVROS converts the setpoints from ENU to NED frame and translates the ROS messages into the MAVLink message TRAJECTORY_REPRESENTATION_WAYPOINTS.

On the Firmware side, incoming TRAJECTORY_REPRESENTATION_WAYPOINTS messages are translated into uORB vehicle_trajectory_waypoint messages. The array waypoints contains all NaN except for index 0:

  • position: position setpoint
  • velocity: velocity setpoint
  • acceleration: NaN (acceleration setpoints are not supported by the Firmware)
  • yaw: yaw setpoint
  • yaw_speed: yaw speed setpoint

The setpoints are tracked by the multicopter position controller.

Mission Progression
The mission logic is handled by the navigator in the same way as for a flight without obstacle avoidance, apart from the following differences:

  • In normal missions the vehicle has to reach a waypoint with a certain heading (the vehicle is supposed to reach the waypoint in a straight line from the previous waypoint, plus a small error). When obstacle avoidance is active, this constraint cannot be fulfilled, because the obstacle avoidance algorithm has full control of the vehicle heading so that the vehicle always moves within the current field of view. Therefore a waypoint is considered reached when the vehicle is within the acceptance radius, regardless of its heading.
  • The navigator updates the triplets when the vehicle has reached the acceptance radius of each waypoint. If a waypoint is inside an obstacle, it can happen that it's never reached and the mission gets stuck. If the projection of the vehicle onto the line from the previous to the current waypoint has passed the current waypoint, the acceptance radius is enlarged so that the current waypoint is set as reached (see the sketch after this list).
  • The same concept applies to the altitude acceptance of a waypoint. If the vehicle is either above or below a waypoint (within the x-y acceptance radius), the altitude acceptance is modified so that the mission progresses.
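(The projection test in the second bullet could be sketched as follows; the function and names are illustrative, not the Firmware's actual implementation:)

```python
import numpy as np

def passed_waypoint(vehicle_pos, prev_wp, curr_wp):
    # Project the vehicle position onto the segment prev_wp -> curr_wp.
    # If the projection lies beyond curr_wp, the waypoint counts as
    # passed and the acceptance radius can be enlarged.
    seg = np.asarray(curr_wp, float) - np.asarray(prev_wp, float)
    rel = np.asarray(vehicle_pos, float) - np.asarray(prev_wp, float)
    t = np.dot(rel, seg) / np.dot(seg, seg)  # fraction along the segment
    return t >= 1.0

# Vehicle slightly beyond the current waypoint along the track:
print(passed_waypoint([0, 10.5, -5], [0, 0, -5], [0, 10, -5]))  # True
```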
