
EKF for vision landing #132

Closed
kripper opened this issue Mar 19, 2023 · 10 comments

Comments

@kripper
Collaborator

kripper commented Mar 19, 2023

I'm using visual pose estimation of AprilTags for precision landing (https://github.com/kripper/vision-landing-2).

I see at least 3 problems to address:

  1. How to estimate the pose of the drone: a) use the visual pose estimation of the markers, b) use the drone's positioning system (which internally fuses all sensors and optical flow), c) do our own position estimation based on the instructed motion / virtual stick actions, or d) combine multiple options.

If the error of the positioning system is too big, maybe it makes sense to use only visual pose estimation?

  2. The visually estimated pose of the marker (=> drone pose) is never the current pose, because of the overall latency (the processed image was taken in the past). This is a big problem especially when you are rotating and translating at the same time, because "forward" and "right" depend heavily on the current yaw (calculating motion based on a previous yaw will move us in a completely wrong direction). This means we would have to apply corrections to the model using information from the past in order to use the model for the future (prediction). I believe an EKF provides this solution.

  3. Which frame model to use? If the positioning system is accurate, an a) absolute frame model would be best. Otherwise, if we use vision, maybe a b) frame relative to the target is more convenient?
    IMO absolute positioning is more convenient because it allows us to compare the positions of 3 or more objects (e.g. the drone + two other objects).

Most precision landing solutions just send the marker's pose to the flight controller. In our case, we will have to implement the motion control in Rosetta using virtual stick commands.
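
To make the yaw dependency concrete, here is a minimal sketch (Python, illustrative names only; RosettaDrone itself is Java) of rotating the target offset measured at the image timestamp into the current body frame before mapping it to virtual stick velocities. The yaw-history lookup, gain and limit are assumptions, not taken from any existing implementation:

```python
import math

# Hypothetical yaw log kept by the app: (timestamp, yaw_rad) samples,
# assumed to be appended continuously by the attitude callback.
yaw_history = []

def yaw_at(t):
    """Return the logged yaw closest to timestamp t (nearest-sample lookup)."""
    return min(yaw_history, key=lambda s: abs(s[0] - t))[1]

def target_offset_now(offset_at_image, t_image, t_now):
    """Rotate the (forward, right) target offset measured in the body frame at
    t_image into the body frame at t_now, using only the yaw change in between.
    Assumes x forward, y right, yaw positive about the body z (down) axis."""
    dyaw = yaw_at(t_now) - yaw_at(t_image)
    x, y = offset_at_image
    c, s = math.cos(dyaw), math.sin(dyaw)
    return (c * x + s * y, -s * x + c * y)

def virtual_stick_velocities(offset_now, gain=0.5, v_max=1.0):
    """Map the current body-frame offset to clamped forward/right velocities
    (a plain proportional controller; gain and limit are placeholders)."""
    vx = max(-v_max, min(v_max, gain * offset_now[0]))
    vy = max(-v_max, min(v_max, gain * offset_now[1]))
    return vx, vy
```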

@kripper
Collaborator Author

kripper commented Mar 20, 2023

Here I published a Vision Landing implementation using AprilTags:
https://github.com/kripper/vision-landing-2

I also published a MAVLink Camera Simulator for testing.

@chobitsfan

> What is your experience with the latency drift (pose estimation is never current, so whatever motion instruction you send will always have some error)?

I found that latency matters more when the drone is closer to the landing target. This is a trade-off: a high-resolution image improves the marker detection range but increases latency. I am thinking about a two-stage approach: use high-resolution images in the beginning, then switch to low resolution when we are closing in on the target.
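
A rough sketch of that two-stage idea; the switch distance and the resolutions are placeholders, not values from any specific camera:

```python
# Far away: high resolution for detection range.
# Close in: low resolution to cut end-to-end latency during the final approach.
HIGH_RES = (1920, 1080)
LOW_RES = (640, 480)
SWITCH_DISTANCE_M = 3.0

def capture_resolution(distance_to_target_m):
    if distance_to_target_m is None or distance_to_target_m > SWITCH_DISTANCE_M:
        return HIGH_RES
    return LOW_RES
```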

@kripper
Collaborator Author

kripper commented Mar 20, 2023

I believe you are referring to the "marker detection latency", i.e. the time to detect and estimate the pose of the markers when they get close to the camera?

But I'm referring to the overall latency problem (caused by video capture, encoding, transmitting, decoding, detecting and estimating the pose, sending MAVLink commands, and processing MAVLink commands).

Imagine your drone takes a picture at t = 0 while it is yawing and, based on that picture, determines that the landing point is to the left.
But by the time the motion command is executed at t > 0, the drone has already rotated and the landing point is no longer precisely to the left.
As a consequence, the drone will move in the wrong direction and bounce back and forth until it reaches the target or crashes.

Of course, the lazy solution could be to move slowly or try to reduce the overall latency with better hardware...

But I believe that by using the correct model (EKF), we should be able to avoid moving in wrong directions and get the drone to land fast and precisely even with high latency.

Now, in your case, you just send the landing point to the flight controller, and the EKF and the motion control are implemented there (in ArduPilot, PX4, RosettaDrone, et al.).

In theory you just have to make sure you are telling the flight controller the exact timestamp of the image used for the pose estimation. The flight controller should then use this information not to move in that (stale) direction, but to update its estimation model with a measurement from the past, predict the present pose of the landing point, and compute the direction it should move in now.
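
For illustration, a minimal sketch of reporting the target with the capture timestamp, assuming pymavlink and the standard LANDING_TARGET message (the connection string and values are placeholders; check the field order against your MAVLink dialect):

```python
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpout:127.0.0.1:14550')  # illustrative endpoint

def send_landing_target(capture_time_us, angle_x, angle_y, distance_m):
    # The key point: time_usec is the time the image was captured, not the time
    # this message is sent, so the EKF can fuse it as a delayed measurement.
    master.mav.landing_target_send(
        capture_time_us,                      # time_usec
        0,                                    # target_num
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,  # frame
        angle_x, angle_y,                     # angular offsets of the target [rad]
        distance_m,                           # distance to the target [m]
        0.0, 0.0)                             # size_x, size_y (unknown here)
```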

But it seems to me that the implementation, and probably also the MAVLink message definition (!), was or still is not mature.

@fnoop said: "There was a particular PR to solve a vital issue that never got implemented and I had been trying to get this resolved for so long that I basically gave up and moved on with life at that point..".
Here are more details: goodrobots/vision_landing#123 (comment)

I'm researching...

@danammeansbear

If you can get access to the drone information like Lat, Long, Alt, gimbal angle and heading (north, south, east, west) in degrees, then you can use geometry to estimate the approximate GPS location of the target and head towards that location. Create a PID controller based on your vision algorithm to use pose estimation for more precision.
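
A minimal sketch of that PID idea, using the vision-estimated body-frame offsets of the marker as the errors (the gains are placeholders to be tuned):

```python
class PID:
    """Textbook PID on a single axis."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per horizontal axis; the errors are the marker's forward/right
# offsets from the vision pipeline, the outputs are velocity commands.
pid_forward = PID(kp=0.4, ki=0.02, kd=0.1)
pid_right = PID(kp=0.4, ki=0.02, kd=0.1)
```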

@kripper
Collaborator Author

kripper commented Mar 21, 2023

> If you can get access to the drone information like Lat, Long, Alt, gimbal angle and heading (north, south, east, west)

Yes, this is alternative 1 b) use the drone positioning system (which internally uses all sensors and optical flow) + 3 a) use an absolute coordinate frame.

How accurate and consistent is your drone's positioning system?
Does your drone's positioning system jump when the GPS signal changes?

@kripper
Collaborator Author

kripper commented Mar 22, 2023

Some info about DJI's internal EKF:
https://roboticsknowledgebase.com/wiki/common-platforms/dji-drone-breakdown-for-technical-projects/

If the GPS jumps, we could maybe fuse the velocities to estimate the drone's position.
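
A rough sketch of one way to do that: dead-reckon the position from velocity and ignore GPS fixes that jump implausibly far from the prediction (the threshold and blending factor are illustrative):

```python
import math

MAX_SPEED_M_S = 5.0  # assumed upper bound on horizontal speed

def fuse_position(prev_pos, velocity, dt, gps_pos=None, alpha=0.1):
    # Dead-reckoning prediction from the (commanded or measured) velocity.
    pred = (prev_pos[0] + velocity[0] * dt, prev_pos[1] + velocity[1] * dt)
    if gps_pos is None:
        return pred
    jump = math.hypot(gps_pos[0] - pred[0], gps_pos[1] - pred[1])
    if jump > MAX_SPEED_M_S * dt + 1.0:
        return pred  # implausible jump: reject this fix
    # Otherwise blend the fix in gently (complementary-filter style).
    return (pred[0] + alpha * (gps_pos[0] - pred[0]),
            pred[1] + alpha * (gps_pos[1] - pred[1]))
```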

@The1only what is your experience with KF?

@kripper
Collaborator Author

kripper commented Mar 22, 2023

We will probably work with this implementation (WIP):
PX4/PX4-Autopilot#20653

@kripper
Collaborator Author

kripper commented Mar 22, 2023

I ran some tests with a DJI Mini SE:

  • getVelocityX() has only one decimal digit of precision, i.e. it returns values like 0.1, 0.2, 0.3, ... 10 cm precision is useless.
  • getLatitude() returns NaN when the GPS signal is weak. Also useless.

Thus, we will have to estimate the drone's position based only on the set velocities (virtual stick actions) and use the visually estimated pose to correct the model.
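
A minimal sketch of that approach: log the commanded virtual-stick velocities with timestamps, so that a delayed visual fix can be applied at the image timestamp and then re-propagated to the present. All names and the crude integration are illustrative assumptions:

```python
import collections

command_log = collections.deque(maxlen=500)  # (timestamp, vx, vy) commanded velocities
position = (0.0, 0.0)                        # current estimate in a local frame

def log_command(t, vx, vy):
    command_log.append((t, vx, vy))

def propagate(pos, commands, t_from, t_to):
    """Integrate the logged velocity commands between two timestamps.
    Simplification: assumes no command was active before t_from."""
    x, y = pos
    prev_t, prev_vx, prev_vy = t_from, 0.0, 0.0
    for t, vx, vy in commands:
        if t <= t_from or t > t_to:
            continue
        x += prev_vx * (t - prev_t)
        y += prev_vy * (t - prev_t)
        prev_t, prev_vx, prev_vy = t, vx, vy
    x += prev_vx * (t_to - prev_t)
    y += prev_vy * (t_to - prev_t)
    return (x, y)

def on_vision_fix(pos_at_image, t_image, t_now):
    """Replace the estimate at the image timestamp with the visual fix, then
    re-apply the commands issued since then to get the estimate for t_now."""
    global position
    position = propagate(pos_at_image, list(command_log), t_image, t_now)
```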

@kripper
Collaborator Author

kripper commented Mar 29, 2023

I finally came up with this algorithm:
https://github.com/kripper/SmartLanding

@kripper kripper closed this as completed Mar 29, 2023
@kripper
Collaborator Author

kripper commented Apr 8, 2023

...but yaw information is reliable (and required):

[Attached image: WhatsApp Image 2023-04-04 at 22 15 35]
