
4 points problem fitting (Markerless generic model-based tracking) #797

Closed
andreabonacini opened this issue Jul 30, 2020 · 7 comments

Comments

@andreabonacini

Hi,

I'm trying to use the Markerless generic model-based tracking tool for the edge detection of some objects. The aim is to achieve detection using 4 points defined on the same face, but sometimes it does not work.

Below I'm showing an example with a tea box.

Point numbering:
[Image: numbering of the 4 points on the tea box face]

Error:
[Image: the resulting fitting error]

Are you aware of this kind of problem? If so, why is it happening, and is there a way to solve it? Thanks in advance.

Regards,
Andrea

@s-trinh
Contributor

s-trinh commented Jul 30, 2020

Hi,

This is due to the planar pose estimation ambiguity:

[Image: illustration of the planar pose estimation ambiguity]

Small uncertainties in the 2D corner locations can make the pose ambiguous from certain viewpoints. On the same topic, see #668

To solve the issue, try using a 4th point that is not on the same plane, or add a 5th point that is not on the same plane, as in the sketch below.
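
As a rough, hypothetical sketch of that suggestion: the helper name and the 3D coordinates below are made up, and it assumes the std::vector overload of vpMbGenericTracker::initFromPoints(). The 5th point is taken on another face so it is not coplanar with the first four.

#include <visp3/core/vpImage.h>
#include <visp3/core/vpImagePoint.h>
#include <visp3/core/vpPoint.h>
#include <visp3/mbt/vpMbGenericTracker.h>

void initWithFivePoints(vpMbGenericTracker &tracker, const vpImage<unsigned char> &I,
                        const std::vector<vpImagePoint> &points2D)
{
    // Five 3D points in the object frame (meters): four corners of one face
    // plus a 5th point on a side face, off the plane of the first four.
    std::vector<vpPoint> points3D(5);
    points3D[0].setWorldCoordinates(0.0,   0.0,   0.0);
    points3D[1].setWorldCoordinates(0.165, 0.0,   0.0);
    points3D[2].setWorldCoordinates(0.165, 0.068, 0.0);
    points3D[3].setWorldCoordinates(0.0,   0.068, 0.0);
    points3D[4].setWorldCoordinates(0.0,   0.0,  -0.08); // non-coplanar point

    // points2D must contain the matching pixel coordinates, in the same order.
    tracker.initFromPoints(I, points2D, points3D);
}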

@andreabonacini
Author

Thanks for your reply.

I have 2 more questions:

  1. Is there any method to perform the object fitting without using the whole tracking pipeline?
  2. Is there any way to evaluate several 4-point combinations quickly, instead of repeating initFromPoints() + the tracking function many times? We have a list of 4-point combinations to evaluate in order to quickly find the best fitting/pose of the object (the one corresponding to the best 4-point combination).

Regards,
Andrea

@s-trinh
Contributor

s-trinh commented Aug 2, 2020

Can you describe a bit more what you want to achieve? For instance, do you have a fixed list of 3D coordinates and unknown corresponding 2D coordinates?


Is there any method to perform the object fitting without using the whole tracking pipeline?

I don't think so, but it would help if you could provide more information about your use case.

Is there any way to evaluate several 4-point combinations quickly, instead of repeating initFromPoints() + the tracking function many times? We have a list of 4-point combinations to evaluate in order to quickly find the best fitting/pose of the object (the one corresponding to the best 4-point combination).

initFromPoints() is used to compute the initial pose to initialise the tracker. If you have another method that provides this info, you can use initFromPose() to initialise the tracker.

Getting this initial pose is tricky. initFromPoints() is an easy way to let the user click on specific points to perform the pose computation and initialise the tracker. Keypoints can be a solution. For state-of-the-art methods, see the 6th International Workshop on Recovering 6D Object Pose.

If you have a method that provides the 2D coordinates, you can use vpPose to compute the pose from the 2D/3D information. Since you have mentioned "a list of 4-point combinations to evaluate", maybe what you are doing is similar to findMatch()?
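
For reference, a minimal sketch of that vpPose route, following the usual ViSP pose-from-points pattern (the helper name and the choice of DEMENTHON followed by VIRTUAL_VS refinement are illustrative, not from this thread):

#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImagePoint.h>
#include <visp3/core/vpPixelMeterConversion.h>
#include <visp3/core/vpPoint.h>
#include <visp3/vision/vpPose.h>

vpHomogeneousMatrix poseFromPoints(std::vector<vpPoint> &points3D,
                                   const std::vector<vpImagePoint> &ip,
                                   const vpCameraParameters &cam)
{
    vpPose pose;
    for (unsigned int i = 0; i < points3D.size(); i++) {
        double x = 0, y = 0;
        // Convert the i-th pixel coordinates into normalized coordinates (meters).
        vpPixelMeterConversion::convertPoint(cam, ip[i], x, y);
        points3D[i].set_x(x);
        points3D[i].set_y(y);
        pose.addPoint(points3D[i]);
    }
    vpHomogeneousMatrix cMo;
    // Linear initialisation, then non-linear refinement by virtual visual servoing.
    pose.computePose(vpPose::DEMENTHON, cMo);
    pose.computePose(vpPose::VIRTUAL_VS, cMo);
    return cMo;
}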

@andreabonacini
Author

Thanks for the tips.

I think that initFromPose() is not what we are looking for, since the pose matrix is the real aim.

Our goal is to fit the model of an object starting from four 3D points (on the same plane) in the object frame and the four corresponding 2D points on the image plane (provided by our vision algorithm), in order to get the pose in the camera frame.
Given this, it seems to me that findMatch() could be enough, couldn't it?

@s-trinh
Contributor

s-trinh commented Aug 4, 2020

Given this, it seems to me that findMatch() could be enough, couldn't it?

Yes, I think so.

You will need to tweak the RANSAC parameters, such as the maximum number of iterations, the reprojection error threshold, etc., to get good results.
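
A hedged sketch of such a call, with placeholder RANSAC values and a hypothetical wrapper function; the exact findMatch() signature may differ slightly between ViSP versions, so check the vpPose documentation:

#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpPoint.h>
#include <visp3/vision/vpPose.h>

vpHomogeneousMatrix ransacPose(std::vector<vpPoint> &p2D, std::vector<vpPoint> &p3D)
{
    // Placeholder RANSAC settings to tune for your data.
    unsigned int nbInlierToReachConsensus = 4; // minimum number of inliers to accept the consensus
    double threshold = 0.001;                  // residual threshold on the reprojection error
    int maxNbTrials = 10000;                   // maximum number of RANSAC iterations

    unsigned int ninliers = 0;
    std::vector<vpPoint> listInliers;
    vpHomogeneousMatrix cMo;

    // p2D: points with x, y set in meters; p3D: points with world coordinates set in the object frame.
    vpPose::findMatch(p2D, p3D, nbInlierToReachConsensus, threshold, ninliers, listInliers, cMo, maxNbTrials);
    return cMo;
}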

@andreabonacini
Author

Thanks again.

Also, regarding the vector of 2D points: are they pixel coordinates on the image plane, or are they expressed in meters?
I'm asking because I don't see where the function findMatch() uses the intrinsic matrix of the camera.

I imagine it needs that matrix in order to get the correct cMo.

@fspindle
Contributor

The vector of 2D points p2D used in findMatch() needs x, y values expressed in meters.

The conversion from pixels to meters has to be done prior to calling this function, using something similar to:

for (unsigned int i = 0; i < p2D.size(); i++) {
    double x = 0, y = 0;
    // u, v are the pixel coordinates of the i-th 2D point (u horizontal, v vertical)
    vpPixelMeterConversion::convertPoint(cam, u, v, x, y);
    p2D[i].set_x(x);
    p2D[i].set_y(y);
}
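
For completeness, cam above is a vpCameraParameters object holding the intrinsics, which answers where the intrinsic matrix comes in; it could be built, for example, with hypothetical values like:

#include <visp3/core/vpCameraParameters.h>

// px, py: ratio between focal length and pixel size; u0, v0: principal point (pixels).
// Values below are placeholders for a camera model without distortion.
vpCameraParameters cam(600.0, 600.0, 320.0, 240.0);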
