
Issues getting decent transformations #5

Open
vkee opened this issue Jan 12, 2017 · 2 comments
vkee commented Jan 12, 2017

Hi,

I am trying to use this algorithm to replace my PCL ICP-based system, but I have not been able to get any decent transformations. I am trying to align a laser-scanned model of an object with the scene point cloud of the same object in a simple environment. I followed the instructions to create input for the algorithm and got the following output:

ReadFeature ... done.
ReadFeature ... done.
normalize points :: mean[0] = [-0.202804 -0.075849 0.622425]
normalize points :: mean[1] = [0.015940 -0.047936 0.001295]
normalize points :: global scale : 1.000000
Advanced matching : [0 - 1]
points are remained : 2451
[cross check] points are remained : 11
[tuple constraint] 0 tuples (1100 trial, 1100 actual).
[final] matches 0.
Pairwise rigid pose optimization

and a final transformation of

0 1 2
1.0000000000 0.0000000000 0.0000000000 -0.2187434137
0.0000000000 1.0000000000 0.0000000000 -0.0279131606
0.0000000000 0.0000000000 1.0000000000 0.6211291552
0.0000000000 0.0000000000 0.0000000000 1.0000000000

This is clearly not correct, as a rotation is needed to align the clouds.

I tried playing with some of the parameters but have not gotten better results.

Do you have any tips for getting better results? I attached two sample files, one of the model and one of the scene, in case anyone is able to find a good transformation that aligns them (sample_files.zip).

Thanks!
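(For anyone checking a result like the one above: the 4×4 output is a homogeneous rigid transform. A minimal NumPy sketch for applying it to a source cloud to eyeball the alignment — the `points` array is hypothetical, the translation is taken from the printed matrix:)

```python
import numpy as np

def apply_transform(points, T):
    """Apply a 4x4 homogeneous rigid transform (rotation + translation,
    like the matrix printed above) to an (N, 3) point array."""
    R, t = T[:3, :3], T[:3, 3]
    return points @ R.T + t

# The transform from the log: identity rotation plus a translation,
# which is why the result looks like a pure shift of the cloud.
T = np.eye(4)
T[:3, 3] = [-0.2187434137, -0.0279131606, 0.6211291552]
moved = apply_transform(np.zeros((1, 3)), T)
```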

syncle (Collaborator) commented Jan 12, 2017

Hi Vkee,

Fast global matching relies on local 3D features. I guess the provided bottle_full.pcd is a laser scanline. This seems to be too thin to extract distinctive local features that can be matched against bottle_model.pcd, which is why no matches survive the [tuple constraint] step. We recommend using a denser point cloud, such as a depth map of this bottle template.

Thanks.
Jaesik
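(For context, the [tuple constraint] step that rejected every match in the log above tests random triplets of candidate correspondences for scale consistency between the two clouds. A minimal NumPy sketch, assuming hypothetical point arrays `p`, `q` and a correspondence list; τ = 0.9 is the default from the Fast Global Registration paper:)

```python
import numpy as np

def tuple_test(p, q, corres, tau=0.9, trials=1000, seed=0):
    """Keep a random correspondence triplet only if all three pairwise
    distance ratios between source and target are scale-consistent:
    tau < |p_i - p_j| / |q_i - q_j| < 1/tau.  With bad feature matches
    (e.g. from a too-sparse cloud), almost no triplet passes, giving
    the '0 tuples' result seen in the log."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(trials):
        i, j, k = rng.choice(len(corres), size=3, replace=False)
        ok = True
        for a, b in ((i, j), (j, k), (k, i)):
            dp = np.linalg.norm(p[corres[a][0]] - p[corres[b][0]])
            dq = np.linalg.norm(q[corres[a][1]] - q[corres[b][1]])
            if not (tau * dq < dp < dq / tau):
                ok = False
                break
        if ok:
            kept.append((i, j, k))
    return kept
```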

vkee (Author) commented Jan 12, 2017

The attached point cloud is segmented out of the point cloud returned by a Microsoft Kinect and contains the front face of the bottle. Do you think that is not enough for a match? Would it be better to try matching the entire unsegmented scene point cloud?

Thanks!
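(One quick way to judge whether a segmented cloud like this is dense enough for local features is to measure its mean nearest-neighbour spacing and compare it against the feature support radius. A minimal SciPy sketch; the function name and the rule of thumb in the comment are illustrative, not part of the FGR code:)

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_point_spacing(points):
    """Average distance from each point to its nearest neighbour.
    A local 3D feature such as FPFH needs a support radius several
    times this spacing; on a thin scanline the spacing is large in
    one direction and the features degenerate."""
    tree = cKDTree(points)
    # k=2 because the closest hit is the query point itself (distance 0)
    dists, _ = tree.query(points, k=2)
    return float(dists[:, 1].mean())
```

Comparing this value for the scene cloud against the model cloud would show whether the scene is sampled much more coarsely than what the feature radius assumes.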
