
Colored point cloud registration pushing the point cloud away (might be a bug?) #1261

Closed
lvgeng opened this issue Oct 21, 2019 · 12 comments

@lvgeng

lvgeng commented Oct 21, 2019

Colored point cloud registration is a really amazing feature of open3d...
http://www.open3d.org/docs/release/tutorial/Advanced/colored_pointcloud_registration.html

There is a similar problem. As mentioned, the point clouds fly away.
#362

However, I ran into issues when using my own data. The initial transformation should be good enough, I think. The color difference is added at the visualization stage, so it should not affect the colored point cloud registration.

My testing code and data
python_script_and_data.zip

and the results (each one uses the previous result as the initial pose):

original: [screenshot]
point to point: [screenshot]
point to plane: [screenshot]
registration_colored_icp: [screenshot]

I understand that a bad result is possible if the data is not good, but I cannot understand why the registration pushes the point cloud away. Is there any solution for that?

My data is here. They are point clouds I generated by other methods.

And the function:

    result_icp = o3d.registration.registration_colored_icp(
        source_down, target_down, radius, current_transformation,
        o3d.registration.ICPConvergenceCriteria(
            relative_fitness=1e-6, relative_rmse=1e-6,
            max_iteration=iter))

Should I change relative_fitness or relative_rmse? And what do they do, exactly?

@lvgeng lvgeng changed the title Colored point cloud registration pushing the point cloud away. Colored point cloud registration pushing the point cloud away.(might be a bug?) Oct 21, 2019
@syncle
Contributor

syncle commented Nov 5, 2019

You need to consider rescaling the following parameter:

    voxel_radius = [0.04, 0.02, 0.01]

These values are set for the example dataset, which uses metric units: the radii correspond to 4 cm, 2 cm, and 1 cm. The point sets in the example dataset span roughly 2~3 meters.

You can check the scale of your point cloud with this function:

    def compute_base_radius(pcd0, pcd1):
        r1 = np.linalg.norm(pcd0.get_max_bound() - pcd0.get_min_bound())
        r2 = np.linalg.norm(pcd1.get_max_bound() - pcd1.get_min_bound())
        base_radius = min(r1, r2)
        print("Base radius is : %f" % base_radius)
        return base_radius

For your data it prints:

    Base radius is : 0.000816

This indicates your point set is too small (it spans about 0.8 mm in metric units) compared to the example dataset. Consider rescaling it, or changing the colored ICP parameters accordingly.
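The rescaling advice above can be sketched with plain NumPy. This is an illustration only, not Open3D API: the ~2.5 m reference span and the (0.04, 0.02, 0.01) radii come from the tutorial dataset, and `scaled_voxel_radii` is a hypothetical helper name.

```python
import numpy as np

def scaled_voxel_radii(points, reference_span=2.5,
                       reference_radii=(0.04, 0.02, 0.01)):
    """Rescale the tutorial's voxel radii for a cloud of a different size.

    points is an (N, 3) array; reference_span (~2.5 m) and
    reference_radii are the scales assumed by the tutorial.
    """
    span = np.linalg.norm(points.max(axis=0) - points.min(axis=0))
    scale = span / reference_span
    return [r * scale for r in reference_radii]

# a cloud spanning roughly 0.8 mm, like the one reported in this thread
tiny = np.random.rand(100, 3) * 0.0008 / np.sqrt(3)
radii = scaled_voxel_radii(tiny)
```

The ratios between the three pyramid levels are preserved; only the absolute size changes with the cloud's bounding-box diagonal.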

@lvgeng
Author

lvgeng commented Nov 5, 2019

Well... unfortunately, I have already taken that into consideration. I tried rescaling voxel_radius and also tried scaling the point cloud itself, but neither seems to be a good solution.

    voxel_radius = [0.04, 0.02, 0.01]

If the raw data is given to the original demo Python script, it ends up with much worse results than the ones in my post.

In my testing I also visualized the downsampled point clouds; they look fine.

I mean, the real problem is that no matter what input data is given, optimising the objective function is supposed to bring the point clouds closer together, not push them apart, which is unlikely to happen if nothing is wrong.
Therefore, I think there might be an optimiser problem.


@syncle
Contributor

syncle commented Nov 10, 2019

Could you try making the geometry bigger? Use the function I shared to enlarge your geometry so that the ratio of geometry scale to voxel size is similar to the example's. I presume there is a numerical issue with tiny geometry.

@lvgeng
Author

lvgeng commented Dec 14, 2019

Could you try making the geometry bigger? Use the function I shared to enlarge your geometry so that the ratio of geometry scale to voxel size is similar to the example's. I presume there is a numerical issue with tiny geometry.

I think there might be something useful.

  1. I scaled the point cloud to 1000 times its size. Well... that did not solve the problem. Then I translated the point clouds close to the origin (0, 0, 0), and the result was better than before. I think the key is that if the point cloud is far from the origin, the optimisation becomes too sensitive to rotation changes.

  2. According to the theory in the paper, the colored ICP in Open3D only works when the initial transform is good enough, which leads to a problem: when the geometric features are not distinctive enough to generate a proper initial transform, that is exactly when we really need colored ICP. It would be much better if we had a visual-feature method that provides an initial transformation.
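The translation trick in point 1 can be sketched with plain NumPy, using raw (N, 3) arrays in place of Open3D point clouds; `center_transform` and `apply_transform` are hypothetical helper names, with `apply_transform` mimicking what `PointCloud.transform` does.

```python
import numpy as np

def center_transform(points):
    """Build the 4x4 homogeneous transform that moves the bounding-box
    center of an (N, 3) point array to the origin."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    T = np.eye(4)
    T[:3, 3] = -center  # inverse translation of the rough center
    return T

def apply_transform(points, T):
    """NumPy stand-in for Open3D's PointCloud.transform on raw arrays."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# a cloud sitting far from the origin, as described above
pts = np.random.rand(50, 3) + np.array([100.0, 200.0, 300.0])
centered = apply_transform(pts, center_transform(pts))
```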

@MaxChanger

@lvgeng Hello, I'm curious about this question, could you describe in detail how to transform the two different point clouds to somewhere close to the origin point (0,0,0) and set the initial transformation?

@lvgeng
Author

lvgeng commented Dec 14, 2019

@lvgeng Hello, I'm curious about this question, could you describe in detail how to transform the two different point clouds to somewhere close to the origin point (0,0,0) and set the initial transformation?

Well... first I generate a rough bounding box of the entire point cloud, which gives me its rough center. Applying the inverse translation with PointCloud.transform then moves the point cloud close to (0, 0, 0).

As for the initial transformation... I have no good solution there; that is the problem I mentioned.
The Open3D tutorials include other methods for it, but they depend on geometric features or require reliable RGBD data.

@MaxChanger

@lvgeng thank you for your reply.

The way you move the clouds is the same as I thought. Is this a common technique? Could it be called pose regularization? I don't remember exactly.

But for two consecutive frames we can assume there is almost no translation and rotation, so we can use the identity matrix to initialise. Could we still use the identity matrix after your transformation?

@lvgeng
Author

lvgeng commented Dec 15, 2019

@lvgeng thank you for your reply.

The way you move the clouds is the same as I thought. Is this a common technique? Could it be called pose regularization? I don't remember exactly.

But for two consecutive frames we can assume there is almost no translation and rotation, so we can use the identity matrix to initialise. Could we still use the identity matrix after your transformation?

If the difference is tiny, yes. But it leads to weird results when it is not.
And the problem is that I did not find an out-of-the-box solution for it...
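One way to keep a previously good initial guess (such as the identity for consecutive frames) valid after both clouds have been moved to the origin is to compose the two centering transforms around it. A minimal NumPy sketch, with all helper names (`centering`, `adjust_init`, `apply_points`) hypothetical:

```python
import numpy as np

def centering(points):
    """4x4 transform moving the bounding-box center of an (N, 3)
    array to the origin."""
    T = np.eye(4)
    T[:3, 3] = -(points.max(axis=0) + points.min(axis=0)) / 2.0
    return T

def adjust_init(T_init, C_src, C_tgt):
    """Re-express a source->target initial transform after the source
    and target were centered by C_src and C_tgt respectively."""
    return C_tgt @ T_init @ np.linalg.inv(C_src)

def apply_points(points, T):
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]

# a known source->target motion: small rotation plus translation
angle = 0.1
T_true = np.eye(4)
T_true[:3, :3] = [[np.cos(angle), -np.sin(angle), 0.0],
                  [np.sin(angle),  np.cos(angle), 0.0],
                  [0.0, 0.0, 1.0]]
T_true[:3, 3] = [0.3, -0.1, 0.2]

src = np.random.rand(60, 3) + 50.0  # far from the origin
tgt = apply_points(src, T_true)
C_s, C_t = centering(src), centering(tgt)

# the transform that plays T_true's role for the centered clouds
T_adj = adjust_init(T_true, C_s, C_t)
aligned = apply_points(apply_points(src, C_s), T_adj)
```

In particular, if the best available guess is the identity, the adjusted guess for the centered clouds is `C_tgt @ inv(C_src)`, not the identity itself.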

@MaxChanger

There are some related solutions and descriptions: #1281 #1286 #1292.
I hope these are helpful to you.

@rajaneeshdwivedi

I've been experiencing similar problems, which I suspect are related to low-quality depth data. I can confirm that I get much better results by NOT assuming that the registration of adjacent RGBD pairs can be adequately initialised with an identity matrix, and instead providing a ballpark transform estimate with a 5-point RANSAC on a downsampled source/target cloud, as per the docs.
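In practice this kind of ballpark estimate comes from Open3D's feature-based global registration; the underlying idea can be illustrated with a minimal NumPy RANSAC over putative correspondences src[i] <-> dst[i], which are assumed already given here (all names illustrative, not Open3D API):

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def ransac_init(src, dst, n_iters=200, n_sample=5, thresh=0.05, seed=0):
    """Ballpark rigid transform from outlier-contaminated matches."""
    rng = np.random.default_rng(seed)
    best, best_inliers = (np.eye(3), np.zeros(3)), -1
    for _ in range(n_iters):
        idx = rng.choice(len(src), n_sample, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        n_in = int((err < thresh).sum())
        if n_in > best_inliers:
            best, best_inliers = (R, t), n_in
    return best

# synthetic check: rotate + translate a cloud, corrupt 20% of matches
rng = np.random.default_rng(1)
src = rng.random((100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
dst = src @ R_true.T + t_true
dst[:20] += rng.random((20, 3))  # outlier correspondences
R_est, t_est = ransac_init(src, dst)
```

The recovered (R_est, t_est), assembled into a 4x4 matrix, would then be handed to the local (colored) ICP as the initial transformation instead of the identity.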

@rajaneeshdwivedi

I can also confirm that I get the same level of improvement using fast registration rather than RANSAC to initialise the local registration, which incurs a negligible increase in the running time of make_fragments.py compared to an identity initialisation.

@germanros1987
Contributor

I think that after @rajaneeshdwivedi's contribution we are ready to close this issue.
