rigid icp merged not well #29

Closed
ShiwenH opened this issue May 7, 2019 · 17 comments

ShiwenH commented May 7, 2019

Hi, I managed to run the rigid_icp example and got a very good result. However, when I tried to import frame_1.ply and frame_2.ply into this demo (by changing some example code), I found that these two files are not merged well: there is a noticeable gap between the transformed data.

Could you help explain this? I do not like PCL because it is so heavy and slow.

kzampog commented May 7, 2019

Hi!

Those two point clouds were meant as input to non_rigid_icp, since they are not related via a rigid transformation (so this is generally expected). If you suspect that rigid ICP behaves erratically on them, please share some code and sample output, and I will investigate!
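
For reference, pairwise rigid ICP on two clouds looks roughly like this (a minimal sketch based on the library's common ICP instances; the distance threshold is a placeholder that depends on your data's units, and the target cloud is assumed to carry normals for the point-to-plane term):

#include <cilantro/icp_common_instances.hpp>
#include <cilantro/point_cloud.hpp>
#include <iostream>

int main(int argc, char ** argv) {
    cilantro::PointCloud3f dst("frame_1.ply"), src("frame_2.ply");

    // Combined metric rigid ICP, here using only the point-to-plane term
    cilantro::SimpleCombinedMetricRigidICP3f icp(dst.points, dst.normals, src.points);
    icp.setPointToPointMetricWeight(0.0f).setPointToPlaneMetricWeight(1.0f);
    icp.correspondenceSearchEngine().setMaxDistance(0.05f);  // placeholder value
    icp.setConvergenceTolerance(1e-4f).setMaxNumberOfIterations(30);

    cilantro::RigidTransform3f tf_est = icp.estimate().getTransform();
    std::cout << "Estimated transform:" << std::endl << tf_est.matrix() << std::endl;

    return 0;
}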

Thanks!

ShiwenH commented May 9, 2019

Hi,

Today I scanned 350 frames with a 3D sensor and then imported the first 50 frames into my demo based on this SDK. The result is not bad, but with the first 100 frames it is quite bad.

Here is the sample data.
(uploading...will update)

Here is my demo source code. (In this code, I use a newer tinyply because I do not know how to use the older tinyply that you use; you may need to slightly adjust the PLY-saving part.)
rigid_icp.txt

I have checked that these frames can be registered well with some other tools.

Regards.

kzampog commented May 10, 2019

Hi! I'll have a look once the data is uploaded!

ShiwenH commented May 10, 2019

https://www.dropbox.com/s/xbfbtcm009r87s8/model_back.7z?dl=0

This is the dropbox link.

Thanks so much.

kzampog commented May 12, 2019

Hi!

I just did some testing with your code and data. I increased the correspondence distance threshold to 100 and noticed that failures then start after frame 90. One failure happens when registering frames 91 and 92: point-to-plane does align the table surfaces, but there is not enough geometry to disambiguate sliding along the plane (e.g., more surfaces not parallel to the table would help)!
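
To make the sliding ambiguity concrete, here is the standard point-to-plane objective (written out for reference, not code from this repo):

E(R, t) = \sum_i \big( n_i^\top (R p_i + t - q_i) \big)^2

where (p_i, q_i) are corresponding source/target points and n_i are the target normals. If all n_i coincide with a single plane normal n, any extra translation t' with n^\top t' = 0 leaves E unchanged, so the optimum can slide freely parallel to the table.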

One solution you could consider is using frame-to-model alignment, which is less brittle (but slower if done naively as below):

#include <cilantro/icp_common_instances.hpp>
#include <cilantro/point_cloud.hpp>
#include <iostream>

int main(int argc, char ** argv) {
    cilantro::RigidTransform3f cumulativeMatrix = cilantro::RigidTransform3f::Identity();
    cilantro::PointCloud3f cumulativeCloud;

    char dstName[100];
    char srcName[100];

    for (int i = 0; i < 352; i++) {
        sprintf(dstName, "/home/kzampog/Desktop/icp_test/model/model_back/1_%05d.ply", i);
        sprintf(srcName, "/home/kzampog/Desktop/icp_test/model/model_back/1_%05d.ply", i + 1);

        cilantro::PointCloud3f dst(dstName), src(srcName);

        if (dst.isEmpty() || src.isEmpty()) {
            std::cout << "One of the clouds is empty!" << std::endl;
            return 0;
        }

        // Bring the incoming frame into the model's coordinate frame using the pose so far
//        dst.transform(cumulativeMatrix);
        src.transform(cumulativeMatrix);

        // Register the incoming frame against the accumulated model (frame-to-model)
//        cilantro::SimpleCombinedMetricRigidICP3f icp(dst.points, dst.normals, src.points);
        cilantro::SimpleCombinedMetricRigidICP3f icp(cumulativeCloud.points, cumulativeCloud.normals, src.points);

        // Parameter setting
        icp.setMaxNumberOfOptimizationStepIterations(1).setPointToPointMetricWeight(0.0f).setPointToPlaneMetricWeight(1.0f);
        icp.correspondenceSearchEngine().setMaxDistance(100.0f);
        icp.setConvergenceTolerance(1e-4f).setMaxNumberOfIterations(30);

        cilantro::RigidTransform3f tf_est = icp.estimate().getTransform();

        std::cout << "Iterations performed: " << icp.getNumberOfPerformedIterations() << std::endl;
        std::cout << "Has converged: " << icp.hasConverged() << std::endl;
        std::cout << "ESTIMATED transformation R:" << std::endl << tf_est.matrix() << std::endl;

        // Seed the model with the first frame
        if (i == 0) {
            cumulativeCloud.append(dst);
        }

        src.transform(tf_est);

        // Merge the aligned frame into the model and keep its density bounded
        cumulativeCloud.append(src).gridDownsample(1.0f);

        // Accumulate the pose estimate
        cumulativeMatrix = tf_est * cumulativeMatrix;
    }

    cumulativeCloud.toPLYFile("/home/kzampog/Desktop/a1.ply");

    return 0;
}

I also included a downsampling step on model update, but probably not with the best bin size. With the above, all frames in the sequence were registered correctly:
[screenshot: all frames registered into a single merged model]

If your data comes from an RGBD camera, you could also build upon the included fusion example (which is roughly like KinectFusion using a point-based model representation) -- that should improve the model quality quite a bit.

Please let me know if I can be of further help.

Cheers!

ShiwenH commented May 13, 2019

Wow, excellent! Last Sunday I was attending my brother's wedding; sorry for responding so late. Tonight I will test it with this model and some other models. Thanks so much. After testing it, I will study this code in detail. I really like this library because it is faster and lighter than PCL.

kzampog commented May 13, 2019

Hehe no worries -- congrats to your brother!
I am really glad you like this :)

ShiwenH commented May 14, 2019

Hi. I just tested the code you provided. The result was awesome, but the runtime is a bit high for me, as I may need to run this cross-platform. With the gridDownsample bin size set to 3.0 and the maximum iteration count set to 20, processing 350 frames took 180 seconds. Do you have any suggestions to accelerate it further?
Besides, I want to study this project. It would be great if you could point me to some relevant papers.

kzampog commented May 15, 2019

Hi!

It takes about 30 seconds on my 4C/8T machine (with the values you suggested). Did you compile with OpenMP?
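
If you are unsure, a tiny standalone check like the following (nothing cilantro-specific) tells you whether OpenMP is active in a given build:

#include <iostream>
#ifdef _OPENMP
#include <omp.h>
#endif

int main() {
    // _OPENMP is defined by the compiler when OpenMP is enabled (e.g., -fopenmp for gcc)
#ifdef _OPENMP
    std::cout << "OpenMP enabled; max threads: " << omp_get_max_threads() << std::endl;
#else
    std::cout << "OpenMP NOT enabled in this build" << std::endl;
#endif
    return 0;
}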

This is the code I tested with (slight modification of the previous version):

#include <cilantro/icp_common_instances.hpp>
#include <cilantro/point_cloud.hpp>
#include <cilantro/timer.hpp>
#include <iostream>

int main(int argc, char ** argv) {
    cilantro::RigidTransform3f cumulativeMatrix = cilantro::RigidTransform3f::Identity();
    cilantro::PointCloud3f cumulativeCloud;

    char pcdName[100];

    cilantro::Timer timer;

    for (int i = 0; i <= 352; i++) {
        sprintf(pcdName, "/home/kzampog/Desktop/icp_test/model/model_back/1_%05d.ply", i);
        cilantro::PointCloud3f pcd(pcdName);

        if (pcd.isEmpty()) {
            std::cout << "Cloud is empty!" << std::endl;
            return 0;
        }

        // Seed the model with the first frame
        if (cumulativeCloud.isEmpty()) {
            cumulativeCloud.append(pcd);
            continue;
        }

        timer.start();

        // Register the new frame against the model (frame-to-model)
        cilantro::SimpleCombinedMetricRigidICP3f icp(cumulativeCloud.points, cumulativeCloud.normals, pcd.points);

        // Parameter setting; warm-start from the previous cumulative pose
        icp.setInitialTransform(cumulativeMatrix);
        icp.setMaxNumberOfOptimizationStepIterations(1).setPointToPointMetricWeight(0.0f).setPointToPlaneMetricWeight(1.0f);
        icp.correspondenceSearchEngine().setMaxDistance(100.0f);
        icp.setConvergenceTolerance(1e-4f).setMaxNumberOfIterations(20);

        // Since the initial transform is folded in, the estimate is the new cumulative pose
        cumulativeMatrix = icp.estimate().getTransform();

        timer.stop();
        std::cout << "Frame " << i <<  " registration: " << timer.getElapsedTime() << std::endl;

        // std::cout << "Iterations performed: " << icp.getNumberOfPerformedIterations() << std::endl;
        // std::cout << "Has converged: " << icp.hasConverged() << std::endl;

        timer.start();

        // Merge the aligned frame into the model and keep its density bounded
        pcd.transform(cumulativeMatrix);
        cumulativeCloud.append(pcd).gridDownsample(3.0f);

        timer.stop();
        std::cout << "Frame " << i <<  " model update: " << timer.getElapsedTime() << std::endl;
    }

    cumulativeCloud.toPLYFile("/home/kzampog/Desktop/a1.ply");

    return 0;
}

You could shave off a few seconds by not downsampling at every frame (e.g., do it every 2-5 frames instead), but that's not going to make a big difference. You could also give the fusion example a shot (projective ICP and index map based point fusion); it should be quite a bit faster and produce better quality reconstructions.
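
For the periodic downsampling, the model update in the loop above could become something like this (a sketch; "every 5 frames" is an arbitrary choice):

// Append every frame, but only re-downsample the model periodically
cumulativeCloud.append(pcd);
if (i % 5 == 0) {
    cumulativeCloud.gridDownsample(3.0f);
}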

Unfortunately, API documentation is sparse at the moment, but the example programs cover a good part of the supported functionality. We also have a technical report on cilantro (link in README.md) that gives a high-level overview of the library.

Let me know if I can help with anything!

Cheers!

kzampog closed this as completed May 21, 2019

ShiwenH commented May 27, 2019

As Visual Studio does not support a recent enough version of OpenMP, I tested this demo with VSCode and the gcc compiler on Windows. After enabling OpenMP, the speed is good.
However, since registration is done against the cumulative result, the per-frame time keeps growing.
Now I only use the last 6 frames for registration, and the speed is awesome again.

After migrating from VS to VSCode, gridDownsample is very slow. Do you know about this issue?

ShiwenH commented May 27, 2019

My code is attached below.

#include <cilantro/icp_common_instances.hpp>
#include <cilantro/point_cloud.hpp>
#include <deque>
#include <iostream>
#include <cstdio>
#include <cstdlib>

int main(int argc, char ** argv)
{
    cilantro::RigidTransform3f cumulativeMatrix = cilantro::RigidTransform3f::Identity();

    char firstModel[256];
    sprintf(firstModel, "model/model_back/1_00000.ply");
    cilantro::PointCloud3f cumulativeCloud(firstModel);

    // Sliding window of recently aligned frames used as the registration target
    std::deque<cilantro::PointCloud3f> dequeForIcp;
    dequeForIcp.push_back(cumulativeCloud);

    for (int i = 1; i < 42; i++)
    {
        std::cout << "Index: " << i << std::endl;

        char srcName[256];
        sprintf(srcName, "model/model_back/1_%05d.ply", i + 1);
        cilantro::PointCloud3f src(srcName);

        src.transform(cumulativeMatrix);

        // Concatenate the window into a single target cloud
        cilantro::PointCloud3f pointcloudForIcp;
        for (size_t j = 0; j < dequeForIcp.size(); j++)
        {
            pointcloudForIcp.append(dequeForIcp.at(j));
        }

        cilantro::SimpleCombinedMetricRigidICP3f icp(pointcloudForIcp.points, pointcloudForIcp.normals, src.points);

        icp.setMaxNumberOfOptimizationStepIterations(1).setPointToPointMetricWeight(1.0f).setPointToPlaneMetricWeight(0.0f);
        icp.correspondenceSearchEngine().setMaxDistance(600.0f);
        icp.setConvergenceTolerance(1e-2f).setMaxNumberOfIterations(30);

        cilantro::RigidTransform3f tf_est = icp.estimate().getTransform();

        src.transform(tf_est);

        // Keep at most the 6 most recent frames in the window
        dequeForIcp.push_back(src);
        if (dequeForIcp.size() > 6)
        {
            dequeForIcp.pop_front();
        }

        cumulativeCloud.append(src);

        cumulativeMatrix = tf_est * cumulativeMatrix;
    }

    cumulativeCloud.toPLYFile("model/new1.ply");

    system("pause");

    return 0;
}

kzampog commented May 27, 2019

Hi!

That should work well!

How long does gridDownsample take, and on what input/parameters? If you could share a test cloud and timings (before/after migrating to gcc), that would help!

kzampog reopened this May 27, 2019

ShiwenH commented May 29, 2019

These are the per-frame times after adding gridDownsample:

[screenshot of per-frame gridDownsample timings]

This is the new code:
#include <cilantro/icp_common_instances.hpp>
#include <cilantro/point_cloud.hpp>
#include <cilantro/timer.hpp>
#include <deque>
#include <iostream>
#include <cstdio>
#include <cstdlib>

int main(int argc, char ** argv)
{
    cilantro::RigidTransform3f cumulativeMatrix = cilantro::RigidTransform3f::Identity();

    char firstModel[256];
    sprintf(firstModel, "model/model_back/1_00000.ply");
    cilantro::PointCloud3f cumulativeCloud(firstModel);

    // Sliding window of recently aligned frames used as the registration target
    std::deque<cilantro::PointCloud3f> dequeForIcp;
    dequeForIcp.push_back(cumulativeCloud);

    for (int i = 1; i < 76; i++)
    {
        char srcName[256];
        sprintf(srcName, "model/model_back/1_%05d.ply", i + 1);
        cilantro::PointCloud3f src(srcName);

        src.transform(cumulativeMatrix);

        // Concatenate the window into a single target cloud
        cilantro::PointCloud3f pointcloudForIcp;
        for (size_t j = 0; j < dequeForIcp.size(); j++)
        {
            pointcloudForIcp.append(dequeForIcp.at(j));
        }

        cilantro::SimpleCombinedMetricRigidICP3f icp(pointcloudForIcp.points, pointcloudForIcp.normals, src.points);

        icp.setMaxNumberOfOptimizationStepIterations(1).setPointToPointMetricWeight(1.0f).setPointToPlaneMetricWeight(0.0f);
        icp.correspondenceSearchEngine().setMaxDistance(600.0f);
        icp.setConvergenceTolerance(1e-2f).setMaxNumberOfIterations(30);

        cilantro::RigidTransform3f tf_est = icp.estimate().getTransform();

        src.transform(tf_est);

        // Keep at most the 5 most recent frames in the window
        dequeForIcp.push_back(src);
        if (dequeForIcp.size() > 5)
        {
            dequeForIcp.pop_front();
        }

        // Time only the model update (append + downsample)
        cilantro::Timer timer;
        timer.start();

        cumulativeCloud.append(src).gridDownsample(3.0f);

        timer.stop();
        std::cout << timer.getElapsedTime() << std::endl;

        cumulativeMatrix = tf_est * cumulativeMatrix;
    }

    cumulativeCloud.toPLYFile("model/new1.ply");

    system("pause");

    return 0;
}

Regarding the test data, I guess the old data I provided should also work for this test.

kzampog commented May 29, 2019

Wow, these numbers are crazy! I am getting about 45ms for the very last downsampling steps (end of sequence), and the whole thing takes about 6.5 seconds on my Ubuntu-based distro. Are you compiling with optimizations enabled (e.g. -O3 for gcc)?

If it's not optimizations, one thing to try would be to disable parallelism for downsampling and see what happens. To do that, try changing line 140 of grid_accumulator.hpp from:

#ifdef ENABLE_NON_DETERMINISTIC_PARALLELISM

to:

#if 0

and see if that changes the times.

ShiwenH commented May 30, 2019

-O3 is always enabled.
After changing the directive to #if 0, the times are as follows.
It is much better, but I guess it is still not perfect.

31.9123
31.9141
37.8982
66.821
54.8529
65.8255
74.7994
78.7888
90.7598
103.722
100.729
107.712
125.665
123.67
138.676
138.629
155.584
158.575
162.594
166.559
183.508
185.503
198.468
192.485
195.477
204.477
219.413
222.414
228.389
238.368
255.366
261.34
257.312
267.286
279.253
290.224
299.241
304.186
319.146
326.166
321.142
346.122
340.09
350.064
353.056
378.014
369.03
371.05
381.978

kzampog commented Jun 4, 2019

So it was parallelism that was slowing it down... On Linux, it does give measurable speedups!

Times now look much more reasonable, but are still a few times higher than what I get on my Linux setup.

I will try to investigate as soon as I get a chance. Thanks for reporting this!

ShiwenH commented Jun 5, 2019

My pleasure
