
ReconstructionSystem with realsense at higher resolution #1273

Closed
argosvr opened this issue Oct 28, 2019 · 8 comments
argosvr commented Oct 28, 2019

I'm using a robotic system to make a body scan (stepper motors move the sensor through different angles to capture a complete view).
I'm trying to get a better resolution by tuning the voxel size, MaxClamp and MinClamp, and I get the result I want using a modified version of your sensor capture for RealSense and ReconstructionSystem with a voxel size of 0.05, 0.04 and 0.03 (of course I adapt the voxel size, max depth and min depth in the config file accordingly).
But when I try 0.02, I don't get any correct alignment and the result is unusable. I tried changing various parameters, such as increasing the loop-closure registration preference, but I can't get a usable result.
Which parameters can I change in the config (or the code) to get a result? I would like to be able to work at 0.02 and 0.01 voxel resolution (0.01 being the theoretical resolution the RealSense sensor can achieve). How can I do that?
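For reference, the parameters I'm adjusting live in the ReconstructionSystem JSON config. A trimmed fragment (key names as I recall them from the example config, so please check against your own config.json; the values shown are just illustrative):

```json
{
    "voxel_size": 0.05,
    "max_depth": 3.0,
    "min_depth": 0.3,
    "max_depth_diff": 0.07,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0
}
```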

@argosvr argosvr added the question label Oct 28, 2019
@argosvr argosvr changed the title ReconstructionSystem with realsense at lower resolution ReconstructionSystem with realsense at higher resolution Oct 28, 2019
theNded (Contributor) commented Oct 29, 2019

I guess the naming is a bit misleading. voxel_size is only used for registration and does not affect reconstruction resolution. What takes effect is tsdf_cubic_size = 3.0: the real voxel size used for reconstruction is tsdf_cubic_size / 512 ≈ 0.0059, which is already small enough.
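To make the distinction concrete, here is a small sketch of where the two parameters act (plain Python arithmetic; variable names are mine, but the division by 512 is what feeds the TSDF volume's voxel length in the reconstruction pipeline):

```python
# Sketch: the two "voxel" parameters play different roles.
voxel_size = 0.05        # registration only: coarsest downsampling for ICP
tsdf_cubic_size = 3.0    # side length of the reconstruction volume, in meters

# The actual reconstruction resolution comes from the TSDF volume,
# which subdivides the cube into 512 voxels per side:
integration_voxel_length = tsdf_cubic_size / 512.0
print(integration_voxel_length)  # 0.005859375 m, i.e. roughly 6 mm per voxel
```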

Intel RealSense is noisy. If you want high-quality body scans, I would suggest using StructureSensor or AzureKinect.

argosvr (Author) commented Oct 29, 2019

No, I'm not speaking about tsdf_cubic_size (I know how to get better reconstruction using that), and the name comes from the ReconstructionSystem I'm using as-is, the batch process provided by the Open3D team. I'm speaking about voxel_size: from 0.05 to 0.03 I can tell you that the results are very, very different.
And the RealSense is noisy when you use it out of the box, but with the right light and the right parameters the results are surprisingly good.
So I maintain my question, which is really about alignment and registration: why does it work with a voxel_size of 0.05, 0.04 and 0.03 but not with 0.02? I had to change option(WITH_OPENMP "Use OpenMP multi-threading" ON) to OFF because even with the latest version of Open3D the examples were not giving the expected results. But I don't see which parameters I can change to make it work at 0.02 voxel_size.

theNded (Contributor) commented Oct 30, 2019

That's probably because the point cloud is too dense at the coarsest scale, so the registration converges to a local optimum.

See this line: multi-scale ICP is now performed on point clouds downsampled with voxel resolutions [0.02, 0.01, 0.005], which only reduces the point cloud size a little.

My suggestion is to keep the coarsest voxel resolution at 0.05 to allow a higher convergence rate. If you want better registration at a finer scale, I would suggest changing this line to something like
[voxel_size, voxel_size/2.0, voxel_size/4.0, voxel_size/8.0], [50, 30, 14, 10] (I haven't tested this, but it should work).
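As a sketch, the schedule that change builds would look like this (pure Python; the function name is mine, not Open3D's):

```python
def multiscale_schedule(voxel_size, levels=4, max_iters=(50, 30, 14, 10)):
    """Build coarse-to-fine downsampling voxel sizes and ICP iteration
    counts, halving the voxel at each level as suggested above."""
    voxel_sizes = [voxel_size / (2.0 ** i) for i in range(levels)]
    return voxel_sizes, list(max_iters[:levels])

sizes, iters = multiscale_schedule(0.05)
print(sizes)  # [0.05, 0.025, 0.0125, 0.00625]
print(iters)  # [50, 30, 14, 10]
```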

@syncle Should we tune the code a little so that, when the voxel_size for downsampling is smaller than the grid voxel size, we use the original point cloud instead of downsampling? This might improve the finest registration a little.
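The guard I have in mind would be something like this (a hypothetical helper, not current Open3D code; `pcd` is assumed to expose Open3D's `voxel_down_sample(voxel_size)` method):

```python
def downsample_for_icp(pcd, icp_voxel_size, grid_voxel_size):
    """Downsample a point cloud for ICP, but skip downsampling when the
    requested ICP voxel is already finer than the reconstruction grid:
    at that point downsampling only discards points without adding detail.
    """
    if icp_voxel_size <= grid_voxel_size:
        return pcd  # keep the original points
    return pcd.voxel_down_sample(icp_voxel_size)
```

Here `grid_voxel_size` would be `tsdf_cubic_size / 512`.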

argosvr (Author) commented Nov 1, 2019

Thank you, I will try this.
After reading issue #797 and having big questions about the registration process, I discovered that I was unable to properly register the Stanford Burghers example. So I rebuilt from source with option(WITH_OPENMP "Use OpenMP multi-threading" OFF) in the CMakeLists.
Then (after a lot of problems, because your conda script creates a py36 env while make install-pip-package gave me a cp37 wheel that was not installable!) I was able to get a proper registration for the Stanford Burghers example.
I had to completely reinstall my computer to get this to work!
Now I will test your solution, thanks.
But really, with proper parameters and lighting (which is very, very important), you get very low noise even at 0.02 with the D415 and the latest firmware.

argosvr (Author) commented Nov 2, 2019

At 0.02, the solution you gave me ([voxel_size, voxel_size/2.0, voxel_size/4.0, voxel_size/8.0], [50, 30, 14, 10]) worked very well and I can get a very good point cloud.
Because the D400 is able to get to 0.01, I tried that too.
With [voxel_size, voxel_size/2.0, voxel_size/4.0, voxel_size/8.0], [50, 30, 14, 10] it did not work. I also tried [voxel_size, voxel_size/2.0, voxel_size/4.0, voxel_size/8.0, voxel_size/16.0], [50, 30, 14, 10, 10] to mimic your solution, but it did not work either. It is more an intellectual question than a problem I really need to solve: I'm going to get a better sensor than the D400, and I know I will hit this problem again.
So, if you have an idea...
But thanks for your answer!
By the way, you said in your first answer that a voxel size of 0.0058 is already small enough, and I really don't agree with that: at 30 cm you can get a resolution of less than 1 mm with a D415, which is what you need to capture a human face.

argosvr (Author) commented Nov 10, 2019

Any idea how to make it work at 0.01?
Reading https://github.com//issues/1261#issuecomment-549909729, I think the best solution would be to use that approach, compute the voxel_size used for alignment, and then modify the ReconstructionSystem to make it more flexible and usable.

argosvr (Author) commented Nov 17, 2019

OK... no answer... you don't like Intel cameras...

theNded (Contributor) commented Nov 18, 2019

Sensor accuracy may provide clues for tuning registration or integration parameters, but the two are not necessarily the same. For instance, registering point clouds means finding the best possible transformation between two point sets with noise and partial overlap. Denser inputs are not always better; otherwise the registration algorithm is likely to converge to a local minimum. That a sensor can achieve 0.01 accuracy does not mean that starting registration from a 0.01 voxel size is the best solution.

As I have mentioned, voxel_size defines the coarsest resolution for registration. You need to make sure the coarsest resolution is large enough to give a higher probability of converging to the global optimum. The usual practice is to start with a large value and shrink it iteratively. Based on this, by default we start from 0.05, and in my previous comments I suggested starting from 0.05 with more iterations.

If you insist on registering at the finest scale, just start with the default voxel_size=0.05 and do as many downsamplings as you like to shrink the voxel size: [0.05, 0.025, 0.0125, 0.00625, ...]. This configuration is more likely to work.
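A sketch of generating such a schedule down to a target resolution (pure Python; the function name is mine):

```python
def halving_schedule(start=0.05, target=0.01):
    """Halve the voxel size from `start` until it falls below `target`."""
    sizes = [start]
    while sizes[-1] > target:
        sizes.append(sizes[-1] / 2.0)
    return sizes

print(halving_schedule(0.05, 0.01))  # [0.05, 0.025, 0.0125, 0.00625]
```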

More comments on your comments:
We don't have a preference among cameras. Instead, we try to find the optimal configuration that generally works for all sensors. It may happen, due to different sensor properties, that this configuration does not work as well as expected. But the usual practice is to start from the existing tuned parameters, as they are based on our experience with various data. Please be sure to understand the roles the parameters play when you tune them based on your own understanding.

We try our best to solve as many issues as we can. Please be patient while waiting for a reply, and please make your questions more interpretable; see the issue template.

@theNded theNded closed this Nov 18, 2019