
Improvement for VoxelDownsample #347

Closed
wants to merge 8 commits

Conversation

l00p3
Contributor

@l00p3 l00p3 commented Jun 12, 2024

The function VoxelDownsample in Preprocessing.cpp can be done in a single pass. Instead of building a hash map and then extracting the points from it, we can fill the output vector of points while checking whether each voxel is already occupied. With this change we also no longer need a hash map that stores the points; a set of occupied voxels is enough.

Furthermore, a const voxel_size parameter is probably better.

@nachovizzo
Collaborator

Fantastic, I love this!

Is it too much to ask for a quick and dirty benchmark? Just to log in this PR what the gain of adding this is.

@tizianoGuadagnino
Collaborator

I love it <3. As Nacho said, if you can just post some results for some basic benchmarks (MulRan, NCLT, etc.), that would be fantastic.

@l00p3
Contributor Author

l00p3 commented Jun 13, 2024

After some small changes (I now use robin_set instead of std::set), these are the results. Surprisingly, it reduces the error (except on NCLT), for some reason. I will test everything on the KITTI dataset and post the results.

Before changes (main branch):

[benchmark screenshots: base_1, base_2]


After changes:

[benchmark screenshots: update_1, update_2]

@tizianoGuadagnino
Collaborator

@nachovizzo @benemer on Sejong 02 the absolute error dropped by about 500 meters... I am a bit scared...

@benemer
Member

benemer commented Jun 13, 2024

I am, in general, a bit surprised the results change so much...the downsampling itself should be deterministic, and this part is not multi-threaded, right?

@l00p3
Contributor Author

l00p3 commented Jun 14, 2024

Ok, I have the results for KITTI. Here there is an increase in the error, but sometimes a reduction in the absolute error. The frame rate is consistently higher, as expected.

[KITTI benchmark screenshots: kitti1, kitti2, kitti3]

@l00p3
Contributor Author

l00p3 commented Jun 14, 2024

> I am, in general, a bit surprised the results change so much... the downsampling itself should be deterministic, and this part is not multi-threaded, right?

This should be an explanation for the change in the errors: each frame is voxelized twice.
With the original code, during the first voxelization the points were selected in sequence by going through the vector of points, so the first point of each voxel was selected deterministically. However, the output vector of that first call was a "shuffled" version of the voxelized points, because it was constructed by looping over the map, which is unordered by definition.
For this reason, during the second voxelization (to obtain the keypoints for registration), the first point of each voxel was selected in a different order every time, producing a less regular structure, as in this scan from a KITTI sequence:
[screenshot: scan_main]

With the new approach, during the first voxelization we still select the points in order, but we also add them to the output vector in that same order, because we avoid the loop over the map. As a result, the second voxelization also selects the first point of each voxel in a deterministic order. This is shown on the same frame of the KITTI sequence, this time using the new approach:
[screenshot: scan_update]

Now, the fact that the structure of the voxelized cloud influences the error so much is still surprising, but not in the realm of black magic.
@nachovizzo @tizianoGuadagnino @benemer you should now decide whether you want the regular structure or not.
Thanks.

@benemer
Member

benemer commented Jun 14, 2024

This is amazing. We finally found the reason why the double downsampling broke the scan pattern. Plus, we can see that the more regular pattern does not necessarily harm the performance.

The "worse" results on KITTI are fine for me, because the KITTI GT is not good, especially for relative metrics.

Do you guys think this more regular-looking scan pattern can cause any issues?

@benemer
Member

benemer commented Jun 14, 2024

I am running experiments on the Helipr data now, so let's see how much it changes there!

@l00p3
Contributor Author

l00p3 commented Jun 27, 2024

Important update!
I tested the system with only a single downsample, and then I tested this PR's update with a single downsample. Here are the results. It would be interesting to analyze this direction as well and test it more. With the approach of this PR, a single downsampling is probably sufficient.

[screenshot: benchmark results, 2024-06-27]

@nachovizzo
Collaborator

@l00p3, thank you a lot for pushing on this. A side comment: would you be able to run all the experiments (in case you haven't already) without multithreading? That would tell us the speedup in the system independent of your hardware, as the two are not linearly correlated. Amazing and super interesting results, btw.

@nachovizzo nachovizzo added the core label Jul 5, 2024
@nachovizzo
Collaborator

I would actually like to have this in the system. But since I don't see clear evidence that this improves or degrades 100% of the cases, I suggest we add a config flag regular_downsampling that is False by default, and that we maintain both voxelization strategies until we are 100% convinced that one is better than the other.

It seems to me that for "regular" pattern-like LiDARs, this PR works better. But for non-repetitive patterns, such as Livox or BPearl sensors, the former voxelization strategy is better.

If you don't want to do the whole engineering, let me know (and give me access to your fork) so I can take it from here and do the config magic.

@l00p3
Contributor Author

l00p3 commented Jul 9, 2024

You should now have access to the fork; it's probably better if you manage it, so I won't mess up the code.
I still didn't manage to test on a single core, but I can run that test this week if it is still needed.

@tizianoGuadagnino
Collaborator

@l00p3 at this point, I think trying on a single core is unnecessary. We will probably work on #362 to renovate VoxelDownsample and the voxel-related code in general. This was a huge contribution to the project, mainly because it triggered some big changes in the code base and in how we manage voxelization in general; although we will probably not use code from this PR, I will make sure you are on the list of contributors. Thanks again <3

@nachovizzo nachovizzo added the voxelization All the topic related to voxelization utilities label Aug 5, 2024