
CO-SLAM and Webots #42

Open
mattiapiz opened this issue Dec 15, 2023 · 8 comments
Labels
good first issue Good for newcomers

Comments

@mattiapiz

Hi @HengyiWang, I apologize for bothering you, but I am having trouble understanding this behavior.
In particular, I attached an RGB-D camera to my drone in a Webots environment simulation. The goal was to reconstruct the map and navigate the environment using CO-SLAM. I thought this would be possible, but the issue is related to the reconstructed color on the mesh.
[Image: Co-SLAM mesh (Mesh_co_slam)]
[Image: Webots environment (Mesh_webot)]
My question is whether this result could be caused by the images coming from a simulated environment, so that Co-SLAM has trouble using them, or whether I am doing something wrong.
Best regards

@HengyiWang
Owner

Hi @mattiapiz, this is a cool idea, and Co-SLAM should be able to achieve this. I suspect something in your setup is wrong. The screenshots you provided are not very clear; could you show me some screenshots in MeshLab so it is easier to see what is happening? In the meantime, can you check that you set a valid bounding box using https://github.com/HengyiWang/Co-SLAM/blob/main/vis_bound.ipynb?
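Besides inspecting the bound visually in the notebook, a quick numeric sanity check is to verify that the camera centers of the c2w poses actually fall inside the bound. A minimal sketch with numpy (the helper name is illustrative, not Co-SLAM API):

```python
import numpy as np

# Hypothetical check: do the camera centers of the c2w poses lie inside
# the scene bounding box? bound is [[xmin, xmax], [ymin, ymax], [zmin, zmax]].
def poses_inside_bound(c2w_list, bound):
    centers = np.stack([c2w[:3, 3] for c2w in c2w_list])  # translation columns
    lo, hi = bound[:, 0], bound[:, 1]
    return bool(np.all((centers >= lo) & (centers <= hi)))

bound = np.array([[-6.0, 6.0], [-1.0, 6.0], [-6.0, 2.0]])
identity_pose = np.eye(4)                     # camera at the origin
print(poses_inside_bound([identity_pose], bound))  # True: origin is inside
```

If this returns False for any frame, the bound is too tight (or the poses are in the wrong frame) and reconstruction will be clipped.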

@mattiapiz
Author

Thank you very much for the response, @HengyiWang. Regarding the bounding box, I believe it is correct.
Here is the result obtained by running `vis_bound.ipynb`:
[Image: vis_bound]
This is what I got from MeshLab:
[Image: meshlab2]
[Image: meshlab1]
I have been trying for days to solve the issue, but unfortunately, I am unable to figure it out. I would be grateful if you could provide me with some suggestions to help me solve the problem.
I thought it could be a problem with the camera reference frame or the c2w matrix. I used an identity matrix to initialize c2w, and I defined the camera reference frame from the RangeFinder's reference frame in Webots. However, after several attempts, I have concluded that this is merely a matter of convention, and that the choice of reference frames does not influence the result. Am I correct?
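For reference, the usual convention point is that OpenGL-style cameras look down −z with y up, while OpenCV-style cameras look down +z with y down; converting a c2w pose between the two flips the sign of the y and z basis vectors. A minimal sketch (a generic convention helper, not part of Co-SLAM):

```python
import numpy as np

# Convert a camera-to-world pose from the OpenGL convention
# (x right, y up, z backward) to the OpenCV convention
# (x right, y down, z forward) by negating columns 1 and 2.
def opengl_to_opencv_c2w(c2w):
    out = c2w.copy()
    out[:3, 1:3] *= -1.0
    return out

c2w = np.eye(4)
flipped = opengl_to_opencv_c2w(c2w)
print(flipped[:3, :3])  # rotation part with y and z axes negated
```

Applying the flip twice recovers the original pose, so if results change when you switch conventions, the depth-to-ray code and the poses are likely using different ones.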

@HengyiWang
Owner

Hi @mattiapiz, thanks for providing those details. As far as I can see, the surface reconstruction is not perfect, but still okay. These blocky color artifacts seem more related to the scale of the encoding. If you look into the color decoder, it takes the oneblob encoding as well as the geometric feature output from the SDF decoder. The default dimension of oneblob encoding is 16. In your case, the Y-axis (Green axis in Open3D) is very large. The effective dimension of the oneblob encoding may be less than 4 for the region of interest given this bounding box. Additionally, Co-SLAM assigns different numbers of voxels based on the largest dimension. These factors may result in this blocky color effect. I would suggest trying to change the dimension of oneblob encoding to see if it helps. Otherwise, you can try using two hash encodings, one for color, and another one for SDF by setting oneGrid: False.
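The two suggestions map to config changes roughly like the following; the key names are based on the Co-SLAM example configs as I understand them, so double-check the nesting against your own config file:

```yaml
grid:
  oneGrid: False    # separate hash grids for color and SDF
pos:
  enc: 'OneBlob'
  n_bins: 32        # try a value larger than the default 16
```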

@mattiapiz
Author

Thanks @HengyiWang for the fast answer. I've tried using two hash encodings, but unfortunately the result remains the same. I've also reduced the number of bins for the OneBlob encoding, but nothing changed. From my reading of the paper, having more bins in the OneBlob encoding should allow capturing more detail; is that correct?
I could attach the entire configuration I used.
camera_webots.txt
Maybe it is a problem with the depth images I pass to the algorithm? Webots provides them as float32, while the RealSense provides them as uint16. However, from what I've read, the algorithm takes float32 as input, so that shouldn't be the issue. I am at your disposal for further tests.
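On the depth-format point: RealSense-style depth is typically uint16 in millimeters, while the Webots RangeFinder returns float32 in meters, with out-of-range pixels set to infinity. A loader expecting metric float32 needs an explicit scale for the former and invalid-value masking for the latter. A hedged sketch of that normalization (the helper and its default scale are illustrative):

```python
import numpy as np

# Normalize either depth format to metric float32 with missing pixels as 0.
def depth_to_metric_float32(depth, depth_scale=1000.0):
    if depth.dtype == np.uint16:
        depth = depth.astype(np.float32) / depth_scale  # mm -> m
    depth = depth.astype(np.float32)
    depth[~np.isfinite(depth)] = 0.0   # treat inf/NaN as missing depth
    return depth

d16 = np.array([[1500, 0]], dtype=np.uint16)        # 1.5 m, missing pixel
d32 = np.array([[1.5, np.inf]], dtype=np.float32)   # 1.5 m, out of range
print(depth_to_metric_float32(d16))  # [[1.5 0. ]]
print(depth_to_metric_float32(d32))  # [[1.5 0. ]]
```

If the Webots inf pixels are fed through unmasked, any loss computed on them will blow up, so this masking is worth verifying even when the dtype already matches.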

@HengyiWang
Owner

Hi @mattiapiz, you may need to increase n_bins rather than decreasing it. If the problem still exists, maybe you could send me an email with your data, and I can try to see what is happening. This behavior looks quite interesting to me.

@mattiapiz
Author

Hi @HengyiWang, unfortunately that hasn't solved the problem. I have sent you an email with the dataset that I used.
Thanks for helping me.

@HengyiWang
Owner

Hi @mattiapiz, thank you for the data. I have now figured out the problem: the data type of your bounding box is int instead of float. You need to use [[-6.,6.],[-1.,6.],[-6.,2.]] instead of [[-6,6],[-1,6],[-6,2]]. This is my fault; I will update the repo later to support np.int as well. Thank you so much for reporting this issue!
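The failure mode is easy to reproduce in isolation: numpy arrays keep the dtype they were created with, and writing a fractional value into an integer-typed array truncates silently, so any downstream code that pads or rescales an integer-typed bound can corrupt it without raising an error. A minimal numpy illustration, independent of the Co-SLAM code:

```python
import numpy as np

bound_int = np.array([[-6, 6], [-1, 6], [-6, 2]])        # integer dtype
bound_flt = np.array([[-6., 6.], [-1., 6.], [-6., 2.]])  # float64 dtype

# Assigning a fractional value into the int array drops the fraction:
b = bound_int.copy()
b[0, 0] = -6.4
print(b[0, 0])    # -6

# The float-typed bound keeps it:
b2 = bound_flt.copy()
b2[0, 0] = -6.4
print(b2[0, 0])   # -6.4
```

This is why writing the bound with explicit decimal points (or casting it with `.astype(np.float32)` once at load time) sidesteps the whole class of bug.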

@HengyiWang HengyiWang added the good first issue Good for newcomers label Dec 21, 2023
@mattiapiz
Author

Hi @HengyiWang, thank you very much. Without you, I don't think I would have figured out the solution, so I greatly appreciate your effort. Now I can proceed with the reconstruction of the apartment.
Just one last question: do you have any suggestions regarding the reference frames, the c2w matrix, and the camera frame in utils.py for the computation of the rays? Or should I just use the OpenGL conventions? I've sometimes noticed different results depending on the reference frame I choose.
