Imported Raster is misaligned with Point Cloud data. Bug? #80
Well, it is common knowledge that the color CANNOT align properly without some calibration...
Hello Marco, Thank you for your comment. I am working with @rarrais on this, so let me add a few comments. What you say is correct, but to the best of my knowledge the point clouds produced by the depth cameras are already aligned wrt the rgb at the driver level (for example, see https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration). That's why we believe we should have a proper alignment ...
I do not think the two are aligned at the driver level. This step is always done afterwards, as the calibration is done in software, and not stored in the device itself.
Dear Marco, Thanks, I will make additional attempts to make it work. Regards, Miguel
Hey @mcallieri , First of all, thank you very much for your assistance. As @miguelriemoliveira said, and following your advice, we investigated further the potential calibration error between the point cloud and the RGB image. Although our initial idea was that the depth image was calibrated wrt the RGB image, we developed a method for checking whether that premise holds true.

Our first approach was to replicate the calibration and visualisation procedure of the Kinect 2 driver that we are using, https://github.com/code-iai/iai_kinect2 . This was our output: It looked pretty well aligned to our eyes, but we were still not satisfied by this test. As such, we developed a new testing procedure to better validate this alignment. In this test, we aligned the camera with the table, so that the camera is looking longitudinally at the table. Here is the experiment setup: The idea is to read the coordinates of a point on the border of the table. Since we aligned the camera with the table, a point on the border should have an X coordinate of around 0.026 meters, from our manual measurements.

Using RVIZ: the coordinates are close to what was expected (note that the distance between the rgb and ir cameras is around 0.052 meters, so we would see a 5 cm error if the point cloud were (wrongly) registered in the depth frame).

Using PCLViewer: same conclusion. Selecting the same points, these are the coordinates: Notice how the x coordinate is around 2 cm.

Summary: from these tests, it seems the point cloud is registered in the rgb optical frame, as was our initial conviction. Given this, we are somewhat lost as to what might be happening when we import rasters into Meshlab and see a clear misalignment against the point cloud. As I said in my initial post, we suspect that this misalignment is causing errors when aligning the texture with the mesh, using the tool provided in Meshlab. What might be causing this?
The camera intrinsics that we are giving to meshlab through the VCGCamera file? The way Meshlab internally handles those parameters? Can you (or someone else) point us to, or give us, a dataset where the image rasters align perfectly with the point cloud? With such a dataset we could perhaps retrace our problem and figure out what's wrong. Any help is deeply appreciated! Once again, thank you very much for your assistance @mcallieri ! Best regards,
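The border-point test described above can also be reproduced numerically. The sketch below is our own illustration, not from the thread: it projects a hypothetical table-border point through a standard pinhole model, using the rgb intrinsics that appear in the agisoft.xml posted further down; the 3D points themselves are made-up values matching the measurements quoted above.

```python
import numpy as np

def project_pinhole(p_cam, fx, fy, cx, cy):
    """Project a 3D point (camera frame, meters) to pixel coordinates."""
    x, y, z = p_cam
    return np.array([cx + fx * x / z, cy + fy * y / z])

# Hypothetical table-border point, expressed in the RGB optical frame.
p_rgb = np.array([0.026, 0.0, 1.0])           # x ~ 2.6 cm, as measured by hand

# If the cloud were (wrongly) left in the depth/IR frame, the same point
# would sit shifted by the rgb-ir baseline (~0.052 m on a Kinect 2).
baseline = 0.052
p_depth = p_rgb + np.array([baseline, 0.0, 0.0])

fx, fy, cx, cy = 1065.5, 1076.5, 966.0, 542.0  # rgb intrinsics from the xml below
u_rgb = project_pinhole(p_rgb, fx, fy, cx, cy)
u_depth = project_pinhole(p_depth, fx, fy, cx, cy)
print(u_rgb[0], u_depth[0])  # a wrong frame shows up as ~55 px of horizontal offset
```

At 1 m depth a frame mix-up of this kind would be clearly visible in the overlay, which is why the test above can distinguish the two cases.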
Can you send a geometry and an image, so I can do some testing? |
Hello Marco, Thank you for making yourself available to help us. We took longer than expected because we wanted to run the test with the latest version of Meshlab, and it took us a while to install and re-setup the system. During our tests with this new version of meshlab (2016.12, built on 24 January) we ran into another problem related to importing camera poses into meshlab. Now, after importing the camera pose from the xml (we are using the agisoft xml format), when we press the "show current raster mode" button we cannot see the point cloud, only the image (even when we change the alpha with the mouse wheel). See this screenshot: Note that the camera seems to be in the correct position, see this: All the files we used for the test are here https://www.dropbox.com/sh/0gx5yvszi23k03s/AABT7Ht5jlweNL7ww63MLD8Qa?dl=0 This is our agisoft.xml file. We took most of the data from a calibration done as described here https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration (there is a yaml file containing the output of the calibration) and took the remaining parameters for pixel height, pixel width and focal length (all in millimeters, right?) from Table 3 of this paper. <?xml version="1.0" encoding="UTF-8"?>
<document version="1.2.0">
<chunk>
<sensors>
<sensor id="0" label="unknown0" type="frame">
<resolution width="1920" height="1080"/>
<property name="pixel_width" value="0.0031"/>
<property name="pixel_height" value="0.0031"/>
<property name="focal_length" value="3.291"/>
<property name="fixed" value="false"/>
<calibration type="frame" class="adjusted">
<resolution width="1920" height="1080"/>
<fx>1065.5</fx>
<fy>1076.5</fy>
<cx>966</cx>
<cy>542</cy>
<k1>0</k1>
<k2>0</k2>
<p1>0</p1>
<p2>0</p2>
</calibration>
</sensor>
</sensors>
<cameras>
<camera id="0" label="image.png" sensor_id="0" enabled="true">
<transform>1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1</transform>
</camera>
</cameras>
</chunk>
</document> We set all the distortion parameters to 0 because image.png is already an undistorted image. Note that, as discussed above, the point cloud should be registered with the rgb image, meaning it was already transformed to align with the rgb reference frame. For that reason we would expect to see a very good alignment between the point cloud and the rgb. Once again, thanks for your help. If something is not clear please ask and we'll do our best to clarify. Regards, Miguel
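One thing worth double-checking in a sensor block like the one above is that the two ways of expressing the focal length agree: in the Agisoft format, `focal_length` (mm) should equal `fx` (pixels) times `pixel_width` (mm per pixel). The arithmetic below is our own sanity check, not something from the thread:

```python
fx_px = 1065.5           # <fx> from the calibration, in pixels
pixel_width_mm = 0.0031  # <property name="pixel_width">, mm per pixel
focal_mm_listed = 3.291  # <property name="focal_length">, taken from the paper

focal_mm_derived = fx_px * pixel_width_mm
print(focal_mm_derived)  # ~3.303 mm, vs the 3.291 mm listed (~0.4% apart)
```

A ~0.4% mismatch like this amounts to only a few pixels across the image, so it likely does not explain a large offset, but it is cheap to rule out by making the sensor block self-consistent.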
First of all, I tried using the data in the paper: FocalMm="3.291" PixelSizeMm="0.0031 0.0031" ... and everything is ALMOST aligned. Almost, but not completely, as you said :( I'm sorry I'm not able to give you a solution for this problem.
Hi Marco, Thank you for your input. My replies:
Other comments: you say almost aligned. Is it the same misalignment we initially reported? Can you post some screenshots so we can take a look? Regarding the missing extrinsic parameters, I will post an issue on the kinect2 driver's github page just to make sure. I believe the extrinsics should be rotation = identity and translation = zeros, because the point cloud was already transformed into the rgb optical reference frame beforehand. Note that, as far as I understood from the kinect2 calibration, it is the point cloud (the depth data) that is transformed to correctly align with the rgb data, not the other way around. This would explain why you do not see the X-offset in the RGB data: it was the depth that was offset. In any case I will open the issue to try to clarify this. You are definitely helping us a lot. Thank you. We will keep investigating while we wait for your feedback concerning the clipping error (could it be the cause of the alignment problem?). Best regards, Miguel
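The reasoning in that comment can be written down explicitly: if the driver applies the rgb-from-depth transform to the cloud before publishing it, the cloud is already in the RGB optical frame, so the pose handed to MeshLab reduces to the identity. A minimal sketch (the transform values are made-up placeholders of the kind the kinect2 driver would apply internally):

```python
import numpy as np

# Hypothetical rgb <- depth transform: rotation ~ identity, ~5.2 cm baseline.
T_rgb_from_depth = np.eye(4)
T_rgb_from_depth[0, 3] = -0.052

p_depth = np.array([0.078, 0.0, 1.0, 1.0])  # a point in the depth frame
p_rgb = T_rgb_from_depth @ p_depth          # what the driver publishes: rgb frame

# Since the published cloud is already expressed in the rgb frame, the
# extrinsics passed to MeshLab (cloud -> rgb camera) are the identity.
T_meshlab = np.eye(4)
assert np.allclose(T_meshlab @ p_rgb, p_rgb)
```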
Just a follow up. I opened this issue some months ago on the kinect2 github page about a misalignment between the rgb and point cloud data. It turned out that the misalignment was only in the Z direction and was due to an "immature" depth calibration algorithm. Once we disabled that algorithm everything was fine, meaning the data was correctly aligned. Note that the misalignment we have in meshlab is in XY and does not appear to be the same...
Dear @mcallieri , Just a follow up on the previous comment by @miguelriemoliveira : we wanted to clarify whether the depth image is in fact calibrated wrt the rgb image for the camera system that we are working with. For that purpose, we opened an issue on the driver github - code-iai/iai_kinect2#376 . It seems that the depth image is transformed into the rgb frame of reference at the driver level, as we initially suspected. So at least we now know that the input matrix in the VCGCamera file should be an identity matrix, as we were doing. Unfortunately, that still does not help solve our problem :( Thank you, once again, for your assistance. Best regards,
Dear @mcallieri , Just a quick update from our side:
https://drive.google.com/open?id=0B8IbqXN_5JgBSi1hWDdZUDljZzg Please let us know if we can be of any assistance in solving this issue. Once again, thank you for your time. Best regards,
Hello,
I have a 3D Time of Flight camera (e.g. a kinect or an xtion), and I am trying to use it for creating a textured mesh.
I am importing the point cloud which has color associated to the vertices.
At the same time, I also import a raster of the associated RGB image.
I cannot make the color in the vertices align properly with the color in the imported raster. Note that the point cloud and the image (raster) are taken with the camera standing still, so I think they should align (perfectly).
My guess is that I am not writing the VCGCamera file, which gives the information about the camera's pose and intrinsics, correctly.
Here is my last try at the VCGCamera xml file to be loaded in meshlab:
<project>
  <VCGCamera CenterPx="320 240"
             FocalMm="540.5137870197473"
             LensDistortion="0.0394209281388728 -0.11455730670956"
             PixelSizeMm="1 1"
             RotationMatrix="1.0 0.0 0.0 0.0  0.0 -1.0 0.0 0.0  0.0 0.0 -1.0 0.0  0.0 0.0 0.0 1.0"
             TranslationVector="0.0 0.0 0.0 1"
             ViewportPx="640 480"/>
  <ViewSettings FarPlane="1000" NearPlane="-1000" TrackScale="0.01"/>
</project>
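Our reading of this file (an assumption worth verifying against MeshLab's/VCGlib's Shot code, not something stated in the thread) is that with PixelSizeMm="1 1" the FocalMm value acts as a focal length expressed directly in pixels, and the RotationMatrix's -1 entries flip into a GL-style camera frame looking down -Z. Ignoring the LensDistortion terms, the projection it encodes would then be:

```python
import numpy as np

# Intrinsics from the VCGCamera file above. With PixelSizeMm = "1 1",
# FocalMm is effectively a focal length in pixels (our assumption).
focal_px = 540.5137870197473
cx, cy = 320.0, 240.0                # CenterPx
R_flip = np.diag([1.0, -1.0, -1.0])  # the Y/Z flip in RotationMatrix

def project(p_world):
    """Project a point (meters, camera at the origin) to pixel coordinates."""
    x, y, z = R_flip @ p_world       # into the GL-style camera frame
    return np.array([cx + focal_px * x / -z, cy + focal_px * y / -z])

print(project(np.array([0.0, 0.0, 1.0])))  # a point on the optical axis
```

Under this reading, a point 1 m straight ahead lands on the principal point (320, 240); if the rasters in MeshLab consistently miss by the same few pixels, comparing against a hand projection like this one can tell an intrinsics error apart from a pose error.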
The alignment is good but never perfect, and it should be correct, right?
Here is the point cloud:
Here is the projection of the raster:
And both overlayed, where you can see the misalignment (notice the aruco marker on the wall)
I've tested this using both version 1.3.2 (19 Feb 2016) on Ubuntu and on 2016.12 (23 Dec 2016) on Mac OS. I've also used two different camera systems, an ASUS Xtion PRO Live (equivalent to the Kinect 1) and a Kinect 2.
Any help is hugely appreciated.
Best regards,
Rafael Arrais