Imported Raster is misaligned with Point Cloud data. Bug? #80

Closed
rarrais opened this issue Feb 16, 2017 · 12 comments

@rarrais

rarrais commented Feb 16, 2017

Hello,

I have a 3D Time of Flight camera (e.g. a Kinect or an Xtion), and I am trying to use it to create a textured mesh.

I am importing the point cloud, which has color associated with the vertices.

At the same time, I also import a raster of the associated RGB image.

I cannot make the color of the vertices align properly with the color in the imported raster. Note that the point cloud and the image (raster) are taken with the camera standing still, so I think they should align (perfectly).

My guess is that I am not correctly writing the VCGCamera file, which gives the camera's pose and intrinsics.

Here is my last try at the VCGCamera xml file to be loaded in meshlab:

<project>
    <VCGCamera CenterPx="320 240"
               FocalMm="540.5137870197473"
               LensDistortion="0.0394209281388728 -0.11455730670956"
               PixelSizeMm="1 1"
               RotationMatrix="1.0 0.0 0.0 0.0  0.0 -1.0 0.0 0.0  0.0 0.0 -1.0 0.0  0.0 0.0 0.0 1.0"
               TranslationVector="0.0 0.0 0.0 1"
               ViewportPx="640 480"/>
    <ViewSettings FarPlane="1000" NearPlane="-1000" TrackScale="0.01"/>
</project>
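(In case it helps, this is roughly how such a line can be assembled from the intrinsics reported by the driver. It is only a minimal Python sketch under our assumptions: PixelSizeMm is set to "1 1" so that FocalMm can be given directly in pixels, distortion is left at zero here, and the pose is the identity because the cloud should already be in the RGB optical frame.)

# Minimal sketch: build the VCGCamera project line from pinhole intrinsics
# (fx, cx, cy in pixels). Assumptions: PixelSizeMm="1 1" so FocalMm is given
# directly in pixels, zero distortion, identity pose.
fx = 540.5137870197473          # focal length [px]
cx, cy = 320.0, 240.0           # principal point [px]
width, height = 640, 480        # image resolution [px]

vcg_camera = (
    '<project>'
    ' <VCGCamera CenterPx="%g %g" FocalMm="%.13f" LensDistortion="0 0"'
    ' PixelSizeMm="1 1"'
    ' RotationMatrix="1 0 0 0 0 -1 0 0 0 0 -1 0 0 0 0 1"'
    ' TranslationVector="0 0 0 1" ViewportPx="%d %d"/>'
    ' <ViewSettings FarPlane="1000" NearPlane="-1000" TrackScale="0.01"/>'
    '</project>'
) % (cx, cy, fx, width, height)

with open("camera_pose.xml", "w") as f:   # file name is just an example
    f.write(vcg_camera)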

The alignment is good but never perfect, and it should be correct, right?

Here is the point cloud:

point_cloud

Here is the projection of the raster:

raster_projection

And both overlaid, where you can see the misalignment (notice the ArUco marker on the wall):

point_cloud_and_raster_projection

I've tested this using both version 1.3.2 (19 Feb 2016) on Ubuntu and on 2016.12 (23 Dec 2016) on Mac OS. I've also used two different camera systems, an ASUS Xtion PRO Live (equivalent to the Kinect 1) and a Kinect 2.

Any help is hugely appreciated.
Best regards,
Rafael Arrais

@mcallieri
Member

Well, it is common knowledge that the color CANNOT align properly without some calibration....
The Kinect-like devices have 2 cameras: one infrared camera for geometry and an RGB camera for color. The two cameras are NOT co-axial, so the color has to be "aligned" to the geometry using some sort of calibration; using an "identity" transformation for the camera, like you did, will not work.
Just do a search on Google, there are plenty of resources for many programming languages and platforms.
Fortunately, as the cameras have fixed focus and a fixed relative position, this only has to be done once.
You may even try using the Mutual Information camera alignment filter (see the video tutorial on the Mr. P MeshLab Tutorials channel on YouTube). I do not guarantee that it will work (we have never tried), but if it does, the camera calibration should be valid for all the scans. Remember to use a scene with enough geometry, and not just flat walls, for the alignment ;).

@miguelriemoliveira

Hello Marco,

Thank you for your comment. I am working with @rarrais on this, so let me add a few comments.

What you say is correct, but to the best of my knowledge the point clouds produced by the depth cameras are already aligned wrt the rgb at the driver level (for example, see https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration).

That's why we believe we should have a proper alignment ...

@mcallieri
Member

I do not think the two are aligned at the driver level. This step is always done afterwards, as the calibration is done in software and not stored in the device itself.
The page you have linked describes a calibration procedure. After the calibration, you should probably have some translation/rotation (which is now missing from the imported camera you attached), and not just a focal length and a distortion.
Beware: it is always a pain converting one calibration format to another; it will be a matter of trial and error to convert the camera format produced by that calibration procedure to the one used by MeshLab.

@miguelriemoliveira

Dear Marco,

Thanks, I will make additional attempts to make it work.

Regards,

Miguel

@rarrais
Author

rarrais commented Feb 22, 2017

Hey @mcallieri ,

First of all, thank you very much for your assistance.

As @miguelriemoliveira said, and following your advice, we further investigated the potential calibration error between the point cloud and the RGB image. Although our initial idea was that the depth image was calibrated wrt the RGB image, we developed a method to check whether that premise holds true.

Our first approach was to replicate the calibration and visualisation procedure of the Kinect 2 driver that we are using, https://github.com/code-iai/iai_kinect2. This was our output:

[screenshot: driver calibration/visualisation output]

It looked pretty well aligned to our eyes, but we were still not satisfied with this test. As such, we developed a new testing procedure to better validate this alignment. In this test, we aligned the camera with the table, so that the camera is looking longitudinally along the table. Here is the experimental setup:

[photos of the experimental setup]

The idea is to check the coordinates of a point on the border of the table. Since we aligned the camera with the table, a point on the border should have an X coordinate of around 0.026 meters, according to our manual measurements.

Using RVIZ:

[screenshot: RViz point selection]

The coordinates are close to what was expected (note that the distance between the RGB and IR cameras is around 0.052 meters, so we should see a ~5 cm error if the point cloud were (wrongly) registered in the depth frame).

Using PCLViewer:

Same conclusion. Selecting the same points:

[screenshot: PCLViewer point selection]

These are the coordinates:

[screenshot: selected point coordinates]

Notice how the x coordinate is around 2 cm.
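For anyone wanting to repeat this spot-check without clicking around in the viewers, here is a minimal ROS1 Python sketch; the topic name and the pixel coordinates are assumptions about our particular setup and need to be adapted:

# Minimal sketch: read the 3D coordinates of one pixel of the organized
# cloud published by the driver and print them. The topic name and the
# (u, v) pixel of the table border are placeholders.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def callback(cloud):
    u, v = 320, 400  # pixel on the table border (example values)
    for x, y, z in point_cloud2.read_points(
            cloud, field_names=("x", "y", "z"), skip_nans=False, uvs=[(u, v)]):
        rospy.loginfo("point at (%d, %d): x=%.3f y=%.3f z=%.3f", u, v, x, y, z)

rospy.init_node("table_border_check")
rospy.Subscriber("/kinect2/qhd/points", PointCloud2, callback)
rospy.spin()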

Summary: From these tests, it seems the point cloud is registered in the rgb optical frame, as was our initial conviction.

Given this, we are somewhat lost as to what might be happening when we import rasters into MeshLab and observe a clear misalignment with respect to the point cloud. As I said in my initial post, we suspect that this misalignment is causing errors when aligning the texture with the mesh using the tool provided in MeshLab.

What might be causing this? The camera intrinsics that we are giving to MeshLab through the VCGCamera file? The way MeshLab internally handles those parameters?

Can you (or someone else) point us to, or give us, a dataset where the image rasters align perfectly with the point cloud? With such a dataset we might be able to retrace our problem and figure out what is wrong.

Any help is deeply appreciated!

Once again, thank you very much for your assistance @mcallieri !

Best regards,
Rafael

@mcallieri
Member

Can you send a geometry and an image, so I can do some testing?

@miguelriemoliveira

Hello Marco,

Thank you for making yourself available to help us. We took longer than expected because we wanted to run the test with the latest version of MeshLab, and it took us a while to install it and set the system up again.

During our tests with this new version of MeshLab (2016.12, built on 24 January) we ran into another problem related to importing camera poses into MeshLab. Now, after importing the camera pose from the xml (we are using the Agisoft xml format), when we press the "show current raster mode" button we cannot see the point cloud, only the image (even when we change the alpha with the mouse wheel). See this screenshot:

showcurrentrastermode_diplays_nothing

Note that the camera seems to be in the correct position, see this:

camera_seems_well_placed

All the files we used for the test are here:

https://www.dropbox.com/sh/0gx5yvszi23k03s/AABT7Ht5jlweNL7ww63MLD8Qa?dl=0

This is our agisoft.xml file. We took most of the data from a calibration done as described here, https://github.com/code-iai/iai_kinect2/tree/master/kinect2_calibration (there is a yaml file containing the calibration output), and the remaining parameters for pixel height, pixel width and focal length (all in millimeters, right?) were taken from Table 3 of this paper.

<?xml version="1.0" encoding="UTF-8"?>
<document version="1.2.0">
    <chunk>
        <sensors>
            <sensor id="0" label="unknown0" type="frame">
                <resolution width="1920" height="1080"/>
                <property name="pixel_width" value="0.0031"/>
                <property name="pixel_height" value="0.0031"/>
                <property name="focal_length" value="3.291"/>
                <property name="fixed" value="false"/>
                <calibration type="frame" class="adjusted">
                    <resolution width="1920" height="1080"/>
                    <fx>1065.5</fx>
                    <fy>1076.5</fy>
                    <cx>966</cx>
                    <cy>542</cy>
                    <k1>0</k1>
                    <k2>0</k2>
                    <p1>0</p1>
                    <p2>0</p2>
                </calibration>
            </sensor>
        </sensors>
        <cameras>
            <camera id="0" label="image.png" sensor_id="0" enabled="true">
                <transform>1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1</transform>
            </camera>
        </cameras>
    </chunk>
</document>

We are giving a value of 0 for all distortion parameters because image.png is already an undistorted image.
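As a sanity check on those millimeter values, the focal length in mm should just be the focal length in pixels times the pixel pitch; a minimal sketch (the 0.0031 mm pixel size is the value we took from the paper and is an assumption about the Kinect 2 RGB sensor):

# Minimal sketch: cross-check the focal length in millimeters against the
# pixel-unit intrinsics from the kinect2_calibration yaml.
fx_px, fy_px = 1065.5, 1076.5              # focal lengths from the calibration [px]
pixel_width_mm = pixel_height_mm = 0.0031  # pixel pitch taken from the paper [mm]

focal_x_mm = fx_px * pixel_width_mm        # ~3.30 mm
focal_y_mm = fy_px * pixel_height_mm       # ~3.34 mm
print(focal_x_mm, focal_y_mm)              # both close to the 3.291 mm quoted in the paper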

Note that, as discussed above, the point cloud should be registered with the rgb image, which means it was already transformed to align with the rgb reference frame. For that reason we should expect to see a very good alignment between the point cloud and the rgb.

Once again, thanks for your help. If something was not clear please ask and we'll do our best to clarify.

Regards,

Miguel

@mcallieri
Member

First of all:
0) We know our camera model has limitations; it is not a photogrammetry camera and is not suited to deal with every possible situation. This seems to be one of those cases :(
Then:

  1. The problem of the model not showing is a clipping issue and, with high probability, is OUR fault. If I translate everything away from the origin and use an "equivalent" camera, I see the mesh correctly.
  2. The image seems undistorted, so you are right: no distortion parameters should be necessary.

I tried using the data in the paper, FocalMm="3.291" PixelSizeMm="0.0031 0.0031", and everything is ALMOST aligned. Almost, but not completely, as you said :(
I am still convinced there must be some kind of EXTRINSIC parameter missing. The image seems undistorted, but it DOES NOT seem modified to compensate for the X-offset between the RGB and IR sensors (and for possible convergence).

I'm sorry I am not able to give you a solution for this problem.
I will try to correct the clipping error, but it may take a while, as I will be out of the office at a course for the next week.

@miguelriemoliveira

Hi Marco,

Thank you for your input. My replies:

  1. I agree, this is not a top quality camera and should have some limitations.

  2. How do you do that? Just add a translation along Z and keep the same intrinsic parameters? Is that how we would get the "equivalent" camera you mention?

  3. Yes, it is rectified. We are using the image pipeline from ROS, http://wiki.ros.org/image_proc. This tool is used by many, many people, so if there were a problem there I would expect it to have been reported already. My guess is that this is not the problem (a sketch of what that rectification amounts to is given after this list).
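Conceptually, what image_proc does to produce the rectified image for a monocular camera is an undistortion with the calibrated camera matrix; a minimal OpenCV sketch (the camera matrix uses our calibration values, while the distortion coefficients and file names below are placeholders):

# Minimal sketch: undistort the raw color image with the calibrated camera
# matrix, which is conceptually what image_proc's rectification does for a
# monocular camera. The distortion coefficients are placeholders; the real
# ones come from the calibration yaml.
import cv2
import numpy as np

K = np.array([[1065.5, 0.0, 966.0],
              [0.0, 1076.5, 542.0],
              [0.0, 0.0, 1.0]])
dist = np.array([0.04, -0.11, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3 (placeholders)

img = cv2.imread("image_raw.png")              # example file name
rectified = cv2.undistort(img, K, dist)
cv2.imwrite("image.png", rectified)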

Other comments: you say almost aligned. Is it the same misalignment we initially reported? Can you post some screenshots so we can take a look?

Regarding the missing extrinsic parameters, I will post an issue on the kinect2 driver's GitHub page just to make sure. I believe the extrinsics should be rotation = identity and translation = zero, because the point cloud was already transformed to the RGB optical reference frame beforehand.

Note that, as far as I understood from the kinect2 calibration, it is the point cloud (the depth data) that is transformed to correctly align with the RGB data, not the other way around. This would explain why you do not see the X-offset in the RGB data: it was the depth that was offset. In any case, I will open the issue to try to clarify this.
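To make the direction of that transformation explicit, here is a toy sketch of what we understand the driver to be doing before publishing the cloud (the rotation and the ~5.2 cm baseline are placeholders, not the real factory extrinsics):

# Toy sketch: a depth point is mapped into the RGB optical frame using the
# depth-to-RGB extrinsics, so the published cloud is already expressed in the
# RGB frame and MeshLab's raster camera only needs an identity pose.
import numpy as np

R = np.eye(3)                         # rotation depth -> rgb (placeholder)
t = np.array([0.052, 0.0, 0.0])       # ~5.2 cm IR-to-RGB baseline (placeholder)

p_depth = np.array([0.0, 0.10, 1.0])  # a point expressed in the depth/IR frame [m]
p_rgb = R @ p_depth + t               # the same point expressed in the RGB frame

# If the driver did NOT apply this transform, the cloud would sit ~5 cm off
# along X with respect to the RGB image, which is not what we measured.
print(p_rgb)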

You are definitely helping us a lot, thank you. We will try to investigate further while we wait for your feedback concerning the clipping error (could it be the cause of the alignment problem?).

Best regards,

Miguel

@miguelriemoliveira

miguelriemoliveira commented Mar 1, 2017

Just a follow-up. Some months ago I opened this issue on the kinect2 GitHub page about a misalignment between the RGB and point cloud data:

code-iai/iai_kinect2#334

It turned out that the misalignment was only in the Z direction and was due to an "immature" depth calibration algorithm. Once we disabled that algorithm everything was fine, meaning the data was correctly aligned.

Note that the misalignment we have in MeshLab is in XY and does not appear to be the same...

@rarrais
Author

rarrais commented Mar 7, 2017

Dear @mcallieri ,

Just a follow-up on the previous comment by @miguelriemoliveira: we wanted to clarify whether the depth image is in fact calibrated wrt the RGB image for the camera system that we are working with.

For that purpose, we opened an issue on the driver github - code-iai/iai_kinect2#376 .

It seems that the depth image is transformed to the RGB reference frame at the driver level, as we initially suspected. So at least we now know that the input matrix in the VCGCamera file should be an identity matrix, as we were already using.

Unfortunately, that still does not solve our problem :(

Thank you, once again, for your assistance.

Best regards,
Rafael

@rarrais
Author

rarrais commented Mar 15, 2017

Dear @mcallieri ,

Just a quick update from our side:

  1. We received an e-mail from Dropbox saying that the files could no longer be public from the 15th of March onward. As such, we uploaded them to Google Drive. They are accessible at the following link:

https://drive.google.com/open?id=0B8IbqXN_5JgBSi1hWDdZUDljZzg

Please let us know if we can be of any assistance in solving this issue. Once again, thank you for your time.

Best regards,
Rafael
