
Understanding of how the Kinect2 measures the depth #271

Closed

AlexxxBeck opened this issue May 2, 2016 · 3 comments

@AlexxxBeck

Hello,
I have a question regarding how the Kinect 2 measures depth. My understanding is that the Kinect 2 measures depth as shown in this image:
[image: ic568991]
To check this, I ran some tests in which I read the depth of certain points in the depth image, and also checked how precisely I can measure the thickness of objects. Here are the two pictures.
First, the picture of the points where I measured the depth:
[image: points]
and here is the second picture with the calculated depths, where depth 1 is the depth of point 1, and so on:
[image: depth of points]

Since the ground is a flat surface, all the points except point 13 (which lies on the object) should, according to my understanding, have roughly the same depth. What I found instead is that the larger the y-coordinate, the larger the depth value. For example, point 1 and point 7, which are 500 pixels apart along the x-axis, have exactly the same depth, but points 5 and 11 show higher depth values: point 5 has a y-value 400 pixels higher than point 1 and a depth value more than 25 mm greater.
In addition, when I remove the object, the depth at point 13 becomes 942 mm, so the measured thickness comes out to 70 mm, but the object is actually only 52 mm thick. I repeated these tests with several objects, and the offset was always around 15-20 mm.

So my question is how the Kinect2 actually calculates the depth image (i.e. whether my understanding is correct or not) and what the y-value has to do with the change in the depth value. I would also like to know whether an offset of around 15 mm is within the usual range.
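One way to sanity-check the two possible interpretations numerically: if the reported value were the radial distance along the viewing ray, points far from the image centre on a flat floor would read much deeper than central ones; if it is the perpendicular Z distance, they would all read the same. A minimal sketch, assuming illustrative Kinect2 depth-camera intrinsics for the 512x424 image (the fx, fy, cx, cy values here are assumptions; take the real ones from your calibration):

```python
import numpy as np

# Illustrative Kinect2 depth-camera intrinsics (512x424 image).
# These are assumptions -- take the real values from your calibration.
fx, fy = 365.0, 365.0   # focal lengths in pixels
cx, cy = 256.0, 212.0   # principal point

def radial_to_z(u, v, radial_mm):
    """Convert an along-the-ray (radial) distance to perpendicular Z."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    return radial_mm / np.sqrt(1.0 + x * x + y * y)

# On a plane parallel to the sensor at Z = 920 mm, the radial distance
# grows with the pixel's offset from the principal point:
for u, v in [(256, 212), (256, 412), (456, 212)]:
    x = (u - cx) / fx
    y = (v - cy) / fy
    radial = 920.0 * np.sqrt(1.0 + x * x + y * y)
    print(f"pixel ({u},{v}): radial = {radial:.1f} mm, Z = {radial_to_z(u, v, radial):.1f} mm")
```

With these assumed intrinsics, a point 200 pixels off-axis on a 920 mm plane would read roughly 130 mm deeper under the radial model, far more than the ~25 mm drift described above, which suggests the drift is a per-sensor calibration artifact rather than radial-distance geometry.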

@kohrt
Collaborator

kohrt commented May 4, 2016

Hi,
in theory the depth measurement should behave like that. I have also noticed offsets that differ from sensor to sensor, and on one sensor I even saw depth changes related to the x coordinate. I think a proper depth calibration is needed for applications like yours. If someone could provide a stable and easy-to-use depth calibration, I would really appreciate it.
The Kinect2 is a ToF sensor, and it has some issues. For example, the depth measurement depends heavily on the surface: (semi-)translucent (glass, plastic, etc.), highly reflective (mirrors, steel, etc.), and non-reflective (dark black) surfaces are problematic. There are also problems with corners, nearly orthogonal planes, and reflections.
In general, I would recommend averaging the measurements over a small area instead of a single pixel, and possibly over multiple frames as well.
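A minimal sketch of that averaging, assuming the depth frames arrive as 2D uint16 numpy arrays in millimetres (as you would get, e.g., via cv_bridge; the function name and window size are illustrative, not part of the package):

```python
import numpy as np

def averaged_depth(frames, u, v, half_window=2):
    """Average the depth (mm) in a small window around pixel (u, v)
    over several frames, ignoring invalid (zero) measurements."""
    samples = []
    for depth in frames:  # each frame: 2D uint16 array, depth in mm
        patch = depth[v - half_window:v + half_window + 1,
                      u - half_window:u + half_window + 1]
        valid = patch[patch > 0]  # 0 means "no measurement" on the Kinect2
        samples.append(valid.astype(np.float64))
    samples = np.concatenate(samples)
    return samples.mean() if samples.size else float("nan")
```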

@AlexxxBeck
Author

Hello Thiemo, thank you very much for the detailed answer.

@adarshmodh

adarshmodh commented May 19, 2016

Hello @wiedemeyer, I am very new to this and have successfully interfaced the Kinect2 with ROS using your repository, but I have a similar problem. I have already visualized images and 3D point clouds using various packages such as rqt, rviz, and image_view, so everything seems to be working fine. Now I want to find the 3D coordinates of specific points in a point cloud (mostly of some particular object). How can I find the real-world x-y coordinates of the points shown in the point clouds (taking the Kinect as the reference frame and origin), and then the 11-bit depth values corresponding to the same points, to obtain the z coordinate? Thanks in advance.
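For reference, the underlying math is the standard pinhole back-projection; a minimal sketch (not the package's own API, and the intrinsics in the example are made up; read the real ones from the depth camera's camera_info topic):

```python
import numpy as np

def pixel_to_point(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a depth pixel into a 3D point in the camera frame
    (metres, Kinect at the origin, z pointing away from the sensor)."""
    z = depth_mm / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with made-up intrinsics and a 942 mm reading at pixel (300, 250):
print(pixel_to_point(300, 250, 942.0, fx=365.0, fy=365.0, cx=256.0, cy=212.0))
```

If the point cloud published for you is organized, the point at row v, column u already carries its (x, y, z) in the camera frame; the sketch above is only the math behind it.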
