Understanding of how the Kinect2 measures the depth #271
Comments
Hello Thiemo, thank you very much for the detailed answer.
Hello @wiedemeyer, I am very new to this and have successfully interfaced the Kinect2 with ROS using your repository, but I have a similar problem. I have already visualized images and 3D point clouds using various packages like rqt, rviz and image_view, so everything seems to be working fine. But now I want to find the 3D coordinates of specific points in a point cloud (mostly of some particular object). Can you tell me how to find the real-world x-y coordinates of the points shown in the point clouds (taking the Kinect as the reference frame and origin), and then also the 11-bit depth values corresponding to the same points, to find the z coordinate? Thanks in advance.
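One common way to get camera-frame coordinates from a depth pixel is to back-project it with the pinhole camera model. The sketch below is not the repository's own code; the intrinsics (fx, fy, cx, cy) are placeholder values for illustration, and the real ones come from the calibration that iai_kinect2 publishes on its camera_info topics.

```python
# Minimal sketch: back-project a depth pixel (u, v) with depth value
# depth_mm into a camera-frame 3D point (x, y, z), pinhole model.
# NOTE: these intrinsics are assumed placeholders for the 512x424 IR
# image, NOT calibrated values -- use your device's camera_info instead.
FX = FY = 365.0        # assumed focal length in pixels
CX, CY = 256.0, 212.0  # assumed principal point (image center)

def depth_pixel_to_point(u, v, depth_mm, fx=FX, fy=FY, cx=CX, cy=CY):
    """Treats depth_mm as the perpendicular z coordinate and returns
    the camera-frame (x, y, z) of the pixel, all in millimeters."""
    z = depth_mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# A pixel at the principal point lands on the optical axis:
print(depth_pixel_to_point(256, 212, 1000.0))  # (0.0, 0.0, 1000.0)
```

With calibrated intrinsics substituted in, this gives the same x-y-z values you see in the published point cloud, since the cloud is generated from the depth image by exactly this kind of back-projection.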
Hello,
I have a question regarding how the Kinect 2 measures the depth. According to my understanding, the Kinect 2 measures the depth as shown in this image:
So I ran some tests where I read the depth of certain points in the depth image, and also checked how precisely I can measure the thickness of objects. Here are the two pictures.
First, the picture of the points where I measured the depth:
and here is the second picture with the calculated depths, where depth 1 shows the depth of point 1 and so on:
Since the ground is a plane surface, all the points except point 13 (on the object) should have roughly the same depth according to my understanding. But what I noticed is that the higher the y-value of a point, the higher its depth value. For example, points 1 and 7, which are 500 pixels apart along the x-axis, have exactly the same depth, but points 5 and 11 have a higher depth value. Point 5, for instance, has a y-value 400 pixels higher than point 1 and shows a depth value more than 25 mm higher.
In addition, when I remove the object, the depth of point 13 is 942 mm. So the object's thickness should be 70 mm, but it is actually just 52 mm thick. I repeated these tests with several objects, and the offset was always around 15-20 mm.
So my question is how the Kinect2 actually calculates the depth image (i.e., whether I understood it correctly or not), and what the y-value has to do with the change in the depth value. I would also like to know whether an offset of around 15 mm is in the usual range.
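One thing worth checking numerically is whether the reported depth value is the perpendicular z (distance to the camera plane, as in the drawing above) or the radial distance along the viewing ray, since those two conventions diverge away from the image center. The sketch below converts between them using assumed nominal intrinsics (fy of roughly 365 px for the 512x424 IR image; calibrated values differ per device). For a flat floor at around 920 mm, a radial reading 200 px off-center would be over 100 mm larger than at the center, far more than the 25 mm observed, so a radial-vs-perpendicular mix-up alone would not explain an effect this small.

```python
import math

# Sketch of the two depth conventions, with ASSUMED nominal intrinsics
# (not calibrated values) for the 512x424 Kinect2 IR image.
FX = FY = 365.0
CX, CY = 256.0, 212.0

def radial_from_z(z_mm, u, v, fx=FX, fy=FY, cx=CX, cy=CY):
    """Radial distance along the viewing ray through pixel (u, v) for a
    point whose perpendicular depth is z_mm."""
    nx = (u - cx) / fx
    ny = (v - cy) / fy
    return z_mm * math.sqrt(1.0 + nx * nx + ny * ny)

def z_from_radial(r_mm, u, v, fx=FX, fy=FY, cx=CX, cy=CY):
    """Inverse: perpendicular depth for a radial reading r_mm at (u, v)."""
    nx = (u - cx) / fx
    ny = (v - cy) / fy
    return r_mm / math.sqrt(1.0 + nx * nx + ny * ny)

# Flat floor at z = 920 mm, seen at the image center vs. 200 px higher:
center = radial_from_z(920.0, 256, 212)  # equals 920.0 at the center
off = radial_from_z(920.0, 256, 12)      # noticeably larger off-center
print(off - center)                      # > 100 mm difference
```

Since the observed slope with y is much smaller than this, it more likely points to a calibration or sensor-tilt effect than to a confusion between the two depth conventions, though that is my reading rather than something stated in this thread.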