Short depth camera's focal length? #63

Open
SongYx1995 opened this issue Sep 30, 2018 · 12 comments

Comments

@SongYx1995

Hi all,
I can get the depth image from the HoloLens, but I need to convert it to a 3D point cloud for other purposes, so I want to get focal_x, focal_y, u0, and v0.

Can anyone tell me the values of those parameters, or any method to obtain them?

@FracturedShader

That data is, unfortunately, not available. You have to use MapImagePointToCameraUnitPlane for each depth pixel to get the XY direction that the pixel would be projected along (Z is always 1, since it's projecting to the unit plane). Plug those values into a 3D vector to get (x, y, 1), and then normalize it. From there, multiply the depth value by the depth scalar and then by the matching normalized vector to create a point. Through testing, I found that discarding depth values above 0xFF0 reliably cut off points that didn't capture. This point cloud will only be in camera capture space though, not world space. Correlating the different captures and transforming them all is a harrowing process.
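As a rough illustration of those steps (a sketch, not the repository's own code), assuming the per-pixel unit-plane directions have already been gathered into arrays:

import numpy as np

# unit_x, unit_y: H x W arrays of unit-plane directions for each depth pixel
#                 (e.g. collected via MapImagePointToCameraUnitPlane)
# depth:          raw H x W depth image
# depth_scale:    hypothetical factor converting raw depth units to metres
def unproject_depth(unit_x, unit_y, depth, depth_scale):
    # discard pixels that did not capture (raw values above 0xFF0, as suggested above)
    valid = depth <= 0xFF0
    # build (x, y, 1) vectors on the unit plane and normalise them
    dirs = np.stack([unit_x, unit_y, np.ones_like(unit_x)], axis=-1).astype(np.float32)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # scale each normalised direction by the measured range along the ray
    points = dirs * (depth.astype(np.float32) * depth_scale)[..., None]
    # the result is in camera capture space, not world space
    return points[valid]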

@Huangying-Zhan

Huangying-Zhan commented Oct 12, 2018

Hi @alexsyx, @FracturedShader has provided a good guideline for getting the 3D point cloud. Basically, you need to use the unprojection mapping. To make it more concrete, I will walk through a long_throw_depth example of getting the 3D point cloud.
After saving the recording, you will get the following data.

  • long_throw_depth: the folder where the depth .pgm files are stored
  • long_throw_depth_camera_space_projection.bin: this binary file defines the unprojection mapping.

The binary file basically stores the (u, v) coordinates on the unit plane for each pixel.
If you want to get a 3D point, it is [X, Y, Z] = Z * [u, v, 1].
Now I am going to explain how to read this (u, v).

Suppose we have an image whose top-left pixel has coordinate [y, x] = [0, 0]. The binary file stores the unprojection mapping (datatype: float32) for each pixel in the following order:
u_{0,0}, v_{0,0}, u_{1,0}, v_{1,0}, …, u_{H-1,0}, v_{H-1,0},
u_{0,1}, v_{0,1}, …, u_{H-1,1}, v_{H-1,1},...
u_{0,W-1}, v_{0, W-1}, …, u_{H-1, W-1}, v_{H-1,W-1}
You can refer to the relevant recorder code here:
https://github.com/Microsoft/HoloLensForCV/blob/87c5eeb436ae909894a8049cb2584e60dcad13b0/Shared/HoloLensForCV/SensorFrameRecorder.cpp#L243

Knowing how the unprojection mapping is saved, we can now read the mapping (u, v).
Here is a Python sample you can use to read the unprojection mapping:

import numpy as np

def get_cam_space_projection(projection_bin, depth_h, depth_w):
    # read the binary file as a flat float32 array:
    # u_{0,0}, v_{0,0}, u_{1,0}, v_{1,0}, ...
    projection = np.fromfile(projection_bin, dtype=np.float32)
    x_list = projection[0::2]
    y_list = projection[1::2]

    # rearrange as (depth_h, depth_w) arrays
    u = np.asarray(x_list).reshape(depth_w, depth_h).T
    v = np.asarray(y_list).reshape(depth_w, depth_h).T

    return [u, v]

Here is how u and v look:
[image: u]
[image: v]
Note that the yellow side is positive while the blue side is negative. The white area is the invalid region.

Now, we can try to get 3D points.
First, please note that the coordinate system used in research mode is shown here.
[image: research-mode camera coordinate system]

Suppose we already have a depth map Z (with -ve values); then we can get the 3D points as [Z*u, Z*v, Z] in the coordinate system described above.
That is basically how to get the 3D points.

(Optional) Depth map issues
The long_throw_depth values you get from the recorder are actually not Z values but distances (D). A simple conversion is required:
Z = D / sqrt(u^2 + v^2 + 1)
To get correct 3D points in the coordinate system above, remember to negate Z (add the -ve sign).
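Putting the steps above together, here is a minimal sketch (not part of the original answer) that turns one raw distance image into 3D points. It assumes u and v come from get_cam_space_projection() and that D is the raw distance image loaded from a long_throw_depth .pgm (e.g. with cv2.imread(path, -1)); the /1000 scale and the 64000 invalid-value cut-off are assumptions borrowed from the MATLAB example further down this thread.

import numpy as np

def distance_to_points(D, u, v, invalid_above=64000):
    # zero out pixels flagged as invalid (threshold is an assumption)
    D = D.astype(np.float32)
    D[D > invalid_above] = 0
    # convert distance along the ray into depth along the optical axis:
    # Z = D / sqrt(u^2 + v^2 + 1)
    Z = D / np.sqrt(u ** 2 + v ** 2 + 1.0)
    # negate Z for the research-mode coordinate system and convert mm -> m
    Z = -Z / 1000.0
    # unproject: [X, Y, Z] = Z * [u, v, 1]
    points = np.dstack([u * Z, v * Z, Z]).reshape(-1, 3)
    # keep only finite points with a valid (non-zero) measurement
    mask = np.isfinite(points).all(axis=1) & (points[:, 2] != 0)
    return points[mask]

For example (the 450 x 448 long-throw resolution is also an assumption taken from the MATLAB example below):

u, v = get_cam_space_projection('long_throw_depth_camera_space_projection.bin', 450, 448)
points = distance_to_points(D, u, v)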

Hope this helps!

@maurosyl

maurosyl commented Oct 29, 2018

Update: I fixed the problem. It was MATLAB cruelly rounding the values of my u, v matrices. I am still curious about the intrinsic parameters: if I wanted to retrieve them, could I just use the 2D-3D correspondences I found?

Hi everyone, I am trying to recover the intrinsic parameters of the depth sensor and have been following the instructions in @Huangying-Zhan's answer. I used the suggested Python code to recover the u and v matrices and then wrote a little MATLAB function to get the point cloud from the depth frame.
The problem is that the point cloud I get doesn't resemble the original scene in the frame at all:

[image: the depth frame]

[image: the corresponding point cloud (blue is near, yellow is far)]

Here is the MATLAB function; did I do something wrong?

function [points_list] = uv2pointscloud(u_mat, v_mat, Dframe)

% Zero out invalid readings (raw values above 64000)
for i = 1 : 450
    for j = 1 : 448
        if Dframe(i,j) > 64000
            effDframe(i,j) = 0;
        else
            effDframe(i,j) = Dframe(i,j);
        end
    end
end

% Convert distance along the ray (D) to depth: Z = D / sqrt(u^2 + v^2 + 1)
for i = 1 : 450
    for j = 1 : 448
        Unscaled_effZframe(i,j) = effDframe(i,j) / sqrt(u_mat(i,j)^2 + v_mat(i,j)^2 + 1);
    end
end

% Millimetres to metres
effZframe = Unscaled_effZframe / 1000;

% Unproject every valid pixel to a 3D point [u*Z, v*Z, Z]
k = 1;
tic
for i = 1 : 450
    for j = 1 : 448
        if effZframe(i,j) ~= 0
            points_list(k, 1) = u_mat(i,j) * effZframe(i,j);
            points_list(k, 2) = v_mat(i,j) * effZframe(i,j);
            points_list(k, 3) = effZframe(i,j);
            k = k + 1;
        end
    end
end
toc

end

Also, I understand from your answers that the only way to get the intrinsic parameters of the camera is to use u and v to find the 3D points and then compute the intrinsics matrix from the 3D-2D correspondences. Is that right?

@zwz14

zwz14 commented Nov 14, 2018

If you are using short-throw depth data, you should treat values in the range 200 to 1000 as valid. After you hide the invalid ones, you should be able to get a correct 3D point cloud of the near scene, such as your hand.
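As a small sketch of that masking (assuming D is the raw short-throw distance image loaded from the .pgm):

import numpy as np

def mask_short_throw(D):
    # keep only readings in the suggested 200-1000 range; zero out everything else
    valid = (D >= 200) & (D <= 1000)
    return np.where(valid, D, 0)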

@streamwill

streamwill commented Dec 1, 2018

Hello, I built the [Recorder] project of HoloLensForCV to get the depth sensor data, but the resulting .CSV file and .TAR package are empty:
[screenshot 1]
[screenshot 2]
Does anyone know what is going on? Thanks very much!
By the way, the [SensorStreamViewer] project is working.
[screenshot]

@ahojnnes
Contributor

@Liebewill please upgrade to the latest Windows version on the HoloLens and check out the latest commits of this repository. There was an incompatibility between the latest update and the way this repository used the API. It should be fixed now.

@alemarro

alemarro commented Jan 22, 2019

Hi all,
my question is: does read_sensor_poses, implemented in recorder_console.py (https://github.com/Microsoft/HoloLensForCV/blob/master/Samples/py/recorder_console.py), actually give the absolute camera pose, i.e. the transformation from the world to the camera coordinate system?

I already have the point cloud in the frame coordinate system. I got this using the (.bin) projection * (-depth) / 1000.
Now I am trying to multiply the 3D points by the inverse of the poses from that code (read_sensor_poses), but the point clouds do not line up.
What am I doing wrong?
Thanks

@vitcozzolino

vitcozzolino commented Feb 19, 2019

@mauronano How did you solve the problem with the messed-up point cloud? I'm having the same issue, but I don't think it's a rounding problem, at least in my case. Do you have any hints? I can post the code if necessary.

Edit: I think I found out what's happening. In my original picture there are a lot of reflective surfaces (like two monitors and a whiteboard), which may have messed up the depth map. I'm just guessing, as I'm a beginner in this field.

@FracturedShader

@vitcozzolino, you are correct. In addition to reflective surfaces, anything with a black material/coating does not capture well either (as it absorbs infrared).

pablospe added a commit to pablospe/HoloLensForCV that referenced this issue May 3, 2019
@cyrineee

@mauronano how did you get the depth from long_throw_depth.csv? There are too many fields in the csv file (which one?)

@maurosyl

@cyrineee you don't get the depth from the .csv file. If you run the recording app, you get the depth data in the form of grayscale images arranged in folders like in the picture below; every pixel of these images is a measure of the distance to some "obstacle" in the depth camera's field of view. Each depth map (i.e. each grayscale image) comes with a timestamp, which you can then look up in the csv file to get additional information about that particular frame (such as the orientation of the camera when the frame was shot and some other camera parameters).
[image: recorder output folder structure]
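For instance, a rough sketch of that timestamp lookup (the column names "Timestamp" and "ImageFileName" are assumptions about the recorder's csv header, not confirmed in this thread):

import csv

def find_frame_row(csv_path, pgm_name):
    # the .pgm file name encodes the frame timestamp; the per-frame metadata
    # (camera orientation and other parameters) lives in the matching csv row
    timestamp = pgm_name.replace('.pgm', '')
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            if row.get('Timestamp') == timestamp or row.get('ImageFileName', '').endswith(pgm_name):
                return row
    return None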

@cyrineee

@mauronano Thanks a lot for your explanations!
So I should download the files from the recorder (via the Windows Device Portal file explorer),
and then use the .pgm or .ppm files to get the distance?
Should I use this script for the depth and the distance?:
1ff2d0d#diff-104676d8e0f74131a6b3e4a7352c4bcb

Thanks in advance !
