
Can we use mapper functions on already acquired images #2

Open
hardik-uppal opened this issue May 11, 2020 · 4 comments
@hardik-uppal

I have already acquired an RGB-D dataset from a Kinect v2. Can this library be used on already acquired images?

@KonstantinosAng
Owner

Most of the library's functions require the actual depth values from the Kinect, meaning the distance of each point from the camera. However, a saved Depth Frame or Color Frame does not contain that information; it only holds pixel values. I don't think you can map color to depth without using a Kinect device, because you need to acquire the actual depth values (in millimeters) from the Kinect.
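To illustrate the point with a quick sketch (not part of the library, using made-up values): if a depth frame was exported as an 8-bit image for display, the original distances cannot be recovered from it, whereas saving the raw uint16 array preserves them:

```python
import numpy as np

# Hypothetical raw Kinect v2 depth frame: uint16 distances in millimeters
raw_depth = np.array([[500, 1200], [4000, 7905]], dtype=np.uint16)

# Scaling to 8-bit for display (what most saved "depth images" contain)
# destroys the raw values through quantisation:
vis = (raw_depth / 8000 * 255).astype(np.uint8)
recovered = (vis.astype(np.float32) / 255 * 8000).astype(np.uint16)
print(np.array_equal(recovered, raw_depth))  # False: the distances are lost

# Saving the raw uint16 array instead keeps the real distances:
np.save('depth_raw.npy', raw_depth)
print(np.array_equal(np.load('depth_raw.npy'), raw_depth))  # True
```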

@KonstantinosAng
Owner

KonstantinosAng commented May 11, 2020

This is the code to map Color to Depth. However, without the Kinect running I don't think it will produce any result, most likely just a black image.

import numpy as np
import ctypes
import cv2
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

""" import your images here """
depth_img = cv2.imread('path_to_your_depth_frame')        # source depth frame
align_depth_img = cv2.imread('path_to_your_color_frame')  # color-sized canvas to write the aligned depth into

# One _DepthSpacePoint per color pixel (1920x1080)
color2depth_points_type = _DepthSpacePoint * (1920 * 1080)
color2depth_points = ctypes.cast(color2depth_points_type(), ctypes.POINTER(_DepthSpacePoint))
# Map every color pixel to its (x, y) coordinate in depth space
kinect._mapper.MapColorFrameToDepthSpace(ctypes.c_uint(512 * 424), kinect._depth_frame_data, ctypes.c_uint(1920 * 1080), color2depth_points)
depthXYs = np.copy(np.ctypeslib.as_array(color2depth_points, shape=(kinect.color_frame_desc.Height * kinect.color_frame_desc.Width,)))
depthXYs = depthXYs.view(np.float32).reshape(depthXYs.shape + (-1,))
depthXYs += 0.5  # round to the nearest pixel
depthXYs = depthXYs.reshape(kinect.color_frame_desc.Height, kinect.color_frame_desc.Width, 2).astype(int)
depthXs = np.clip(depthXYs[:, :, 0], 0, kinect.depth_frame_desc.Width - 1)
depthYs = np.clip(depthXYs[:, :, 1], 0, kinect.depth_frame_desc.Height - 1)
align_depth_img[:, :] = depth_img[depthYs, depthXs, :1]
cv2.imshow('Aligned Image', cv2.resize(cv2.flip(align_depth_img, 1), (int(1920 / 2.0), int(1080 / 2.0))))
cv2.waitKey(0)

Also, if you want to map Depth Frames to Color Space, you should use the color_2_depth function, or the code below:

import numpy as np
import ctypes
import cv2
from pykinect2 import PyKinectV2
from pykinect2.PyKinectV2 import *
from pykinect2 import PyKinectRuntime

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Depth | PyKinectV2.FrameSourceTypes_Color)

""" import your images here """
color_img = cv2.imread('path_to_your_color_frame')        # source color frame
align_color_img = cv2.imread('path_to_your_depth_frame')  # depth-sized canvas to write the aligned color into

# One _ColorSpacePoint per depth pixel (512x424)
depth2color_points_type = _ColorSpacePoint * (512 * 424)
depth2color_points = ctypes.cast(depth2color_points_type(), ctypes.POINTER(_ColorSpacePoint))
# Map every depth pixel to its (x, y) coordinate in color space
kinect._mapper.MapDepthFrameToColorSpace(ctypes.c_uint(512 * 424), kinect._depth_frame_data, kinect._depth_frame_data_capacity, depth2color_points)
colorXYs = np.copy(np.ctypeslib.as_array(depth2color_points, shape=(kinect.depth_frame_desc.Height * kinect.depth_frame_desc.Width,)))
colorXYs = colorXYs.view(np.float32).reshape(colorXYs.shape + (-1,))
colorXYs += 0.5  # round to the nearest pixel
colorXYs = colorXYs.reshape(kinect.depth_frame_desc.Height, kinect.depth_frame_desc.Width, 2).astype(int)
colorXs = np.clip(colorXYs[:, :, 0], 0, kinect.color_frame_desc.Width - 1)
colorYs = np.clip(colorXYs[:, :, 1], 0, kinect.color_frame_desc.Height - 1)
align_color_img[:, :] = color_img[colorYs, colorXs, :]
cv2.imshow('img', cv2.flip(align_color_img, 1))
cv2.waitKey(0)

But again, I don't think it would produce anything useful.

@hardik-uppal
Author

Hey, thanks for the quick reply. The Kinect depth image I have contains raw depth values (with values up to 7905). Also, in the line:
"kinect._mapper.MapDepthFrameToColorSpace(ctypes.c_uint(512 * 424), kinect._depth_frame_data, kinect._depth_frame_data_capacity, depth2color_points)"

shouldn't kinect._depth_frame_data be replaced with the original depth frame that we read?

@KonstantinosAng
Owner

In reality, the array that the Kinect returns when you call kinect.get_last_depth_frame() is a (424*512) array that is made from kinect._depth_frame_data using the following command:

depth_frame = np.copy(np.ctypeslib.as_array(kinect._depth_frame_data, shape=(kinect._depth_frame_data_capacity.value,)))

So there is a direct correspondence between the depth_frame_data and the depth_frame. If you have stored the depth frame that kinect.get_last_depth_frame() returns, then you have to transform it back to the depth_frame_data type.
The depth_frame_data is of type LP_c_ushort and is created using the following commands:

depth_frame_data_capacity = ctypes.c_uint(kinect.depth_frame_desc.Width * kinect.depth_frame_desc.Height)
depth_frame_data_type = ctypes.c_ushort * depth_frame_data_capacity.value
depth_frame_data = ctypes.cast(depth_frame_data_type(), ctypes.POINTER(ctypes.c_ushort))
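Following from this, if you saved the frame that kinect.get_last_depth_frame() returned (a flat uint16 array of length 424*512), you could view it as the same LP_c_ushort pointer type and pass it to the mapper in place of kinect._depth_frame_data. A minimal sketch; the random array stands in for your stored frame, and how you load it back is an assumption:

```python
import ctypes
import numpy as np

# Stand-in for a depth frame previously stored from kinect.get_last_depth_frame()
depth_frame = np.random.randint(0, 8000, size=424 * 512).astype(np.uint16)

# View the numpy buffer as LP_c_ushort, the same type as kinect._depth_frame_data
depth_frame_data = depth_frame.ctypes.data_as(ctypes.POINTER(ctypes.c_ushort))

# Round-trip check: the pointer reads back the same values
restored = np.ctypeslib.as_array(depth_frame_data, shape=(424 * 512,))
print(np.array_equal(restored, depth_frame))  # True
```

Note that the pointer does not own the memory, so depth_frame must stay alive as long as the pointer is used. Whether the mapper gives correct results with a stored frame still depends on the Kinect being connected, as noted above.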
