How can I use registration.apply offline? #57
Comments
You probably solved this already, but I'm adding the answer for future users. And then just create the frame. But there is still a problem. @r9y9, is there a way to register the depth and RGB offline? (I want to generate an XYZRGB point cloud.)
Hi, I'm trying to run the code @sitzikbs proposed, but it does not seem to work for me. I need to convert an RGB image (BGR in OpenCV) to a `Frame` object, but the execution gets stuck in `np.frombuffer`. If I change the dtype to float32, it decodes an array with NaN in all positions. Do you know where the problem could be?
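For what it's worth, `np.frombuffer` itself should not hang; the NaNs suggest the byte buffer is being reinterpreted with the wrong dtype. A minimal sketch in plain numpy (no libfreenect2 involved; the tiny `bgr` array stands in for a real OpenCV image):

```python
import numpy as np

# A tiny stand-in for an OpenCV BGR image: 2x2 pixels, 3 channels, uint8.
bgr = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# Round-tripping through raw bytes: the dtype passed to np.frombuffer must
# match the dtype the bytes were written with (uint8 here). Otherwise the
# bytes are reinterpreted bit-for-bit as another type and you get garbage.
raw = bgr.tobytes()
decoded = np.frombuffer(raw, dtype=np.uint8).reshape(bgr.shape)
assert np.array_equal(decoded, bgr)

# Reinterpreting the same uint8 bytes as float32 does NOT convert values:
# every 4 raw bytes become one (usually meaningless) float. If you need
# float pixels, convert explicitly instead:
as_float = bgr.astype(np.float32)
```

If libfreenect2 expects a float32 depth buffer, the conversion has to happen with `astype` before the bytes are handed over, not by changing the dtype at decode time.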
I have the same issue with the offline registration of the depth map and RGB. Did you find a solution?
@AAcquier I eventually reimplemented everything in Python. I didn't document it properly at the time, but I hope this helps:
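The reimplementation itself was not preserved in this thread. As a rough stand-in, here is a minimal pure-numpy sketch of offline depth-to-color registration using a pinhole camera model. Everything here is hypothetical: the function name, and especially the intrinsics and extrinsics, which must be replaced with your own device's calibration (pylibfreenect2 exposes `device.getIrCameraParams()` and `device.getColorCameraParams()` when a sensor is attached).

```python
import numpy as np

def register_depth_to_color(depth, K_depth, K_color, R, t):
    """Map every depth pixel to (u, v) coordinates in the color image.

    depth   : (H, W) array of depth in millimetres
    K_depth : 3x3 depth-camera intrinsic matrix
    K_color : 3x3 color-camera intrinsic matrix
    R, t    : rotation (3x3) and translation (3,) from depth to color frame
    Returns an (H, W, 2) array of color-image pixel coordinates and the
    (H, W, 3) array of 3-D points in the color camera frame (mm).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64)
    # Deproject each depth pixel to a 3-D point in the depth camera frame.
    x = (u - K_depth[0, 2]) * z / K_depth[0, 0]
    y = (v - K_depth[1, 2]) * z / K_depth[1, 1]
    pts = np.stack([x, y, z], axis=-1)
    # Transform the points into the color camera frame.
    pts_c = pts @ R.T + t
    # Project into the color image plane (guard against z == 0).
    zc = np.where(pts_c[..., 2] == 0, np.nan, pts_c[..., 2])
    uc = K_color[0, 0] * pts_c[..., 0] / zc + K_color[0, 2]
    vc = K_color[1, 1] * pts_c[..., 1] / zc + K_color[1, 2]
    return np.stack([uc, vc], axis=-1), pts_c
```

Sampling the BGR image at the returned (u, v) coordinates (after rounding and bounds-checking) and concatenating with `pts_c` gives an XYZRGB point cloud. Note this sketch ignores lens distortion, which libfreenect2's own `Registration` does handle.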
Hi @sitzikbs, thanks for your code, it is really helpful, but I still have a couple of questions about it:
Kind regards, Alex
Thank you very much for that. Is there a way to find the depth of an RGB pixel and/or the physical distance (i.e. in mm) between two RGB pixels?
Now you have a mapping from every RGB pixel to a depth pixel, so you can find the corresponding point and just measure the Euclidean distance between the points.
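In other words, once registration yields a 3-D point (in mm) for each RGB pixel, the physical distance between two RGB pixels is the Euclidean distance between their 3-D points. A small sketch (the helper name and the example coordinates are made up; the points would come from your own registration step):

```python
import numpy as np

def pixel_distance_mm(p1, p2):
    """Euclidean distance (mm) between two 3-D points given as (x, y, z)."""
    p1 = np.asarray(p1, dtype=np.float64)
    p2 = np.asarray(p2, dtype=np.float64)
    return float(np.linalg.norm(p1 - p2))

# Example with made-up 3-D points 100 mm apart along x, both 800 mm deep:
d = pixel_distance_mm((0.0, 0.0, 800.0), (100.0, 0.0, 800.0))
```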
I want to get the depth frame warped into the color frame, but I only have depth and color images. I noticed there's an issue about creating a `Frame` from a `numpy.ndarray`. I tried it, but there's no parameter called `byte_data` anymore, and I didn't find anything helpful in the documentation about creating a `Frame` from a `numpy.ndarray`. So how can I get `bigdepth` from previously collected color and depth images without a Kinect v2?