
DepthImageToPointCloud does not handle frame offsets #12125

Open

RussTedrake opened this issue Oct 1, 2019 · 1 comment
Assignees
Labels
component: geometry perception How geometry appears in color, depth, and label images (via the RenderEngine API) priority: medium type: feature request type: MIT 6.4210 Related to http://manipulation.csail.mit.edu

Comments

@RussTedrake
Contributor

@pangtao22 noticed that the RGB and point clouds are misaligned for the physical robot, and realized that our DepthImageToPointCloud class does not handle the case when the RGB camera frame is offset from the depth camera frame.

[Image attachment: "Image from iOS"]

I'm not actually sure what the "right" algorithm is here (the projection seems nontrivial?). To leave myself a breadcrumb, the Open3D version is here: http://www.open3d.org/docs/release/python_api/open3d.geometry.RGBDImage.html#open3d.geometry.RGBDImage.create_from_color_and_depth but it opens with "RGBDImage is for a pair of registered color and depth images, viewed from the same view, of the same resolution. If you have other format, convert it first."

@RussTedrake RussTedrake self-assigned this Oct 1, 2019
@jwnimmer-tri jwnimmer-tri changed the title DepthImageToPointCloud does not handle DepthImageToPointCloud does not handle frame offsets Apr 30, 2021
@EricCousineau-TRI
Contributor

EricCousineau-TRI commented Jun 14, 2021

From reviewing #14985, I believe the defect is as you mentioned; the depth image is not registered to the same camera extrinsics or intrinsics as the color image, due to this code snippet:

// To pose the two sensors relative to the camera body, we'll assume X_BC = I,
// and select a representative value for X_CD drawn from calibration to define
// X_BD.
geometry::render::ColorRenderCamera color_camera{
    {renderer_name,
     {kWidth, kHeight, 616.285, 615.778, 405.418, 232.864} /* intrinsics */,
     {0.01, 3.0} /* clipping_range */,
     {} /* X_BC */},
    false};
const RigidTransformd X_BD(
    RotationMatrix<double>(RollPitchYaw<double>(
        -0.19 * M_PI / 180, -0.016 * M_PI / 180, -0.03 * M_PI / 180)),
    Vector3d(0.015, -0.00019, -0.0001));
geometry::render::DepthRenderCamera depth_camera{
    {renderer_name,
     {kWidth, kHeight, 645.138, 645.138, 420.789, 239.13} /* intrinsics */,
     {0.01, 3.0} /* clipping_range */,
     X_BD},
    {0.1, 2.0} /* depth_range */};
return {color_camera, depth_camera};

Possible solutions:

  • Register color to depth, e.g. via something like cv2.registerDepth. The math is not too hard to write in C++; I don't know how well it performs.
  • For the purpose of simulation, do not bake in different extrinsics and intrinsics; instead, make them the same.
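The registration math in the first bullet is essentially a per-pixel deproject/transform/reproject loop. A minimal sketch, with illustrative names (not Drake or OpenCV API), assuming simple pinhole intrinsics for both cameras and a rigid transform X_CD (color from depth):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical pinhole intrinsics; names are illustrative only.
struct Intrinsics { double fx, fy, cx, cy; };

// Deproject a depth pixel (u_d, v_d) with depth z to a 3D point in the depth
// camera frame, transform it into the color camera frame via X_CD = [R | p],
// then project it with the color intrinsics to find the matching color pixel.
void RegisterDepthPixel(const Intrinsics& depth_K, const Intrinsics& color_K,
                        const double R_CD[3][3], const double p_CD[3],
                        double u_d, double v_d, double z,
                        double* u_c, double* v_c) {
  // Deproject using the depth camera's intrinsics.
  const double X = (u_d - depth_K.cx) * z / depth_K.fx;
  const double Y = (v_d - depth_K.cy) * z / depth_K.fy;
  const double Z = z;
  // Transform into the color camera frame: p_C = R_CD * p_D + p_CD.
  const double Xc = R_CD[0][0]*X + R_CD[0][1]*Y + R_CD[0][2]*Z + p_CD[0];
  const double Yc = R_CD[1][0]*X + R_CD[1][1]*Y + R_CD[1][2]*Z + p_CD[1];
  const double Zc = R_CD[2][0]*X + R_CD[2][1]*Y + R_CD[2][2]*Z + p_CD[2];
  // Project with the color camera's intrinsics.
  *u_c = color_K.fx * Xc / Zc + color_K.cx;
  *v_c = color_K.fy * Yc / Zc + color_K.cy;
}
```

A full implementation would run this over the whole depth image, handle invalid depths, and resolve occlusions (multiple depth pixels landing on the same color pixel), which is presumably what cv2.registerDepth does internally.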

\cc @SeanCurtis-TRI

@jwnimmer-tri jwnimmer-tri added component: geometry perception How geometry appears in color, depth, and label images (via the RenderEngine API) and removed unused team: robot locomotion group labels Apr 28, 2022