Generate a colorized PCL point cloud #1143

Closed
summer1216 opened this issue Feb 7, 2018 · 10 comments

Comments

@summer1216

summer1216 commented Feb 7, 2018

Required Info
Camera Model: SR300
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Linux (Ubuntu 14)
Kernel Version (Linux Only): 4.14.18
Platform: PC
SDK Version: 2

I have already generated a live PCL point cloud, but in the form of pcl::PointCloud<pcl::PointXYZ>, that is, with no color information. I want to have it in pcl::PointCloud<pcl::PointXYZRGB> form.

What I have done:

I used the following code to generate the rs2::pointcloud,

rs2::pointcloud pc;
rs2::points points = pc.calculate(depth_frame);
pc.map_to(color_frame);

and I used points.get_vertices() to get the x, y, z values of each point and write them into the PCL point cloud. However, I couldn't find any method to get the color information (r, g, b values) for each point.
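Roughly, what I have so far looks like this (a minimal sketch; the helper name points_to_pcl is mine, not part of either library):

#include <librealsense2/rs.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Copy the rs2::points vertices into a PCL XYZ cloud (no color yet)
pcl::PointCloud<pcl::PointXYZ>::Ptr points_to_pcl(const rs2::points& points)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    const rs2::vertex* vertices = points.get_vertices();
    cloud->points.reserve(points.size());
    for (size_t i = 0; i < points.size(); ++i)
    {
        if (vertices[i].z) // keep only points with depth data
            cloud->points.emplace_back(vertices[i].x, vertices[i].y, vertices[i].z);
    }
    cloud->width = static_cast<uint32_t>(cloud->points.size());
    cloud->height = 1; // unorganized cloud
    cloud->is_dense = true;
    return cloud;
}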

I tried to use rs2::align and rs2_project_point_to_pixel to map a 3D point to a pixel in the color image, but I still don't know how to get the r, g, b values of a given pixel in the color image.

What is the best way to do that? Thank you!

@summer1216 summer1216 changed the title from "Generize a colorized PCL point cloud" to "Generate a colorized PCL point cloud" Feb 7, 2018
@zivsha
Contributor

zivsha commented Feb 7, 2018

Hi @summer1216, you're almost there. After creating the points and mapping the color texture to them, just like you've done, you can use rs2::points::get_texture_coordinates() to get the pixel in the texture that matches each point. You can see an example in the pointcloud example, which uses example.hpp:

/* this segment actually prints the pointcloud */
auto vertices = points.get_vertices();              // get vertices
auto tex_coords = points.get_texture_coordinates(); // and texture coordinates
for (int i = 0; i < points.size(); i++)
{
    if (vertices[i].z)
    {
        // upload the point and texture coordinates only for points we have depth data for
        glVertex3fv(vertices[i]);
        glTexCoord2fv(tex_coords[i]);
    }
}

@summer1216
Author

@zivsha thank you very much for your reply. I guess I'm missing some knowledge about texture coordinates; I don't know what they are, and couldn't understand them immediately after googling. Since I don't need to use OpenGL at all, I just want the r, g, b values of the points. Is there a conversion that turns the texture coordinates directly into RGB values? Thank you!

@zivsha
Contributor

zivsha commented Feb 11, 2018

Hi @summer1216 ,
Hope you managed to work this out by now; if not, I hope the following will help.
Texture coordinates are typically a pair of floats (u, v), where u and v are in the range [0, 1].
Semantically, they give the location in the 2D texture image (the RGB frame in our case) from which to take the color of a 3D point.
If you look at how we set the texture mapping values, you can see how to get the color pixel from these values:

auto tex_xy = pixel_to_texcoord(&mapped_intr, *pixels_ptr);

float2 pixel_to_texcoord(const rs2_intrinsics *intrin, const float2 & pixel) { return{ pixel.x / (intrin->width), pixel.y / (intrin->height) }; }

As you can see, we take the pixel's (x, y) and divide it by the width and height of the image to get a normalized value in [0, 1] (well, actually, the value can be < 0 or > 1, since the projection might suffer from distortions; such values should be discarded when you revert the process).
So after calculating the pointcloud and mapping the texture, if you call points.get_texture_coordinates() per index you will get the (u, v), and by multiplying it by the color image's resolution (and clamping) you can get the (x, y) pixel in the image, which you can then query for its RGB values.
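In code, a minimal sketch of that reverse mapping might look like this (the helper name points_to_pcl_rgb and the RS2_FORMAT_RGB8 assumption are mine, not librealsense API):

#include <algorithm>
#include <cstdint>
#include <librealsense2/rs.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Build a colorized PCL cloud from rs2::points and the RGB frame that was
// mapped to them with pc.map_to(color_frame). Assumes RS2_FORMAT_RGB8.
pcl::PointCloud<pcl::PointXYZRGB>::Ptr points_to_pcl_rgb(const rs2::points& points,
                                                         const rs2::video_frame& color)
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
    const rs2::vertex* vertices = points.get_vertices();
    const rs2::texture_coordinate* tex = points.get_texture_coordinates();
    const auto* color_data = static_cast<const uint8_t*>(color.get_data());
    const int w = color.get_width(), h = color.get_height();
    const int stride = color.get_stride_in_bytes();

    for (size_t i = 0; i < points.size(); ++i)
    {
        if (!vertices[i].z) continue; // no depth data for this point

        // Scale normalized (u, v) to pixel coordinates and clamp to the image bounds
        int px = std::min(std::max(static_cast<int>(tex[i].u * w + 0.5f), 0), w - 1);
        int py = std::min(std::max(static_cast<int>(tex[i].v * h + 0.5f), 0), h - 1);
        const uint8_t* pixel = color_data + py * stride + px * 3;

        pcl::PointXYZRGB p;
        p.x = vertices[i].x; p.y = vertices[i].y; p.z = vertices[i].z;
        p.r = pixel[0]; p.g = pixel[1]; p.b = pixel[2];
        cloud->points.push_back(p);
    }
    cloud->width = static_cast<uint32_t>(cloud->points.size());
    cloud->height = 1; // unorganized cloud
    cloud->is_dense = true;
    return cloud;
}

Out-of-range texture coordinates are simply clamped here; alternatively, discard those points, since they have no RGB coverage.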

@summer1216
Author

@zivsha Thank you very much. Your explanation is clear and very helpful. I have now produced the colorized point cloud using PCL. I still have two small problems to handle: (1) the values from get_texture_coordinates() are negative, so I need to change them to their opposite; (2) my RGB and depth images do not have exactly the same view, so the unmatched parts need to be excluded. But the main question is solved and this issue can be closed. Thanks again!

@AleBasso80

Hello,
when I try to create a PCL point cloud with the function below, I get a lot of white points on its borders (they are white when the projection falls outside of the RGB sensor). This does not happen if I save the point cloud from the RealSense Viewer.
Any suggestion?
Thanks!

d_intrinsics = depth_stream.as<rs2::video_stream_profile>().get_intrinsics();
c_intrinsics = color_stream.as<rs2::video_stream_profile>().get_intrinsics();
d_to_c_extrinsics = depth_stream.get_extrinsics_to(color_stream);

// PointT is a typedef for pcl::PointXYZRGB
void CreatePointCloud(pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud,
                      const std::vector<uint16_t>& vec_depth, const std::vector<uint8_t>& vec_color,
                      std::vector<float>& x, std::vector<float>& y, std::vector<float>& z)
{
    auto start = std::chrono::high_resolution_clock::now();

    int sz_depth = vec_depth.size();
    x.resize(sz_depth);
    y.resize(sz_depth);
    z.resize(sz_depth);

    // #pragma omp parallel for schedule(dynamic) // Using OpenMP to try to parallelise the loop
    for (int dy = 0; dy < d_intrinsics.height; ++dy)
    {
        for (int dx = 0; dx < d_intrinsics.width; ++dx)
        {
            int i = dy * d_intrinsics.width + dx;
            float d = vec_depth[i] * depth_scale; // raw depth to meters

            if (d > 0.2) // ignore points closer than 20 cm
            {
                PointT p;

                // Deproject the depth pixel to a 3D point in depth-camera space
                const float pixeld[2] = { (float)dx, (float)dy };
                float point[3];
                rs2_deproject_pixel_to_point(point, &d_intrinsics, pixeld, d);
                p.x = point[0];
                p.y = point[1];
                p.z = point[2];

                x[i] = p.x;
                y[i] = p.y;
                z[i] = p.z;

                // Transform the point to color-camera space and project it onto the RGB image
                float target[3];
                rs2_transform_point_to_point(target, &d_to_c_extrinsics, point);
                float pixel[2];
                rs2_project_point_to_pixel(pixel, &c_intrinsics, target);

                int cx = (int)std::round(pixel[0]);
                int cy = (int)std::round(pixel[1]);
                if (cx < 0 || cy < 0 || cx >= c_intrinsics.width || cy >= c_intrinsics.height)
                {
                    // Projection falls outside the RGB frame: color the point white
                    p.r = 255;
                    p.g = 255;
                    p.b = 255;
                }
                else
                {
                    int ind = (cy * c_intrinsics.width + cx) * 3;
                    p.r = vec_color[ind];
                    p.g = vec_color[ind + 1];
                    p.b = vec_color[ind + 2];
                }

                cloud->push_back(p);
            }
            else
            {
                x[i] = 0;
                y[i] = 0;
                z[i] = 0;
            }
        }
    }

    auto end = std::chrono::high_resolution_clock::now();
    int elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "PC creation " << std::to_string(elapsed) << " ms" << std::endl;
}

@AleBasso80

I think I understood the problem. It seems that the viewer somehow colorizes (perhaps with the closest valid pixel) the points whose back-projection onto the RGB sensor is not valid.
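If you want to mimic that, one option is to clamp the projected pixel to the image bounds instead of painting the point white. A sketch of the replacement for the out-of-bounds branch in the function above (this is my guess at the viewer's behavior, not its actual implementation):

// Clamp the projected pixel to the nearest valid RGB pixel instead of
// coloring out-of-bounds projections white (requires <algorithm> and <cmath>)
int cx = std::min(std::max((int)std::round(pixel[0]), 0), c_intrinsics.width - 1);
int cy = std::min(std::max((int)std::round(pixel[1]), 0), c_intrinsics.height - 1);
int ind = (cy * c_intrinsics.width + cx) * 3;
p.r = vec_color[ind];
p.g = vec_color[ind + 1];
p.b = vec_color[ind + 2];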

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
@AleBasso80 Glad to see the issue resolved. So can we close this issue now?

@AleBasso80

Yes, thank you.

@ljc19800331

@summer1216 Hello. I faced a similar problem with the colored point cloud (but I am coding in Python). What was your solution to problems like:
(1) The values from get_texture_coordinates() are negative and you changed them to their opposite. (Why did you change them to the opposite number?)
(2) The RGB and depth images do not have exactly the same view, and the unmatched parts need to be excluded. (I faced a similar problem as well: the color image is not successfully aligned with the 3D point cloud. If the point cloud is not aligned, how do I identify which part should be removed?)
Thank you for your answer.

@sunsided

sunsided commented Sep 7, 2020

I stumbled across a similar issue when trying to add texture mapping to an example for librealsense-rust. It wasn't until I found #6234 that I got it to work. This comment gave it away:

The Field of View (FOV) of the Depth sensor is bigger than the RGB sensor's, hence while the sensors overlap you can't have RGB data coverage on the boundaries of the depth frame.

In my case, the texture coordinates are reported as int32 values, but they really appear to be 32-bit floats. By reinterpreting the bits as a float (*(float*)&coord) I got a value range of about -0.3 ... 1.3; then discarding every coordinate value outside of the range 0 .. 1 (or simply mapping those to black) produced correct results.

It does feel a bit weird that the UV coordinates are presented as ints but really are floats; simply casting to float and scaling (e.g. by the int32 max value) didn't appear to do the trick.
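For reference, a safe way to do that bit reinterpretation in C++ is std::memcpy rather than pointer casting (a small sketch, assuming the raw int32 really holds float bits):

#include <cstdint>
#include <cstring>

// Reinterpret a raw 32-bit value as a float without violating strict aliasing
float texcoord_from_bits(int32_t raw)
{
    float f;
    std::memcpy(&f, &raw, sizeof(f)); // bit-for-bit copy
    return f;
}

// Coordinates outside [0, 1] have no RGB coverage and should be discarded
// (or mapped to a sentinel color such as black)
bool in_texture(float u, float v)
{
    return u >= 0.f && u <= 1.f && v >= 0.f && v <= 1.f;
}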
