
How to estimate pose of each point in pointcloud? #10222

Closed
Chezhian531 opened this issue Feb 6, 2022 · 33 comments

Comments

@Chezhian531

I am using an Intel RealSense D435i camera on Ubuntu. I want to estimate the 3D pose of every point in the point cloud with respect to the camera. Could you suggest some ways to achieve this?
@MartyG-RealSense
Collaborator

Hi @Chezhian531 If you wanted to obtain the pose (xyz position and rotation angle) of an observed point on a pointcloud, #7884 and #7560 may be helpful references to begin with.

If these references do not meet your requirements or you have further questions based on the contents of those links, please do let me know. Thanks!

@Chezhian531
Author

Thanks for the response. I have looked into the links. I also have a question about publishing the point cloud information as ROS topics in MATLAB. I was able to stream /camera/color/image_rect_raw in MATLAB, but I am not able to stream /camera/depth_registered/points. Why is that?

For example, if I publish the color stream of my room in MATLAB, say at 1280x720 pixels, why does the point cloud for the same stream have a larger pixel size?

I am quite new to both RealSense and ROS. Why is the point cloud output shown on the grid, whereas the color stream output is shown in the left-hand panel of the RViz window?

Can one not see a stream of point clouds published by ROS in MATLAB at all?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 6, 2022

My understanding is that to publish /camera/depth_registered/points you should use the RealSense ROS wrapper launch instruction rs_rgbd.launch. For example:

roslaunch realsense2_camera rs_rgbd.launch

Support for the RGBD launch mode should be installed before performing this launch. IntelRealSense/realsense-ros#2092 (comment) has information about doing this.
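On an apt-based ROS installation, that support is typically added by installing the rgbd_launch package; the exact package name below is an assumption for a standard distro setup:

sudo apt-get install ros-$ROS_DISTRO-rgbd-launch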

Registered depth, referred to by the term 'depth_registered', means aligning the depth stream to the color stream, as described at IntelRealSense/realsense-ros#2047 (comment).

When depth is aligned to color in RealSense, the depth field of view (FOV) is automatically resized to match the FOV of the color stream. The D435i camera model has a smaller color FOV than depth FOV, so when the depth FOV is automatically adjusted, the outer regions of the depth information can be excluded from the aligned image.

The documentation of the RealSense ROS wrapper provides the following advice about this aspect of alignment.


The depth FOV and the texture FOV are not similar. By default, pointcloud is limited to the section of depth containing the texture. You can have a full depth to pointcloud, coloring the regions beyond the texture with zeros, by setting allow_no_texture_points to true.
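For example, that parameter can be set directly on the launch command (assuming it is exposed as a launch argument, as in the wrapper's standard launch files):

roslaunch realsense2_camera rs_rgbd.launch allow_no_texture_points:=true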


I regard the color stream display in the left-side panel of RViz during pointcloud streaming as just a visual verification that the color data is being published correctly.

In regard to your question "Can one not see a stream of point clouds published by ROS in MATLAB", I do not have knowledge of streaming RealSense ROS into MATLAB and so unfortunately cannot advise on that particular subject.

@Chezhian531
Author

Thanks for the reply. I am not quite sure how I would be able to estimate the position of an object using point clouds. Could you please guide me?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 7, 2022

If you use rs_rgbd.launch and want to obtain an xyz pointcloud, IntelRealSense/realsense-ros#851 (comment) suggests accessing depth_image_proc/point_cloud_xyz.
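As a rough sketch, that nodelet can be run standalone and remapped onto the wrapper's depth topics; the topic names here are assumptions based on the wrapper's defaults:

rosrun nodelet nodelet standalone depth_image_proc/point_cloud_xyz camera_info:=/camera/depth/camera_info image_rect:=/camera/depth/image_rect_raw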

@Chezhian531
Author

Thanks, I resolved the issue of the point cloud stream in MATLAB.

But another issue has popped up. I want to create a point cloud of a 3D object. Could you suggest some methods with which I can achieve this?

@MartyG-RealSense
Collaborator

If you needed a 360 degree pointcloud of a 3D object and had to do it in ROS then you could capture individual views from more than one camera in real-time and then stitch the individual point clouds together into a single combined cloud. Intel have a guide for doing so with RealSense ROS at the link below.

https://github.com/IntelRealSense/realsense-ros/wiki/Showcase-of-using-2-cameras

Another ROS approach would be to use Intel's D435i SLAM guide to record a point cloud map of a scene with rtabmap_ros and then save the point cloud as a .pcd file, as described here:

https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i


If you do not need to use ROS for this particular application then you could export a point cloud from the RealSense Viewer tool's 3D mode as a .ply point cloud data file.

Alternatively, if you need to build up a 360 degree point cloud and are able to use the C++ language, then the rs-kinfu tool enables you to use a single camera and move it around a scene, progressively building up detail on the point cloud until you are satisfied with the level of detail, and then export it as a .ply file.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/opencv/kinfu

@MartyG-RealSense
Collaborator

Hi @Chezhian531 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.

@Abirami-Segar

Hi

I have a similar problem. I have a D435i attached to the robot end effector. I want to know the orientation of a point on the object w.r.t. the camera's frame. How can I obtain this?

P.S. I have read #7884 and #7560, and I might need more support from you.

@MartyG-RealSense
Collaborator

Hi @Abirami-Segar In addition to the resources in the links that you provided, a couple more examples of OpenCV solvePnP pose algorithms for Python can be found at #10233 (comment)

@Abirami-Segar

Thanks for the reply. Could you tell me where I can find the coordinate system of the RealSense camera? Where is the origin of the camera's axes?

@MartyG-RealSense
Collaborator

The documentation link below shows the coordinate system of the D435i.

https://github.com/IntelRealSense/librealsense/blob/master/doc/d435i.md#sensor-origin-and-coordinate-system

#7568 (comment) and #7279 (comment) may also be helpful references.

@Abirami-Segar

Thanks for the useful links. I have another question. I used the depth camera to plot a 3D scene, holding the camera away from the scene at a known height. When I recorded the point clouds and plotted them, I realized that only the depth value in the plot was accurate. Now that I know the camera's coordinate system and the xyz axes about its origin, I don't expect the x and y coordinates to be large for a point in the point cloud that is not far from the camera's origin. I understand that there is some misunderstanding of concepts on my end. Could you please explain?

@MartyG-RealSense
Collaborator

Are you using Python to align depth and color, obtaining the distance value with the points.get_distance() instruction, and finding that the distance is accurate at the center of the image but increasingly in error towards the outer areas of the image?

If you are, this is a known issue that can occur with using alignment with points.get_distance. An example of this is discussed at #6749 (comment) and it is suggested that the camera be calibrated to see whether it improves results.

@Abirami-Segar

I am not using Python; I am using MATLAB. I am more interested in a 3D object's point cloud. I want to plot the 3D object along with its scene. The range of x and y values is far too large, even though these points are not far from the origin of the depth camera.

@MartyG-RealSense
Collaborator

If you want to plot a point cloud in MATLAB, are you using the RealSense MATLAB wrapper or do you generate the point cloud using the in-built functions of MATLAB?

@Abirami-Segar

Abirami-Segar commented Apr 5, 2022

I am using inbuilt MATLAB functions. My problem is that I am holding the camera at a distance of 30 cm. Directly below the camera I have an object which is 10 cm in length, 5 cm in width and 3 cm in depth. I am interested in a point right in the middle of the object, directly below the camera. Therefore I don't expect the x and y values (from the point cloud plot) to be in the range of 10-20 cm; I expect them to be in the range of 1-2 cm. But the plot shows xyz values of (0.267, 0.045, 0.321). The y and z values are acceptable, so why is there so much error in the x value?

@MartyG-RealSense
Collaborator

There is a similar sounding MATLAB pointcloud case at the link below where the MATLAB user was unhappy with a significantly inaccurate x value.

https://uk.mathworks.com/matlabcentral/answers/266182-x-y-and-z-co-ordinates-from-3d-point-cloud-are-not-accurate-showing-considerable-error-in-x-co-ord

@Abirami-Segar

Abirami-Segar commented Apr 6, 2022

Thanks for the comment. I have this error even before I use the inbuilt MATLAB functions. I first run roscore in a terminal, then launch rs_rgbd.launch, then subscribe to the ROS topic (depth_registered) in Simulink, and then I collect the xyz data and plot it. Even then there is this error.

Also, in some cases (different camera and object positions) the error seems to persist in the y measurement as well.

What exactly is the meaning of the point cloud message? Is it the distance measured between the camera's origin and the scene? Please help.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 6, 2022

Would it be possible to provide an RGB image of the scene that the camera is observing, so that I can see whether there are aspects of the scene that might be difficult for the camera to read? Examples of such aspects would be smooth objects with low or no texture on their surface, a surface that is colored dark grey / black, or a surface that has light reflections.

@Abirami-Segar

Many thanks for the reply. Could you please tell me what the point cloud of a scene represents? Is it the distance of a point in the scene, in all three directions, from the camera's optical center? My surface is actually grey in colour.

Also, what is the physical meaning of the point cloud data? Sorry if my question is primitive. I was under the impression that the point cloud stream is the physical distance between a point in the scene and the camera's optical center.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 6, 2022

A 2D depth image stores depth values at pixel coordinates (XY) on that image. A point cloud 'projects' those pixel coordinates into 3D so that they represent coordinates in real-world space (XYZ), with the Z depth coordinate being the distance from the camera's 0,0,0 depth origin in real-world measurement units (meters). On 400 Series cameras, the depth origin is the center-line of the left infrared imaging sensor.

The darker a shade of grey is, the more difficult it will be for the camera to analyze, because the color absorbs light. This is a general physics principle, not specific to RealSense cameras. The amount of information returned to the camera progressively decreases as the shade becomes darker.

There was a past case where a clothing sweater with grey and red stripes was depth sensed. The red stripes were fully visible on the depth image but the areas representing the grey stripes were just empty black regions with no depth detail because the camera could not obtain depth information from the grey parts.

Casting a strong light source onto grey or black areas of a real-world surface can help to bring out depth detail from it.

@Chezhian531
Author

Hi

I am also a beginner in the point cloud area. I didn't quite understand the meaning of a 2D depth image. Does that mean each pixel containing an (x,y) coordinate is stored as a depth image? Could you please explain more?

So is the point cloud NOT the physical distance between a point in the scene and the camera's origin?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 8, 2022

Hi @Chezhian531 The depth pointcloud represents real-world 'Z' distances in meters from the camera to the point in the scene.

It will likely not be a measurement that is exactly the same as a real-world distance read with a tape measure though. This is because the cameras have an RMS Error factor, where error starts at around zero at the camera lenses and increases linearly over distance. Other factors in the scene such as lighting and environment may also affect the extent of error in the measurement.

The RealSense SDK's official Projection documentation at the link below explains the difference between 2D (pixel) and 3D (point) coordinate systems.

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20

@Chezhian531
Author

Is there a way to get the real world x and y coordinates from the depth stream?

@MartyG-RealSense
Collaborator

You could use one of the approaches below.

  1. Align the depth stream to the color stream with align_to and then 'deproject' the aligned image with rs2_deproject_pixel_to_point to obtain 3D X, Y and Z coordinates (a minimal sketch of this approach follows below).

  2. Use a simpler approach of generating a point cloud directly with pc.calculate and then obtaining the X, Y and Z coordinates from it with points.get_vertices.
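For illustration, here is a minimal pyrealsense2 (Python) sketch of approach 1; the stream settings and the example pixel are assumptions, not values from this discussion:

    import pyrealsense2 as rs

    # Start depth and color streams (resolutions are assumed example values)
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)

    align = rs.align(rs.stream.color)  # align depth onto the color frame

    try:
        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        depth_frame = aligned.get_depth_frame()
        color_frame = aligned.get_color_frame()

        # Intrinsics of the stream that depth is now aligned to (color)
        intrin = color_frame.profile.as_video_stream_profile().intrinsics

        px, py = 320, 240                           # example pixel (image center)
        depth_m = depth_frame.get_distance(px, py)  # Z depth at that pixel, in meters
        # Deproject the pixel and its depth into a 3D point [X, Y, Z] in meters
        point = rs.rs2_deproject_pixel_to_point(intrin, [px, py], depth_m)
        print("XYZ (m):", point)
    finally:
        pipeline.stop()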

@Chezhian531
Author

Is a similar function available in MATLAB?

@MartyG-RealSense
Collaborator

If you are a Windows user then you can interface with MATLAB through the RealSense MATLAB wrapper. Otherwise, MATLAB does have in-built 3D coordinate functions.

If you would prefer to perform the procedure directly inside MATLAB with its built-in functions then the link below may be helpful.

https://www.mathworks.com/matlabcentral/answers/584420-how-to-get-3d-world-coordinates-from-2d-image-coordinates

@Chezhian531
Author

Could you please provide some examples of using pc.calculate and then obtaining XYZ with points.get_vertices?

@MartyG-RealSense
Collaborator

Which programming language are you using, please (C++, Python or C#)?

@Chezhian531
Author

I am asking in order to understand the program, so that I can implement the logic in MATLAB. Could you please provide the examples in C++ and Python?

@MartyG-RealSense
Collaborator

If using the RealSense MATLAB wrapper with Windows, the RealSense SDK has an official pointcloud example program for the wrapper called pointcloud_example.m that makes use of points.get_vertices to obtain X, Y and Z.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/matlab/pointcloud_example.m#L30-L33

#4612 (comment) has a Python example of points.get_vertices.

#5728 is a C++ example.
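For reference, a minimal pyrealsense2 (Python) sketch of the pc.calculate / points.get_vertices approach might look like the following; the numpy reshaping idiom is an assumption based on common SDK samples:

    import numpy as np
    import pyrealsense2 as rs

    # Start the camera with its default stream profiles
    pipeline = rs.pipeline()
    pipeline.start()

    pc = rs.pointcloud()
    try:
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()

        points = pc.calculate(depth_frame)  # build the point cloud from depth
        # get_vertices() yields one (x, y, z) vertex per depth pixel, in meters
        verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
        print("first vertex XYZ (m):", verts[0])
    finally:
        pipeline.stop()

The same calculate-then-get_vertices sequence appears in the MATLAB wrapper's pointcloud_example.m linked above, so the logic should carry over.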
