
Coordinate system #11493

Closed
IndiGleb opened this issue Feb 27, 2023 · 29 comments

@IndiGleb

Required Info
Camera Model { R200 / F200 / SR300 / ZR300 / D400 }
Firmware Version (Open RealSense Viewer --> Click info)
Operating System & Version {Win (8.1/10) / Linux (Ubuntu 14/16/17) / MacOS
Kernel Version (Linux Only) (e.g. 4.14.13)
Platform PC/Raspberry Pi/ NVIDIA Jetson / etc..
SDK Version { legacy / 2.. }
Language {C/C#/labview/nodejs/opencv/pcl/python/unity }
Segment {Robot/Smartphone/VR/AR/others }

Issue Description

<Describe your issue / question / feature request / etc..>

How can I get coordinates from the camera (D455)? Is there a function that provides this? Ultimately, I want a matrix containing the (x, y, z) coordinates for each pixel.

@MartyG-RealSense
Collaborator

Hi @IndiGleb It sounds as though exporting the data to a .csv file may fit your needs. A .csv file is a textual file of all the coordinates that can be imported into a database or spreadsheet. A csv could be exported directly with a program script, or you could record a bag file of the camera streams and then convert the bag to csv format with the RealSense SDK's rs-convert tool. This subject is discussed at #11090

@IndiGleb
Author

I saw that in the Viewer you can read the coordinate values at a point. My task is to get a three-channel matrix containing the (x, y, z) coordinates.
[screenshot: RealSense Viewer showing coordinate values at a cursor point]

@IndiGleb
Author

What method is that? I searched for it in the SDK, but I can't find it.

@MartyG-RealSense
Collaborator

You could generate a pointcloud with pc.calculate and then extract and print the XYZ values of each coordinate with the instruction points.get_vertices. #4612 (comment) has a Python example of points.get_vertices whilst #5728 is a C++ example.

@IndiGleb
Author

Can I measure coordinates without a pointcloud?

@MartyG-RealSense
Collaborator

Obtaining the 3D world-space coordinate of a single specific pixel on an image, without using a pointcloud or alignment, can be done with the instruction rs2_project_color_pixel_to_depth_pixel. Information about this instruction is at #5603 (comment)

@IndiGleb
Author

Where can I find this method?

@MartyG-RealSense
Collaborator

Are you using Python or C++ please?

@IndiGleb
Author

C++. I want to know where the coordinates measured by the pointcloud and by rs2_project_color_pixel_to_depth_pixel come from.

@MartyG-RealSense
Collaborator

A C++ example of using rs2_project_color_pixel_to_depth_pixel is at #6239 (comment)

For the pointcloud, using points.get_vertices like in #5728 to obtain the coordinates will likely be the best method to use.

@IndiGleb
Author

In which .cpp source file can I find the implementation of these methods?

@IndiGleb
Author

I'm trying to figure out exactly how the coordinates are calculated.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 28, 2023

These two sections of the SDK's source code may be especially relevant:

rs.cpp

librealsense/src/rs.cpp

Lines 3762 to 3769 in 3371f4d

void rs2_project_color_pixel_to_depth_pixel(float to_pixel[2],
const uint16_t* data, float depth_scale,
float depth_min, float depth_max,
const struct rs2_intrinsics* depth_intrin,
const struct rs2_intrinsics* color_intrin,
const struct rs2_extrinsics* color_to_depth,
const struct rs2_extrinsics* depth_to_color,
const float from_pixel[2]) BEGIN_API_CALL

unit-tests-live.cpp

// Search along a projected beam from 0.1m to 10 meter
rs2_project_color_pixel_to_depth_pixel(to_pixel, reinterpret_cast<const uint16_t*>(depth.get_data()), depth_scale, 0.1f, 10,
    &depth_intrin, &color_intrin,
    &color_extrin_to_depth, &depth_extrin_to_color, from_pixel);
float dist = static_cast<float>(sqrt(pow((depth_pixel[1] - to_pixel[1]), 2) + pow((depth_pixel[0] - to_pixel[0]), 2)));
if (dist > 1)
    count++;
if (dist > 2)
{
    WARN("Projecting color->depth, distance > 2 pixels. Origin: ["
        << depth_pixel[0] << "," << depth_pixel[1] << "], Projected: ["
        << to_pixel[0] << "," << to_pixel[1] << "]");
}

@IndiGleb
Author

IndiGleb commented Mar 1, 2023

Does post-processing work when the camera is enabled over ethernet?

@MartyG-RealSense
Collaborator

My understanding from a RealSense team member's advice at #6376 is that if a network camera running on the rs-server ethernet networking system has a serial number defined in a script then it should behave in that script like a normal non-networked camera. I cannot recall a previous case of post-processing being applied to a network camera though.

The rs-server ethernet system is being removed in the next RealSense SDK version however, so it would not be a suitable networking solution to choose if you plan to update the SDK in future.

The EtherSense Python-based ethernet system should continue to work.

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/ethernet_client_server
https://dev.intelrealsense.com/docs/depth-camera-over-ethernet-whitepaper

@IndiGleb
Author

IndiGleb commented Mar 1, 2023

How can I disable post-processing in code?

@MartyG-RealSense
Collaborator

You do not need to disable post-processing filters in code when creating your own program as they are not active by default and have to be deliberately programmed into a script. They are only enabled by default in the RealSense Viewer tool.

@IndiGleb
Author

IndiGleb commented Mar 1, 2023

How can I check this?

@MartyG-RealSense
Collaborator

You do not need to check if the filters are disabled if you have not programmed them into your script as the filters will not be applied. This is because the program does not know how to use filters unless you tell it how to by adding post-processing filter code.

@IndiGleb
Author

IndiGleb commented Mar 6, 2023

How does EtherSense differ from rs-server?

@MartyG-RealSense
Collaborator

They share the general principle of being able to send data from a camera attached to a computer (the remote server) to another computer with a display and no camera attached (the host) but otherwise they are completely different in how they work.

@IndiGleb
Author

IndiGleb commented Mar 6, 2023

How will the image be sent over the network?

@IndiGleb
Author

IndiGleb commented Mar 6, 2023

Will BGR remain JPEG?

@IndiGleb
Author

IndiGleb commented Mar 6, 2023

Or maybe H264?

@IndiGleb
Author

IndiGleb commented Mar 6, 2023

Where can I read more about EtherSense?

@IndiGleb
Author

IndiGleb commented Mar 6, 2023

I mean, how will the image be compressed, both RGB and depth?

@MartyG-RealSense
Collaborator

It is sent by the TCP protocol in EtherSense, whilst rs-server uses the RTSP protocol.

There are no further information resources about EtherSense, unfortunately. The whitepaper linked earlier is the best guide available.

https://dev.intelrealsense.com/docs/depth-camera-over-ethernet-whitepaper

It appears from the section of the paper linked to below that the streams are uncompressed, but advice is offered on different ways to reduce bandwidth.

https://dev.intelrealsense.com/docs/depth-camera-over-ethernet-whitepaper#network-bandwidth

@MartyG-RealSense
Collaborator

Hi @IndiGleb Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
