saving and working with depth images in python #1312
Your code is correct. The depth image is a single-channel 16-bit image. In the first snippet, the shape of …; in the second snippet, the variable …
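For a quick check of that, assuming the frame was saved as a 16-bit PNG (`depth.png` is a placeholder name):

```python
import cv2

img = cv2.imread('depth.png', cv2.IMREAD_UNCHANGED)
# a depth frame should come back as uint16 with a 2-D (height, width) shape
print(img.dtype, img.shape)
```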
@waspinator Did you get it sorted out? Any other questions about this?
Yes, saving as …
Is there another way of saving depth images as PNG with rosbag? (I'm not familiar with it at all and am looking for something user/beginner-friendly.)
Using a RealSense D435i, I'm capturing data with ROS into bag files with the standard topics. In particular I have:
- `/aligned_depth_to_color/image_raw` (sensor_msgs/Image)
- `/color/image_raw` (sensor_msgs/Image)
- `/extrinsics/depth_to_color` (realsense2_camera/Extrinsics)

My understanding is that color images can be saved in whatever format is convenient (jpeg, png, ...), but that depth should be saved as .raw. I'm not sure how to save, open, or interpret .raw files.
I'm reading the messages using rosbag and saving them to file using OpenCV.
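Something like this minimal sketch (`capture.bag` and the output filename pattern are placeholders; `cv_bridge` converts the message to a numpy array):

```python
import cv2
import rosbag
from cv_bridge import CvBridge

bridge = CvBridge()
bag = rosbag.Bag('capture.bag')  # placeholder filename

for i, (topic, msg, t) in enumerate(
        bag.read_messages(topics=['/aligned_depth_to_color/image_raw'])):
    # 'passthrough' keeps the 16UC1 encoding, so the depth values (in mm) survive
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
    cv2.imwrite('depth_%06d.png' % i, depth)   # a uint16 array is written as 16-bit PNG
    # cv2.imwrite('depth_%06d.raw' % i, depth) # swapping the extension to .raw fails

bag.close()
```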
But OpenCV doesn't seem to understand what I mean by "raw":
```
could not find a writer for the specified extension in function imwrite
```

Saving as png produces a result that seems to match distance in mm, but from reading other issues (IntelRealSense/librealsense#4427) it seems that doesn't save the correct depth data.

Once I have a method of saving into the "raw" format, how would I read it? For example, I would read a regular png file (with presumably wrong depth data) using
```python
img = cv2.imread('depth.png', cv2.IMREAD_UNCHANGED)
```

which would produce a numpy array of the depth values. How would I read a raw file to get a numpy array of depth values?
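From what I can tell, a headerless .raw dump would be round-tripped with plain numpy; the file stores neither shape nor dtype, so both have to be supplied when reading (848x480 below is the D435i's default depth resolution, an assumption on my part):

```python
import numpy as np

height, width = 480, 848  # assumed D435i depth resolution; a .raw file stores no shape

# writing: dump the raw uint16 buffer with no header
depth = np.zeros((height, width), dtype=np.uint16)  # stand-in for a real depth frame
depth.tofile('depth.raw')

# reading: dtype and shape must be given explicitly, since the file has no metadata
restored = np.fromfile('depth.raw', dtype=np.uint16).reshape(height, width)
```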
Once I have my data saved to file, my end goal is to measure the distance to an object, its width, and its height. I'm able to extract the pixel locations of the bottom, top, left, and right extents, but I would have to convert those to real-world coordinates in order to measure the distances. I probably should try using `get_distance` and `rs2_deproject_pixel_to_point()`. Which intrinsics do I need (`/aligned_depth_to_color/camera_info`?), and how would I convert them from `sensor_msgs/CameraInfo` to something `pyrealsense2` could work with?

For example, ROS reports the following values in
`/aligned_depth_to_color/camera_info`:

…

These seem to line up nicely with the attributes of `pyrealsense2.intrinsics`, but it's missing the distortion coefficients (
`coeffs`). So I could do something like this:
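Perhaps along these lines; the zeroed coefficients and `rs.distortion.none` are guesses for the aligned (rectified) stream, and `intrinsics_from_camera_info` is just an illustrative name:

```python
import pyrealsense2 as rs

def intrinsics_from_camera_info(msg):
    """Build a pyrealsense2.intrinsics from a sensor_msgs/CameraInfo message."""
    intrin = rs.intrinsics()
    intrin.width = msg.width
    intrin.height = msg.height
    intrin.fx = msg.K[0]   # K is the row-major 3x3 camera matrix
    intrin.fy = msg.K[4]
    intrin.ppx = msg.K[2]  # principal point
    intrin.ppy = msg.K[5]
    # guess: the aligned stream is already rectified, so no distortion model
    intrin.model = rs.distortion.none
    intrin.coeffs = [0.0, 0.0, 0.0, 0.0, 0.0]
    return intrin

# then deproject two pixels and measure between the resulting 3D points:
#   import math
#   left = rs.rs2_deproject_pixel_to_point(intrin, [u_left, v], depth_left_m)
#   right = rs.rs2_deproject_pixel_to_point(intrin, [u_right, v], depth_right_m)
#   width_m = math.dist(left, right)
```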
Are there examples of how I would use ROS topics and/or saved color and depth files (as opposed to a live video stream) to measure dimensions?
Thanks