
RGB and depth frames are out of sync #10850

Closed
pm222 opened this issue Aug 31, 2022 · 3 comments

Comments


pm222 commented Aug 31, 2022


Required Info
Camera Model: D400
Firmware Version: v2.50.0
Operating System & Version: Ubuntu 20.04
Kernel Version (Linux Only): 5.15.0-46-generic
Platform: PC
SDK Version: 2.0
Language: python
Segment: others

Issue Description

While reading frames from a recorded .bag file I noticed that the depth frames seem to be out of sync with the RGB frames: for every Nth RGB image, the Nth depth image is a bit off, but if I compare the Nth RGB image with the (N+1)th depth image, they appear to be in sync. See the attached images.

The frames are exported with this code:
import pyrealsense2 as rs
import numpy as np
import cv2

# input_file, Depth_dir and Colored_depth_dir are defined elsewhere in the script

# Create pipeline
pipeline = rs.pipeline()

# Create a config object
config = rs.config()

# Tell config that we will use a recorded device from file to be used by the pipeline through playback.
config.enable_device_from_file(input_file, repeat_playback=False)

config.enable_stream(rs.stream.color, 1920, 1080, rs.format.rgb8, 6)
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 6)

# Start streaming from file
profile = pipeline.start(config)

# Needed so frames don't get dropped during processing:
profile.get_device().as_playback().set_real_time(False)

# Align the depth frames to the color stream
align_to = rs.align(rs.stream.color)

i = 0
while True:
    i += 1

    # Get frameset (raises once playback reaches the end of the file, since repeat_playback=False)
    frames = pipeline.wait_for_frames()

    aligned = align_to.process(frames)
    color_frame = aligned.get_color_frame()
    color_image = np.asanyarray(color_frame.get_data())

    depth_frame = aligned.get_depth_frame()
    depth_image = np.asanyarray(depth_frame.get_data())

    # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
    depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

    cv2.imwrite(f'{Depth_dir}/frame_{str(i).zfill(3)}_depth.png', depth_image)
    cv2.imwrite(f'{Colored_depth_dir}/frame_{str(i).zfill(3)}_colored_depth.png', depth_colormap)
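
The snippet above only writes the depth images; for reference, here is a minimal sketch of how the aligned color image and the per-frame metadata quoted below can be read with the pyrealsense2 frame API (the dump_frameset name and the color_dir argument are illustrative, not from the original code):

import numpy as np
import cv2

def dump_frameset(i, color_frame, depth_frame, color_dir):
    # Save the aligned color image; the stream is rgb8, while OpenCV expects BGR
    color_image = np.asanyarray(color_frame.get_data())
    cv2.imwrite(f'{color_dir}/frame_{str(i).zfill(3)}_color.png',
                cv2.cvtColor(color_image, cv2.COLOR_RGB2BGR))

    # Per-frame metadata as reported by librealsense
    print(f'{i}th frameset:')
    print(f'color frame timestamp: {color_frame.get_timestamp()}')
    print(f'depth frame timestamp: {depth_frame.get_timestamp()}')
    print(f'color frame number: {color_frame.get_frame_number()}')
    print(f'depth frame number: {depth_frame.get_frame_number()}')

It would be called once at the end of the loop body, e.g. dump_frameset(i, color_frame, depth_frame, color_dir).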

The timestamps and frame IDs:
109th frameset:
color frame timestamp: 1657268335541.0369
depth frame timestamp: 1657268335539.4014
color frame number: 1448
depth frame number: 515

110th frameset:
color frame timestamp: 1657268335708.7126
depth frame timestamp: 1657268335707.0774
color frame number: 1449
depth frame number: 516

The video was recorded while moving at around 40 cm/s. What could be the reason for this sync issue?
Images will be uploaded soon.

[Attached images: rgb_109_depth_109, rgb_109_depth_110]

@MartyG-RealSense
Collaborator

Hi @pm222. At #1548 (comment), a RealSense team member advises that RGB and depth frames have a temporal offset bounded by a period of one frame, and that wait_for_frames() finds the best match between their timestamps.
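
For reference, at the 6 FPS used above the frame period is roughly 167 ms, and the framesets quoted earlier are matched far more tightly than that; a small illustrative check using the numbers from the 109th frameset:

# Color/depth timestamp offset within the 109th frameset quoted above
color_ts = 1657268335541.0369   # ms, from color_frame.get_timestamp()
depth_ts = 1657268335539.4014   # ms, from depth_frame.get_timestamp()
frame_period_ms = 1000.0 / 6    # ~166.7 ms at 6 FPS

offset_ms = abs(color_ts - depth_ts)
print(f'offset: {offset_ms:.2f} ms, frame period: {frame_period_ms:.1f} ms')
# offset is ~1.64 ms, i.e. well within the one-frame bound described above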

I also note that you are using kernel 5.15. This kernel is not yet officially supported by the RealSense SDK, though information about an unofficial test patch that adds 5.13 and 5.15 support is available at #10439 (comment).

@MartyG-RealSense
Collaborator

Hi @pm222, do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
