Replies: 8 comments 5 replies
-
Hi @mahir1010. In a discussion at #1957 (comment) about the maths of enforcing a constant FPS when using manual exposure, a RealSense team member suggests increasing the gain value to improve depth performance. In regard to applying settings to individual streams, you could set up a separate pipeline for each stream. There is a Python example at #5628 (comment) that creates two pipelines and places depth + color on one pipeline and IMU on the other. So it may be possible to adapt that script to have depth on one pipeline and color on the other.
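As a rough illustration of that two-pipeline approach, a sketch along these lines (assuming pyrealsense2 is installed and a compatible camera such as the D435 is attached; the resolutions, format, and frame count are illustrative, not recommendations) gives each stream its own pipeline and therefore its own frame queue:

```python
import pyrealsense2 as rs

# One pipeline per stream, so each stream is queued and configured
# independently of the other.
depth_pipe, depth_cfg = rs.pipeline(), rs.config()
depth_cfg.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)

color_pipe, color_cfg = rs.pipeline(), rs.config()
color_cfg.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 60)

depth_pipe.start(depth_cfg)
color_pipe.start(color_cfg)
try:
    for _ in range(300):
        depth_frames = depth_pipe.wait_for_frames()
        color_frames = color_pipe.wait_for_frames()
        # ... record or process each stream independently ...
finally:
    depth_pipe.stop()
    color_pipe.stop()
```

Note that frames fetched from separate pipelines are not delivered as a single synchronized frameset, which may be acceptable here since the recording setup described in this thread does not process framesets.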
-
The larger the frame queue size, the more of the computer's available memory will be consumed, because frames are released from the pipeline less often.
-
I read this issue again from the beginning. You mention that you are setting 60 FPS for depth and color but actually getting ~59.3Hz on the RGB sensor and ~59.8Hz on the depth sensor. That is actually excellent performance, as it is rare for the actual FPS to be exactly the value that was set. For example, if 30 FPS is set in the RealSense Viewer tool and the real-time information overlay is activated, the displayed actual FPS during streaming will typically be around 29.8. When talking about 'frame drops', this typically means image frames that have been lost and are therefore missing from a bag file recording. Is that what is happening to you, or was your question about dropped frames related to the lower-than-expected FPS speed, please?
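One way to check the delivered rate outside the Viewer is to time a fixed number of frames from a running pipeline. This is a minimal sketch assuming pyrealsense2 and an attached camera; the stream settings and sample count are illustrative:

```python
import time
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 60)
pipe.start(cfg)
try:
    pipe.wait_for_frames()            # skip the first frame so startup cost
                                      # does not skew the measurement
    n = 300
    t0 = time.perf_counter()
    for _ in range(n):
        pipe.wait_for_frames()
    elapsed = time.perf_counter() - t0
    print(f"actual FPS: {n / elapsed:.1f}")  # usually slightly under the set 60
finally:
    pipe.stop()
```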
-
Instead of enforcing a constant FPS using a manual exposure value, the alternative method of FPS enforcement is to have auto-exposure enabled and an RGB option called auto-exposure priority disabled. The hardware specification of the computer can have a bearing on dropped frames. For example, if the storage drive that the data is being written to has a slow access speed then it can create a 'bottleneck' delay in writing frames. Having recording compression enabled may also cause dropped frames on some computers. A RealSense team member provides advice about this at #2102 (comment). At #9022 (comment) a RealSense user provides a very detailed analysis of frame queue size based on their research.
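The auto-exposure / auto-exposure-priority combination described above can be set through the per-sensor option interface. A minimal sketch, assuming pyrealsense2 and an attached camera:

```python
import pyrealsense2 as rs

pipe = rs.pipeline()
profile = pipe.start()  # default stream configuration, for illustration

for sensor in profile.get_device().query_sensors():
    # Enable auto-exposure on every sensor that supports it...
    if sensor.supports(rs.option.enable_auto_exposure):
        sensor.set_option(rs.option.enable_auto_exposure, 1)
    # ...but disable auto-exposure priority (an RGB-sensor option) so the
    # color stream is not allowed to lower its FPS to gather more light.
    if sensor.supports(rs.option.auto_exposure_priority):
        sensor.set_option(rs.option.auto_exposure_priority, 0)

pipe.stop()
```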
-
The hardware spec should be more than enough. We use an i7-10850H, 128GB 2900MHz RAM, and a 4TB NVMe SSD. Also, as I said, this happens randomly, which is even worse. As for RS2_OPTION_FRAMES_QUEUE_SIZE, I tried the
-
Is the queue size set before recording begins? If changes to settings are made after recording starts then those changes are not applied to the recording and it only uses the settings that were configured before recording began.
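In code terms, that ordering means applying RS2_OPTION_FRAMES_QUEUE_SIZE to the sensors before the pipeline that records to file is started. A minimal sketch, assuming pyrealsense2, an attached camera, and a placeholder queue size and filename:

```python
import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]

# Apply the queue size to each sensor BEFORE recording starts; changes
# made after start() are not picked up by the recording.
for sensor in dev.query_sensors():
    if sensor.supports(rs.option.frames_queue_size):
        sensor.set_option(rs.option.frames_queue_size, 32)  # placeholder value

cfg = rs.config()
cfg.enable_record_to_file("recording.bag")  # placeholder filename

pipe = rs.pipeline(ctx)
pipe.start(cfg)
```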
-
As you are using both depth and color, I would not rule out the possibility of the color stream being a factor in your dropped frames. At #2637 (comment) a RealSense team member lists the reasons why using color in a multiple-camera hardware sync setup where depth is synced and color is unsynced can cause problems.
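For context, hardware sync on these cameras is configured through the inter-cam sync mode option on the depth (stereo) sensor only; the RGB sensor stays free-running. A minimal sketch of a master/slave arrangement, assuming pyrealsense2, multiple attached cameras, and the conventional mode numbering (1 = master, 2 = slave):

```python
import pyrealsense2 as rs

ctx = rs.context()
for i, dev in enumerate(ctx.query_devices()):
    depth_sensor = dev.first_depth_sensor()
    # Make the first camera master and the rest slaves.  Only the depth
    # sensor takes this option; RGB is not hardware-synced.
    mode = 1 if i == 0 else 2
    if depth_sensor.supports(rs.option.inter_cam_sync_mode):
        depth_sensor.set_option(rs.option.inter_cam_sync_mode, mode)
```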
-
The intention of my suggestion was to explore reasons why frame drops might be occurring in a multi-camera setup. As you are using manual exposure, setting RGB to 6 FPS instead of 60 and setting RGB exposure to 78 instead of 156 may result in less lag on the RGB stream. 60 FPS RGB will work best when auto-exposure is enabled. These characteristics are due to the 'rolling' shutter on the D435's RGB sensor being slower than the fast 'global' shutter on its depth sensor. I do not have any further advice to offer about the frame queue size unfortunately, so if changing the RGB settings is not possible for your multicam system, or the changes do not make a difference to frame drop, then regretfully closing the issue may be the best course of action.
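The frame-interval arithmetic behind this kind of exposure advice can be sketched as follows: a manual exposure longer than one frame interval forces the sensor below its set FPS. The 100 µs unit here is an assumption inferred from this thread (156 units corresponding to 15.6 ms), not a documented constant:

```python
def frame_interval_ms(fps: float) -> float:
    """Time budget for a single frame at the given frame rate."""
    return 1000.0 / fps

def max_exposure_units(fps: float, unit_us: float = 100.0) -> int:
    """Largest manual-exposure setting that still fits one frame interval,
    assuming the exposure option counts in units of `unit_us` microseconds."""
    return int((frame_interval_ms(fps) * 1000.0) // unit_us)

# At 60 FPS the budget is ~16.67 ms, i.e. at most 166 units, so the
# 156-unit (15.6 ms) setting used in this thread only just fits.
```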
-
We have a recording setup where we record multiple hardware-synced D435s to files without processing framesets. Latency is not an issue; however, frame drop is. Our goal is to achieve stable and constant framerates. Following is a snippet of our code, and for each camera, we fork a separate process.
Not shown here, but we also disable auto-exposure and set it to 15.6 ms to get 60Hz; however, our effective framerate is always ~59.3Hz on the RGB sensor and ~59.8Hz (acceptable) on the depth.
In one of the discussions, I read that increasing frame queue size will increase latency but reduce frame drops. The performance is even better when setting the frame queue size separately for both streams. What would be the best way of doing it?
EDIT: Added Camera configuration
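On the per-stream question: RS2_OPTION_FRAMES_QUEUE_SIZE is a per-sensor option, so the stereo (depth) and RGB sensors can be tuned independently. A minimal sketch, assuming pyrealsense2 and an attached camera; the values are placeholders, not recommendations:

```python
import pyrealsense2 as rs

dev = rs.context().query_devices()[0]

# Set the frame queue size separately for the depth and color sensors.
dev.first_depth_sensor().set_option(rs.option.frames_queue_size, 16)
dev.first_color_sensor().set_option(rs.option.frames_queue_size, 32)
```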