Improve pre_capture so it does not drop frames #1494
Replies: 4 comments
-
I forked the motion project and am going to see if I can attempt either of these proposed changes. Comments / suggestions / critiques are welcome!
-
@hmblprogrammer I don't like option 1. It's going to introduce a huge delay that is dependent on user configuration. On top of that, you have to make sure you flush the image ring buffer prior to draining the encoder. A slightly different option 2 sounds better: create a separate thread that pulls frames out of the image ring buffer and encodes them as fast as it can, so we basically decouple the reading and processing of images from encoding.
-
That makes sense to me. It also solves a problem I was thinking about with the separate-thread idea in my original proposal: the motion detection itself might simply not be able to keep up with some (high framerate) sources, and in that case the buffer would grow without bound. I like your proposal of a separate thread that pulls frames out of the image ring buffer and encodes them as fast as it can handle. I can take a stab at building that on a fork and will update if I make any useful progress.
-
For RTSP cameras, you can also use the
-
I think the config option `pre_capture` has great value. I find that to avoid false positives I need high threshold values, so that by the time motion is detected the beginning of the event is lost. Example: people walking up to the front door — by the time their image is large enough to trigger the threshold, most of their approach is not in the video; the first frame is seconds too late.

The documentation for `pre_capture` describes this buffering. It appears to me that it refers to the `process_image_ring` function called from `mlp_actions` at motion.c:2577, which processes all images in the buffer before returning to the next iteration of the main motion detection loop, and this appears to be the cause of dropped frames when high values for `pre_capture` are set.

This seems like it could be greatly optimized in one of two ways, adding a lot more power to the `pre_capture` option:

1. The image ring shouldn't be processed all at once. Unless I am missing something, a better design would be to always keep at least `pre_capture` frames inside the ring buffer. When motion is detected, start sending frames from the head of the buffer to ffmpeg, but don't send them all (don't "catch up") by flushing the ring buffer. This means there would be no long delay before we can fetch the next frame from the RTSP / USB video source. In this pattern, the ffmpeg / extpipe output would always be "delayed" by `pre_capture` frames, so when the motion ends we would proceed to process the remaining frames from the buffer in the loop while waiting for further motion.

2. Alternatively, motion could use a separate thread for fetching frames from the video source; this thread would only fetch frames (storing them into shared memory) and would do nothing else. A second thread would do what the loop does now, but pulling new frames from the shared-memory buffer rather than from the video source directly. If the second thread runs with a lower scheduling priority, then on a multi-core system the first thread will continue to fill the buffer with new frames even while the motion detection thread is busy processing motion, writing ffmpeg / ext_pipe output, etc.

Option #2 might help improve frame-drop scenarios overall, but may be a more complex change.