Save snapshot from different stream than detection stream. #2199
This will require decoding the high resolution stream. The only alternative would be to try and decode a single frame from the high resolution stream on demand. This would be possible, but there is no way to ensure the frame selected from the high resolution stream would match the low resolution stream. It might be a few seconds later.
That makes sense. Doing 24/7 decoding of all the high res streams doesn't sound like a great idea. Is it possible to use whatever method you're already using for motion recording precapture to start decoding the high resolution stream 5 seconds before the event, then grab the correct frame?
It's not. Motion detection requires decoding the video stream too.
Right - I get that motion detection requires decoding the stream. But my understanding is that Frigate can do motion detection on the low res stream, then save a recording of the high res stream when motion is detected. And that the recording can specify a "pre-recording" duration (that defaults to 5 seconds). So it sounds like Frigate is already somehow keeping a buffered stream of the high-res video so that it can somehow "go back in time" to handle the pre-recording?
The best I could do is go back and grab a frame from recording segments that is approximately the same time as the low resolution frame. With different frame rates and different resolutions, it is almost guaranteed to be different from the original frame from the low resolution stream.
Makes sense. I'll go ahead and document my use case clearly, then please feel free to close as "decline to add". No hard feelings here as I don't have time to submit a PR. For those who come later: right now, I'm doing detection with Frigate on the high resolution stream of the video because that's the stream used to store snapshots and publish to the MQTT bridge, and I want those snapshots to be high resolution for both face recognition and seeing high-res images in email/home assistant.
The best workaround I've found is by doing detection on the low-res stream in Frigate, then picking up the MQTT /events message with my own script. This turns around and grabs a native screenshot from the camera (for Dahua cameras, this is …)
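For anyone looking for a starting point, here is a minimal sketch of that kind of script, assuming paho-mqtt and requests are available and that the camera exposes the common Dahua CGI snapshot path (the exact URL was cut off above, so treat it as an assumption and check your camera's docs):

```python
# Minimal sketch: listen for Frigate's MQTT events, then pull a native
# full-resolution snapshot straight from the camera.
import json

import paho.mqtt.client as mqtt
import requests
from requests.auth import HTTPDigestAuth

# Assumed Dahua-style snapshot endpoint; camera IP and credentials are placeholders.
SNAPSHOT_URL = "http://192.168.1.50/cgi-bin/snapshot.cgi"

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # Frigate publishes "new"/"update"/"end" events on the frigate/events topic.
    if event.get("type") == "new" and event["after"]["label"] == "person":
        resp = requests.get(SNAPSHOT_URL, auth=HTTPDigestAuth("admin", "password"), timeout=5)
        resp.raise_for_status()
        with open(f"/tmp/{event['after']['id']}.jpg", "wb") as f:
            f.write(resp.content)

client = mqtt.Client()  # paho-mqtt 1.x constructor
client.on_message = on_message
client.connect("mqtt-broker.local")  # placeholder broker hostname
client.subscribe("frigate/events")
client.loop_forever()
```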
If this were to be implemented, it would be option 2. I would think of it as creating an endpoint that allows a single frame grab from the record stream. I can't guarantee that it would be the right frame, but it would be in the ballpark.
Closing to keep your issues list svelte
Was option 2 implemented?
Not yet
I use a single 4K stream for detect and record, but I set my detect resolution down to 720P. In this case does frigate have easy access to a decoded 4K frame for snapshot images? Or is resizing somehow done before decoding?
It will resize the stream for the detection process, so the detection logic and snapshots will be done on the 720p stream.
To add my two cents to this thread, as I was disappointed with the snapshots: one of the key things that bothers me is that on some cameras the aspect ratio is not even the same as the primary feed. This causes some odd behaviours if you have Home Assistant widgets switching between the last still from the LQ feed (4:3) and then back to the live HQ feed (16:9). For bounding boxes and overlays I can totally see this as a major processing issue, however I would really like a best-effort HQ clean option for several use cases, where I think it would be the best option and not that sensitive to the frames not matching.
Still a valid issue
This would be really nice. I set up the Double Take add-on and CompreFace to run facial recognition on my Frigate snapshot images. The quality is really poor and not good enough for this, so I would like to take snapshots from the high quality stream.
I think this could be solved by adding a calibration option for +/- x milliseconds. This should be pretty consistent, and not too hard to figure out manually.
I searched (and found this issue) because I would also like to see this.
FWIW, I worked around the mismatched aspect ratio for my cameras by using:

```yaml
driveway:
  ffmpeg:
    inputs:
      - path: rtsp://front-cam/cam/realmonitor?channel=1&subtype=1
        roles:
          - detect
          - rtmp
      - path: rtsp://front-cam/cam/realmonitor?channel=1&subtype=0
        roles:
          - record
    output_args:
      detect: '-vf scale=704:396 -f rawvideo -pix_fmt yuv420p'
      rtmp: '-aspect 704:396 -c copy -f flv'
  detect:
    width: 704
    height: 396
    fps: 7
  stationary:
    interval: 70
```
I would like to see this too. The slight variation in images does not matter to me either. I would like to feed it into face and number plate recognition. It would be nice to have an endpoint to grab a full resolution snapshot from. At the moment my choices are rtmp, which is processed and scaled, or the snapshot from the camera directly. Am I missing any other options?
Chiming in to add my support. High resolution snapshots are almost more important than the recordings, depending on the use case. For automations, as an example. Would definitely like to see this improved.
Another +1 in support of saving the image from the higher res feed. As a workaround for anyone looking for this now whose camera doesn't have a still image URL (so can't use that method): my dirty workaround has been to create duplicate cameras that use the main stream for detect but have detect turned off by default, and then create some automations so that when the original camera stream finds an object (all_count goes above 1) it turns on detect on the duplicate camera, and turns it back off when the count is 0. It does create some extra work for your hardware, but at least it gets an image output that's big enough to get a facial match for Double Take.
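A minimal sketch of that toggling logic, expressed as a standalone script against Frigate's documented MQTT topics instead of HA automations (camera names and the broker host are hypothetical):

```python
# Turn detect on for a high-res duplicate camera while the low-res camera
# sees a person, and back off when the count returns to 0.
import paho.mqtt.client as mqtt

LOW_RES_CAM = "front_lowres"    # hypothetical camera doing cheap detection
HIGH_RES_CAM = "front_highres"  # hypothetical duplicate with detect off by default

def on_message(client, userdata, msg):
    # Frigate publishes per-object counts on frigate/<camera>/<object> and
    # accepts ON/OFF on frigate/<camera>/detect/set.
    count = int(msg.payload)
    client.publish(f"frigate/{HIGH_RES_CAM}/detect/set", "ON" if count > 0 else "OFF")

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt-broker.local")  # placeholder broker hostname
client.subscribe(f"frigate/{LOW_RES_CAM}/person")
client.loop_forever()
```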
If you're going to do all of that work, you might as well just use a higher resolution stream for detect. Whatever resolution you set for detect (width/height) will resize the detect stream to that. For example, on my doorbell camera I just use my main stream (2560x1920) and set detect to quarter size, which is plenty for Double Take while also not being too much work:

```yaml
detect:
  width: 1280
  height: 960
```
Probably not the best example/wording, as the camera I've been testing on just has person object detection rather than a group of objects, but the duplicate camera setup allows you to pick and choose which object type (really shouldn't have used all_count lol) you reprocess the better stream for to get a better image. So for example a doorbell cam could process vehicles, bikes, animals, people, etc. all at say 640x480, and the automation to turn on detection on the higher res duplicate camera is only triggered by person going above 0. Maybe I'm overthinking it and should just detect everything on a higher res, but I would have thought it's much easier on your hardware to detect on a really low res image by default and then call for the higher res as and when required.
In general, yes, it is more work than without, but then again it just works without any complications and depends on your hardware. With Frigate 0.12 (currently in beta), using the hwaccel presets will use the GPU to scale the stream instead of the CPU, so even with this setup my CPU use is only 3% process usage for a single core for each camera (it used to be 50% process usage for a single core).
So is the preferred solution to perform detection on the main higher resolution stream and reduce the detect size? Does the reduction in detect size proportionally reduce the load on the system? Thanks
it would be a snapshot of size …
this is all built into ffmpeg, and depends on the hardware decoder that is used
I ended up just increasing the bitrate of the detection stream. Then making sure snapshots are also at 100 quality. Works great with existing setup.
Would this be hard to implement (keep & save the original frame as a snapshot)?
It couldn't be done from the same ffmpeg process; it would need to be a separate process, which has the issues raised above.
I'm also interested in a variation of this solution. I've built a flow in Node-RED that's activated by Frigate's detection of a car to then pull a snapshot directly from the camera and post it to CodeProject.AI to read the plate. It would be amazing for Frigate to have an integration so the snapshots and clips would be searchable by the returned plate.
This can already be done; I have a similar setup in HA. Based on the plate, the sub label is set on the car event.
Well now I don't feel so special.... mind pointing me toward some docs for that implementation?
https://docs.frigate.video/integrations/api#post-apieventsidsub_label is the relevant API. I am planning on writing up a guide at some point once I get some more testing and refining in for it.
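For reference, a minimal sketch of calling that endpoint from a script; the Frigate host, event id, and plate value are placeholders, and the JSON body shape follows the linked API docs:

```python
# Attach a recognized plate to a Frigate event as its sub label.
import requests

FRIGATE = "http://frigate.local:5000"  # placeholder Frigate host
event_id = "1678901234.56789-abcdef"   # event id taken from the frigate/events payload
plate = "ABC1234"                      # value returned by the plate reader

resp = requests.post(
    f"{FRIGATE}/api/events/{event_id}/sub_label",
    json={"subLabel": plate},
)
resp.raise_for_status()
```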
Nice!
Guide is up #8165
Found this feature request and I would love a Frigate way of handling this. During setup I just tuned down most of my sub-streams to lower quality and fps, and my snapshots are pretty bad. I might just put detect on the primary stream for now and see if my hardware cares. However, I would love to add a snapshot role to the record stream and get better quality even if it isn't the SAME frame. Even better if it is run through the Coral device with labels added around it.
Hi @NickM-27, I am trying to follow the discussion around onvif but I can't seem to get it to work. I would appreciate your expert eye on this to let me know what I'm doing wrong.
Did you put the go2rtc config in the doubletake config file? That's supposed to be part of the frigate config.
Okay, so this is in my frigate config now:
and this is my deepstack config. Does that seem right?
Seems like it should be good, but keep in mind you haven't disabled the other image types, so doubletake will still pull from snapshots and mqtt as well.
Okay so I've removed the MQTT config but I only get events from "lounge_camera", but am trying to pull the snapshots from "snapshot_test1". I can confirm from my go2rtc dashboard that "snapshot_test1" and "test" both indeed work and stream the same footage (but in a higher resolution) as "lounge_camera". Do I need to add anything else to my configs, or have I done something wrong?
@NickM-27, got a question about the onvif snapshots. I have a camera added with ONVIF as mentioned above.
With the below config:
What is the reason/advantage for adding an extra stream in frigate under go2rtc like the following:
When opening both snapshot URLs (which update every second?), they look the same to me.
The advantage is that behind the scenes, with your config, go2rtc is using ffmpeg to pull a keyframe and encode it as jpeg. Using onvif just pulls the image directly from the camera, so it is less work.
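To make the comparison concrete, this is roughly what grabbing a frame through go2rtc looks like from a script. The /api/frame.jpeg path and default port 1984 are my assumptions about go2rtc's API, so verify them against your version:

```python
# Fetch a single high-res JPEG from go2rtc's frame endpoint.
import requests

GO2RTC = "http://frigate.local:1984"  # placeholder go2rtc host and assumed default port

resp = requests.get(f"{GO2RTC}/api/frame.jpeg", params={"src": "snapshot_test1"}, timeout=10)
resp.raise_for_status()
with open("/tmp/high_res_frame.jpg", "wb") as f:
    f.write(resp.content)
```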
You mentioned disabling the other image types in this case.
The only type that uses this is latest. Snapshots will still pull from the detect stream.
I like your reply :) so adding the following would seal the deal:
That option doesn't exist in the doubletake config.
Hence why I liked your reply.
You'd set snapshots to 0 in the doubletake config so it only uses latest to check for faces.
Last edit :) :) 👍 Apparently enabling HTTP in the reolink setup/environment did the trick... Hopefully tomorrow I will have some triggers again in Double Take. Updated the config in Frigate as follows, and now the snapshot stream is functional.
Can you share your whole frigate + double-take conf? Using reolink cams and trying to get better snaps without killing my CPU.
you need to set …
Thx for the confirmation Nick 👍. The first problem must have been that the snapshot URLs were not working properly...
@shjips: Frigate configuration:
Double-Take configuration:
Your configs worked beautifully to get perfect snapshots for double-take to process; I finally got my first match without the detection area being too small! One problem I still have is that sub labels are still not being updated in frigate for the event. Is this working for you?
Describe what you are trying to accomplish and why in non-technical terms
I want to be able to run detection on a lower, more efficient stream, but then use a higher resolution stream for the MQTT image and for saving the snapshot. This will let me stay efficient on detection, but get high-res snapshots via MQTT to other apps like HA, Double Take, and custom scripts.
Describe the solution you'd like
A "snapshots" role in the camera section would allow separation between the detection stream (which currently is used for snapshots) and the stream used to capture snapshots. Further, an "mqtt_snapshot" role could be specified for the images passed over MQTT on detection events.
Describe alternatives you've considered
Running the detection on the high res stream, but that's inefficient with many cameras.