Save snapshot from different stream than detection stream. #2199

Open
rsteckler opened this issue Nov 8, 2021 · 115 comments
Labels
enhancement (New feature or request), pinned

Comments

@rsteckler

Describe what you are trying to accomplish and why in non-technical terms
I want to be able to run detection on a lower, more efficient stream, but then use a higher resolution stream for the image passed over MQTT and for the saved snapshot. This will let me stay efficient on detection, but get high-res images via MQTT to other apps like HA, Double Take, and custom scripts.

Describe the solution you'd like
A "snapshots" role in the camera section would allow separation between the detection stream (which currently is used for snapshots) and the stream used to capture snapshots. Further, an "mqtt_snapshot" role could be specified for the images passed over MQTT on detection events.

Describe alternatives you've considered
Running the detection on the high res stream, but that's inefficient with many cameras.

rsteckler added the enhancement label Nov 8, 2021
@blakeblackshear (Owner)

This will require decoding the high resolution stream. The only alternative would be to try and decode a single frame from the high resolution stream on demand. This would be possible, but there is no way to ensure the frame selected from the high resolution stream would match the low resolution stream. It might be a few seconds later.

@rsteckler (Author)

That makes sense. Doing 24/7 decoding of all the high res streams doesn't sound like a great idea.

Is it possible to use whatever method you're already using for motion recording precapture to start decoding the high resolution stream 5 seconds before the event, then grab the correct frame?

@blakeblackshear (Owner)

It's not. Motion detection requires decoding the video stream too.

@rsteckler (Author)

Right - I get that motion detection requires decoding the stream. But my understanding is that Frigate can do motion detection on the low res stream, then save a recording of the high res stream when motion is detected. And that the recording can specify a "pre-recording" duration (that defaults to 5 seconds).

So it sounds like Frigate is already keeping a buffered stream of the high-res video so that it can somehow "go back in time" to handle the pre-recording?

@blakeblackshear (Owner)

The best I could do is go back and grab a frame from recording segments that is approximately the same time as the low resolution frame. With different frame rates and different resolutions, it is almost guaranteed to be different from the original frame from the low resolution stream.

@rsteckler (Author)

Makes sense. I'll go ahead and document my use case clearly, then please feel free to close as "decline to add". No hard feelings here as I don't have time to submit a PR. For those who come later:

Right now, I'm doing detection with Frigate on the high resolution stream because that's the stream used to store snapshots and publish to the MQTT bridge, and I want those snapshots to be high resolution both for face recognition and for seeing high-res images in email/Home Assistant.
Detecting on multiple 4MP streams is expensive, so ideally I could detect on the lower res substream but still get those high-res images for snapshots. This would obviously require Frigate to decode the low-res stream for detection, and ALSO decode the high-res stream for snapshots. This leads to two options:

  1. Always be decoding the high-res stream in case a snapshot is needed. This seems wasteful, but it's probably less expensive than doing actual detection on the high-res stream (although detection happens on the Coral and the decoding would happen on the CPU (or QuickSync), so it's not apples-to-apples).
  2. Detect on the low-res stream, then start decoding the high-res stream to pull snapshots and latest jpgs when an object is detected. There will be latency as ffmpeg starts decoding the high-res stream, and a different FPS could lead to different frames than those where detection actually happened. This is probably ok for the facial recognition use-case, because latest.jpg would be sent in high-res multiple times and the timing doesn't need to be perfect. It's less ideal for the "best frame" snapshot, which may not match the actual best frame.

The best workaround I've found is doing detection on the low-res stream in Frigate, then picking up the MQTT /events message with my own script, which turns around and grabs a native snapshot from the camera (for Dahua cameras, this is http://user:pass@ip_address/cgi-bin/snapshot.cgi?1 ). The frame timing isn't perfect, but it turns around quickly enough to be a stopgap that works well enough for my use cases.
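
For anyone wanting to script that workaround, here is a minimal sketch, assuming paho-mqtt and requests are installed; the broker address, credentials, and output path are placeholders, and Dahua cameras typically want digest auth:

import json
import paho.mqtt.client as mqtt
import requests
from requests.auth import HTTPDigestAuth

BROKER = "192.168.1.90"  # assumption: your MQTT broker address
SNAPSHOT_URL = "http://ip_address/cgi-bin/snapshot.cgi?1"  # Dahua native snapshot URL from above

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    if event.get("type") == "new":  # only react when a new event starts
        resp = requests.get(SNAPSHOT_URL, auth=HTTPDigestAuth("user", "pass"), timeout=5)
        resp.raise_for_status()
        # save the high-res frame keyed by the Frigate event ID
        with open(f"/snapshots/{event['after']['id']}.jpg", "wb") as f:
            f.write(resp.content)

client = mqtt.Client()  # paho-mqtt 1.x style callbacks
client.on_message = on_message
client.connect(BROKER)
client.subscribe("frigate/events")
client.loop_forever()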

@blakeblackshear (Owner)

blakeblackshear commented Nov 11, 2021

If this were to be implemented, it would be option 2. I would think of it as creating an endpoint that allows a single frame grab from the record stream. I can't guarantee that it would be the right frame, but it would be in the ballpark.
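
As a rough sketch of what such an on-demand grab could look like under the hood (not anything Frigate implements today), a one-shot frame grab just shells out to ffmpeg; the RTSP URL and output path here are hypothetical:

import subprocess

def grab_frame(rtsp_url: str, out_path: str) -> None:
    # Decode the high-res stream just long enough to emit a single JPEG.
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-rtsp_transport", "tcp",  # TCP is more reliable for one-shot grabs
            "-i", rtsp_url,
            "-frames:v", "1",
            out_path,
        ],
        check=True,
        timeout=15,
    )

grab_frame("rtsp://user:pass@camera/live", "/tmp/ondemand.jpg")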

@rsteckler (Author)

Closing to keep your issues list svelte

@lesourcil

lesourcil commented Apr 27, 2022

Was option 2 implemented ?

@NickM-27 (Sponsor Collaborator)

Was option 2 implemented ?

Not yet

@messnerdev

I use a single 4K stream for detect and record, but I set my detect resolution down to 720P. In this case does frigate have easy access to a decoded 4K frame for snapshot images? Or is resizing somehow done before decoding?

@NickM-27 (Sponsor Collaborator)

I use a single 4K stream for detect and record, but I set my detect resolution down to 720P. In this case does frigate have easy access to a decoded 4K frame for snapshot images? Or is resizing somehow done before decoding?

It will resize the stream for the detection process, so the detection logic and snapshots will be done on the 720p stream.

@bagobones

To add my two cents to this thread: I was disappointed with the snapshots, and one of the key things that bothers me is that on some cameras the aspect ratio is not even the same as the primary feed. This causes some odd behaviours if you have Home Assistant widgets switching between the last still from the 4:3 LQ feed and the live 16:9 HQ feed.

For bounding boxes and overlays I can totally see this as a major processing issue; however, I would really like a best-effort HQ clean option for several use cases where I think it would be the best option and not that sensitive to mismatched frames.

  1. HA camera object still frames. By default HA always shows a delayed still, and the Frigate HACS widget also has a still-while-loading option. In both cases things get strange when switching aspect ratios.
  2. Doorbell snapshots. I used to grab the latest still from the camera on button press. Someone is ALWAYS reaching out to the camera in that case; I would rather have the HQ version than the POTATO-quality detection stream.
  3. Snapshots for events already offer clean and unclean options. Having a third HQ clean option would be nice for notifications. For cameras with wider detection areas it is unlikely the detected object will get out of frame if the delay is under, say, a second and only a few frames off.

NickM-27 reopened this Sep 15, 2022
@NickM-27 (Sponsor Collaborator)

Still a valid issue

@ndbroadbent

ndbroadbent commented Nov 17, 2022

This would be really nice. I set up the Double Take add-on and CompreFace to run facial recognition on my Frigate snapshot images. The quality is really poor and not good enough for this, so I would like the snapshot images to come from the high quality stream.

The best I could do is go back and grab a frame from recording segments that is approximately the same time as the low resolution frame. With different frame rates and different resolutions, it is almost guaranteed to be different from the original frame from the low resolution stream.

I think this could be solved by adding a calibration option for +/- x milliseconds. This should be pretty consistent, and not too hard to figure out manually.
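
To illustrate the calibration idea (purely a sketch, not a Frigate feature): given the event timestamp, the start timestamp of the recording segment, and a user-tuned offset, you would seek into the segment and extract one frame. All names here are hypothetical:

import subprocess

def grab_calibrated_frame(segment_path, event_ts, segment_start_ts, calibration_ms, out_path):
    # Seek position inside the recording segment, shifted by the calibration offset.
    offset_s = (event_ts - segment_start_ts) + calibration_ms / 1000.0
    subprocess.run(
        ["ffmpeg", "-y", "-ss", f"{offset_s:.3f}", "-i", segment_path,
         "-frames:v", "1", out_path],
        check=True,
    )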

@eddyg

eddyg commented Nov 30, 2022

I searched (and found this issue) because I would also like to see snapshot.jpg be from the "record" stream (thumbnail.jpg could remain from the "detect" stream?) so consider this a +1.

To add my two cents to this thread: I was disappointed with the snapshots, and one of the key things that bothers me is that on some cameras the aspect ratio is not even the same as the primary feed. This causes some odd behaviours if you have Home Assistant widgets switching between the last still from the 4:3 LQ feed and the live 16:9 HQ feed.

FWIW, I worked around the mismatched aspect ratio for my cameras by using ffmpeg to scale the low-res detection stream to 16:9. This also seemed to improve accuracy, since the video is no longer being vertically stretched. (If there's a better way to do this, I'd be happy to update my config!)

  driveway:
    ffmpeg:
      inputs:
        - path: rtsp://front-cam/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://front-cam/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
      output_args:
        detect: '-vf scale=704:396 -f rawvideo -pix_fmt yuv420p'
        rtmp: '-aspect 704:396 -c copy -f flv'
    detect:
      width: 704
      height: 396
      fps: 7
      stationary:
        interval: 70

NickM-27 mentioned this issue Dec 7, 2022
@asaworld

I would like to see this too. The slight variation in images does not matter to me either. I would like to feed it into face and numberplate recognition. It would be nice to have an endpoint to grab a full resolution snapshot from. At the moment my choices are the rtmp stream, which is processed and scaled, or a snapshot from the camera directly. Am I missing any other options?

@lordratner

Chiming in to add my support. High resolution snapshots are almost more important than the recordings, depending on the use case. For automations, as an example. Would definitely like to see this improved.

@toddstar

toddstar commented Jan 5, 2023

Another +1 support for saving the image from higher res feed.

As a workaround for anyone looking for this now whose camera doesn't have a still image URL (so you can't use that method): my dirty workaround has been to create duplicate cameras that use the main stream for detect but have detect turned off by default, and then create some automations so that when the original camera stream finds an object (all_count goes above 1) it turns on detect on the duplicate camera and turns it back off when the count is 0. It does create some extra work for your hardware, but at least it gets an image output that's big enough to get a facial match for Double Take.
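
For reference, the same toggle can be done without HA by bridging Frigate's MQTT topics; a minimal sketch assuming paho-mqtt and hypothetical camera names low_cam/high_cam, using Frigate's standard frigate/<camera>/<object> count and frigate/<camera>/detect/set topics:

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    count = int(msg.payload)
    # Mirror the low-res camera's person count onto the high-res camera's detect switch.
    client.publish("frigate/high_cam/detect/set", "ON" if count > 0 else "OFF")

client = mqtt.Client()  # paho-mqtt 1.x style callbacks
client.on_message = on_message
client.connect("192.168.1.90")  # assumption: your MQTT broker address
client.subscribe("frigate/low_cam/person")  # Frigate publishes the tracked-object count here
client.loop_forever()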

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Jan 5, 2023

Another +1 support for saving the image from higher res feed.

As a workaround for anyone looking for this now whose camera doesn't have a still image URL (so you can't use that method): my dirty workaround has been to create duplicate cameras that use the main stream for detect but have detect turned off by default, and then create some automations so that when the original camera stream finds an object (all_count goes above 1) it turns on detect on the duplicate camera and turns it back off when the count is 0. It does create some extra work for your hardware, but at least it gets an image output that's big enough to get a facial match for Double Take.

If you're going to do all of that work, you might as well just use a higher resolution stream for detect. Whatever resolution you set for

detect:
  width:
  height:

will resize the detect stream to that. So for example, on my doorbell camera I just have the main stream (2560x1920) and set detect to quarter size, which is plenty for Double Take while also not being too much work:

detect:
  width: 1280
  height: 960

@toddstar

toddstar commented Jan 5, 2023

Another +1 support for saving the image from higher res feed.
As a workaround for anyone looking for this now whose camera doesn't have a still image URL (so you can't use that method): my dirty workaround has been to create duplicate cameras that use the main stream for detect but have detect turned off by default, and then create some automations so that when the original camera stream finds an object (all_count goes above 1) it turns on detect on the duplicate camera and turns it back off when the count is 0. It does create some extra work for your hardware, but at least it gets an image output that's big enough to get a facial match for Double Take.

If you're going to do all of that work, you might as well just use a higher resolution stream for detect. Whatever resolution you set for

detect:
  width:
  height:

will resize the detect stream to that. So for example, on my doorbell camera I just have the main stream (2560x1920) and set detect to quarter size, which is plenty for Double Take while also not being too much work:

detect:
  width: 1280
  height: 960

Probably not the best example/wording, as the camera I've been testing on just has person object detection rather than a group of objects, but the duplicate camera setup allows you to pick and choose which object types (really shouldn't have used all_count lol) trigger reprocessing on the better stream to get a better image. So for example a doorbell cam could process vehicles, bikes, animals, people, etc. all at say 640/480, and the automation to turn on detection on the higher res duplicate camera is only triggered by person going above 0.

Maybe I'm overthinking it and should just detect everything on a higher res, but I would have thought it's much easier on your hardware to detect on a really low res image by default and then call for the higher res as and when required.

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Jan 5, 2023

Maybe I'm overthinking it and should just detect everything on a higher res, but I would have thought it's much easier on your hardware to detect on a really low res image by default and then call for the higher res as and when required.

In general, yes, it is more work than without, but then again it just works without any complications, and it depends on your hardware. With Frigate 0.12 (currently in beta), using the hwaccel presets will use the GPU to scale the stream instead of the CPU, so even with this setup my CPU use is only 3% (process usage for a single core) for each camera (it used to be 50% process usage for a single core).

@simondsmason

So is the preferred solution to perform detection on the main higher resolution stream and reduce the detect size? Does the reduction in detect size proportionally reduce the load on the system? Thanks

@thewan056

thewan056 commented Oct 5, 2023

What scaling mode/method/algorithm (sorry, not sure of the correct word to use here) does Frigate use? Maybe you could allow the user to choose, so that we can figure out which is more suited to our use cases, like for those that prefer quality over resource/power consumption for example.
For some users, tweaking the scaler might be a good enough compromise.

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Oct 5, 2023

you might as well just use a higher resolution stream for detect. Whatever resolution you set for

detect:
  width:
  height:

will resize the detect stream to that. So for example, on my doorbell camera I just have the main stream (2560x1920) and set detect to quarter size, which is plenty for Double Take while also not being too much work.

In this case, what will be the source image for the snapshot and extracted thumbnail - the original stream's frame or the downsampled one?

it would be a snapshot of size detect.width x detect.height

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Oct 5, 2023

What scaling mode/method/algorithm (sorry, not sure of the correct word to use here) does Frigate use? Maybe you could allow the user to choose, so that we can figure out which is more suited to our use cases, like for those that prefer quality over resource/power consumption for example. For some users, tweaking the scaler might be a good enough compromise.

this is all built in to ffmpeg, and depends on the hardware decoder that is used

@dopeytree

dopeytree commented Oct 5, 2023

I ended up just increasing the bitrate of the detection stream. Then making sure snapshots are also at 100 quality. Works great with existing setup.

@cocasema

cocasema commented Oct 5, 2023

you might as well just use a higher resolution stream for detect. Whatever resolution you set for

detect:
  width:
  height:

will resize the detect stream to that. So for example, on my doorbell camera I just have the main stream (2560x1920) and set detect to quarter size, which is plenty for Double Take while also not being too much work.

In this case, what will be the source image for the snapshot and extracted thumbnail - the original stream's frame or the downsampled one?

it would be a snapshot of size detect.width x detect.height

Would this be hard to implement (keeping and saving the original frame as a snapshot)?
I'm not familiar with the codebase yet, but this sounds like one of the viable solutions.

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Oct 5, 2023

Would this be hard to implement (keeping and saving the original frame as a snapshot)?
I'm not familiar with the codebase yet, but this sounds like one of the viable solutions.

It couldn't be done from the same ffmpeg process; it would need to be a separate process, which has the issues raised above.

@idontcare99999

I'm also interested in a variation of this solution.

I've built a flow in Node-RED that's activated by frigate's detection of a car, which then pulls a snapshot directly from the camera and posts it to CodeProject.AI to read the plate. Would be amazing for frigate to have an integration so the snapshots and clips would be searchable by the plate that's returned.

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Oct 8, 2023

I'm also interested in a variation of this solution.

I've built a flow in Node-RED that's activated by frigate's detection of a car, which then pulls a snapshot directly from the camera and posts it to CodeProject.AI to read the plate. Would be amazing for frigate to have an integration so the snapshots and clips would be searchable by the plate that's returned.

this can already be done; I have a similar setup in HA. Based on the plate, the sub label is set on the car event

@idontcare99999

I'm also interested in a variation of this solution.
I've built a flow in Node-RED that's activated by frigate's detection of a car, which then pulls a snapshot directly from the camera and posts it to CodeProject.AI to read the plate. Would be amazing for frigate to have an integration so the snapshots and clips would be searchable by the plate that's returned.

this can already be done; I have a similar setup in HA. Based on the plate, the sub label is set on the car event

Well now I don't feel so special.... mind pointing me toward some docs for that implementation?

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Oct 8, 2023

https://docs.frigate.video/integrations/api#post-apieventsidsub_label is the relevant API

I am planning on writing up a guide at some point once I get some more testing and refining in for it
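
For anyone wiring this up before the guide lands, the call is a simple POST; a minimal sketch with requests, where the Frigate URL, event ID, and payload field are placeholders (see the linked API docs for the exact shape):

import requests

FRIGATE_URL = "http://192.168.1.90:5000"  # assumption: your Frigate address
event_id = "1696700000.123456-abc123"     # hypothetical event ID from frigate/events
resp = requests.post(
    f"{FRIGATE_URL}/api/events/{event_id}/sub_label",
    json={"subLabel": "ABC123"},  # the recognized plate text
    timeout=5,
)
resp.raise_for_status()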

@idontcare99999

Nice!

@NickM-27 (Sponsor Collaborator)

Guide is up #8165

@sangdrax8

Found this feature request and I would love a Frigate way of handling this. During setup I just tuned down most of my sub-streams to lower quality and fps, and my snapshots are pretty bad. I might just put detect on the primary stream for now and see if my hardware cares. However, I would love to add a snapshot role to the record stream and get better quality even if it isn't the SAME frame. Even better if it is run through the Coral device with labels added around it.

@MrMewCakes

Hi @NickM-27, I'm trying to follow the discussion around ONVIF but I can't seem to get it to work.

I would appreciate your expert eye on this to let me know what I'm doing wrong.
I'm running everything in Docker containers on Unraid.
192.168.1.52 is the IP of my cameras on an NVR
192.168.1.90 is the IP of my Unraid server

# Double Take
# Learn more at https://github.com/jakowenko/double-take/#configuration
go2rtc:
  streams:
    snapshot_test: onvif://<user>:<pass>@192.168.1.52:80?subtype=000&snapshot
  
mqtt:
  host: 192.168.1.90:1883
  username: <mqtt_user>
  password: <mqtt_pass>
  
frigate:
  url: http://192.168.1.90:30058
  update_sub_labels: true
  labels:
    - person
  events:
    snapshot_test:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test
  
  
topics:
  # mqtt topic for frigate message subscription
  frigate: frigate/events
  #  mqtt topic for home assistant discovery subscription
  homeassistant: homeassistant
  # mqtt topic where matches are published by name
  matches: double-take/matches
  # mqtt topic where matches are published by camera name
  cameras: double-take/cameras

detectors:
#  compreface:
 #   url: http://192.168.1.90:15000
  deepstack:
    url: http://192.168.1.90:5000

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Feb 8, 2024

Did you put the go2rtc config in the doubletake config file? That's supposed to be part of the frigate config.

@MrMewCakes

Okay so this is in my frigate config now:

go2rtc:
  streams:
    snapshot_test1: onvif://<user>:<pass>@192.168.1.52:80?subtype=000&snapshot
    snapshot_test2: onvif://<user>:<pass>@192.168.1.52:80?subtype=001&snapshot
    snapshot_test3: onvif://<user>:<pass>@192.168.1.52:80?subtype=010&snapshot
    snapshot_test4: onvif://<user>:<pass>@192.168.1.52:80?subtype=011&snapshot
    snapshot_test5: onvif://<user>:<pass>@192.168.1.52:80?subtype=020&snapshot
    snapshot_test6: onvif://<user>:<pass>@192.168.1.52:80?subtype=021&snapshot
    snapshot_test7: onvif://<user>:<pass>@192.168.1.52:80?subtype=030&snapshot
    snapshot_test8: onvif://<user>:<pass>@192.168.1.52:80?subtype=031&snapshot
    snapshot_test9: onvif://<user>:<pass>@192.168.1.52:80?subtype=040&snapshot
    snapshot_test10: onvif://<user>:<pass>@192.168.1.52:80?subtype=041&snapshot

and this is my Double Take config. Does that seem right?

# Double Take
# Learn more at https://github.com/jakowenko/double-take/#configuration

mqtt:
  host: 192.168.1.90:1883
  username: <mqtt_user>
  password: <mqtt_pass>
  
frigate:
  url: http://192.168.1.90:30058
  update_sub_labels: true
  labels:
    - person
  events:
    snapshot_test:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test
    snapshot_test1:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test1
    snapshot_test2:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test2
    snapshot_test3:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test3
    snapshot_test4:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test4
    snapshot_test5:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test5
    snapshot_test6:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test6
    snapshot_test7:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test7
    snapshot_test8:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test8
    snapshot_test9:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test9
    snapshot_test10:
      image:
        height: 1440
        latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test10
        
      
  
topics:
  # mqtt topic for frigate message subscription
  frigate: frigate/events
  #  mqtt topic for home assistant discovery subscription
  homeassistant: homeassistant
  # mqtt topic where matches are published by name
  matches: double-take/matches
  # mqtt topic where matches are published by camera name
  cameras: double-take/cameras

detectors:
#  compreface:
 #   url: http://192.168.1.90:15000
  deepstack:
    url: http://192.168.1.90:5000

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Feb 8, 2024

seems like it should be good, but keep in mind you haven't disabled the other image types, so Double Take will still pull from snapshots and mqtt as well

@MrMewCakes

MrMewCakes commented Feb 9, 2024

Okay, so I've removed the MQTT config, but I only get events from "lounge_camera" while I'm trying to pull the snapshots from "snapshot_test1".

I can confirm from my go2rtc dashboard that "snapshot_test1" and "test" both indeed work and stream the same footage (but in a higher resolution) as "lounge_camera".

Do I need to add anything else to my configs, or have I done something wrong?
Should "snapshot_test1" and "test" be added under "cameras" in the Frigate config?

go2rtc:
  streams:
    lounge_camera:
    - ffmpeg:http://192.168.1.52/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=<user>&password=<password>
    snapshot_test1: onvif://<user>:<password>@192.168.1.52:80?subtype=000&snapshot
    test: onvif://<user>:<password>@192.168.1.52:80?subtype=000

cameras:
   lounge_camera: 
    ffmpeg:
      inputs:
      - path: rtsp://127.0.0.1:8554/lounge_camera
        input_args: preset-rtsp-restream
        roles:
        - detect
    objects:
      track:
      - person
      - dog
      filters:
        person:
          min_score: 0.82
    snapshots:
      enabled: true
      bounding_box: true
    motion:
      mask:
      - 263,0,259,45,635,45,635,0
    mqtt:
      timestamp: false
      bounding_box: false
      crop: true
      quality: 100
      height: 500
    ui:
      order: 3

# Double Take
# Learn more at https://github.com/jakowenko/double-take/#configuration
  
frigate:
  url: http://192.168.1.90:30058
  update_sub_labels: true
  labels:
    - person
  cameras:
     - snapshot_test1
     - lounge_camera
     - test

      
      
detectors:
#  compreface:
 #   url: http://192.168.1.90:15000
  deepstack:
    url: http://192.168.1.90:5000

@AussiSG

AussiSG commented Mar 13, 2024

@NickM-27, got a question about the ONVIF snapshots. I have a camera added with ONVIF as mentioned above.
By exposing port 1984 in Docker I already have the following snapshot URL working/available.

http://192.168.0.12:1984/api/frame.jpeg?src=doorbell

With the below config:

go2rtc:
  streams:
    doorbell: 
      - rtsp://USER:PW@@192.168.0.21:554/h264Preview_01_main
      - ffmpeg:rtsp://USER:PW@@192.168.0.21:554/h264Preview_01_sub#audio=pcm#audio=volume

What is the reason/advantage of adding an extra stream in Frigate under go2rtc like the following:

go2rtc:
  streams:
    doorbell: 
      - rtsp://USER:PW@@192.168.0.21:554/h264Preview_01_main
      - ffmpeg:rtsp://USER:PW@@192.168.0.21:554/h264Preview_01_sub#audio=pcm#audio=volume
    doorbell_snapshot:
      - onvif://USER:PW!2@@192.168.0.21:554?subtype=MediaProfile00000&snapshot

When opening both snapshot URLs (which update every second?), they look the same to me.

@NickM-27 (Sponsor Collaborator)

NickM-27 commented Mar 13, 2024

The advantage is that behind the scenes, with your config, go2rtc is using ffmpeg to pull a keyframe and encode it as JPEG. Using ONVIF just pulls the image directly from the camera, so it is less work.
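
Either way, the snapshot comes out of the same go2rtc api/frame.jpeg endpoint shown earlier in the thread; fetching it from a script is just an HTTP GET (host and stream name taken from the configs above):

import requests

resp = requests.get(
    "http://192.168.0.12:1984/api/frame.jpeg",
    params={"src": "doorbell_snapshot"},  # go2rtc stream name
    timeout=5,
)
resp.raise_for_status()
with open("doorbell.jpg", "wb") as f:
    f.write(resp.content)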

@AussiSG

AussiSG commented Mar 16, 2024

seems like it should be good, but keep in mind you haven't disabled the other image types, so Double Take will still pull from snapshots and mqtt as well

You mentioned disabling the other image types in this case.
I've set up Double Take as follows and still receive (low res) MQTT detector images...

frigate:
  url: http://192.168.0.12:5000
  update_sub_labels: true
  labels:
    - person

  events:
    # doorbell camera 
    doorbell:
      # mqtt disabled
      mqtt: false
      # only snapshot url to be used
      image:
        height: 1440
        latest: http://192.168.0.12:1984/api/frame.jpeg?src=doorbell_snapshot
    
    # front1 camera
    front1:
      # mqtt disabled for processing
      mqtt: false
      # only snapshot url to be used
      image:
        height: 1440
        latest: http://192.168.0.12:1984/api/frame.jpeg?src=front1

@NickM-27 (Sponsor Collaborator)

The only type that uses this is latest. Snapshot will also pull from the detect stream

@AussiSG

AussiSG commented Mar 16, 2024

I like your reply :)

so adding the following would do the trick:

    snapshot: http://192.168.0.12:1984/api/frame.jpeg?src=front1

@NickM-27 (Sponsor Collaborator)

That option doesn't exist in doubletake config.

@AussiSG

AussiSG commented Mar 16, 2024

That option doesn't exist in doubletake config.

hence why I liked your reply.
So what would the config look like? Got an example?

@NickM-27 (Sponsor Collaborator)

you'd set snapshots to 0 in the doubletake config so it only uses latest to check for faces

@AussiSG

AussiSG commented Mar 18, 2024

Last edit :) :) 👍

Apparently enabling HTTP in the Reolink setup/environment did the trick.

Hopefully tomorrow I will have some triggers again in Double Take.
Currently I had 0; not even any unknown matches were saved.

Updated the config in Frigate as follows, and now the snapshot stream is functional.

go2rtc:
  streams:
    front1: # <- for RTSP streams
      - rtsp://192.168.0.15:554/ # <- stream which supports video & aac audio
    front1_snapshot:
      - onvif://192.168.0.15:554?subtype=MediaProfile00000&snapshot    
    doorbell: 
      - rtsp://USER:PASS@@192.168.0.21:554/h264Preview_01_main
      - ffmpeg:rtsp://USER:PASS@@192.168.0.21:554/h264Preview_01_sub#audio=pcm#audio=volume
    doorbell_sub:  
      - rtsp://USER:PASS@@192.168.0.21:554/h264Preview_01_sub
    doorbell_snapshot:
      - onvif://USER:PASS@@192.168.0.21:8000?subtype=MediaProfile00000&snapshot

@shjips

shjips commented Mar 18, 2024

Last edit :) :) 👍

Apparently enabling HTTP in the Reolink setup/environment did the trick.

Hopefully tomorrow I will have some triggers again in Double Take. Currently I had 0; not even any unknown matches were saved.

Updated the config in Frigate as follows, and now the snapshot stream is functional.

go2rtc:
  streams:
    front1: # <- for RTSP streams
      - rtsp://192.168.0.15:554/ # <- stream which supports video & aac audio
    front1_snapshot:
      - onvif://192.168.0.15:554?subtype=MediaProfile00000&snapshot    
    doorbell: 
      - rtsp://USER:PASS@@192.168.0.21:554/h264Preview_01_main
      - ffmpeg:rtsp://USER:PASS@@192.168.0.21:554/h264Preview_01_sub#audio=pcm#audio=volume
    doorbell_sub:  
      - rtsp://USER:PASS@@192.168.0.21:554/h264Preview_01_sub
    doorbell_snapshot:
      - onvif://USER:PASS@@192.168.0.21:8000?subtype=MediaProfile00000&snapshot

Can you share your whole frigate + double-take conf? Using reolink cams and trying to get better snaps without killing my cpu.

@NickM-27 (Sponsor Collaborator)

you need to set mqtt: false in the attempts section as well

@AussiSG

AussiSG commented Mar 19, 2024

Thx for the confirmation Nick 👍. The first problem must have been that the snapshot URLs were not working properly.

@shjips:
Please find the following links to my config files. I must say that I have not got the two-way audio working over Frigate with the Reolink doorbell yet. I do get audio sounds, but it seems that I can't send audio to the doorbell. Maybe @NickM-27 you can have a little peek at my Frigate config as well? ❤️
Maybe I've configured the restream part incorrectly?

Frigate configuration:
https://pastebin.com/v0an1cni

Double-Take configuration:
https://pastebin.com/LGrAta2j
