Autotracker: Basic zooming and moves with velocity estimation #7713
Conversation
Is there a reason why a user would want to use absolute zoom if relative zoom is available?
Relative zoom seems to be implemented differently depending on the camera. The ONVIF spec says:
That last line is the hangup. The Dahua/Amcrest cams I tested use a generic zoom translation space. When you send a positive zoom value along with pan/tilt values, the camera zooms in proportionally to the area of the frame, with the centroid determined by the relative pan/tilt. That makes it a great fit for our application, since we already have the bounding box dimensions for tracked objects: grab the object's centroid and bounding box, do a few quick calculations, and send those values directly to the camera. The result is a single smooth pan/tilt/zoom movement toward the object. I'm just not sure other brands implement relative zoom the same way, because the ONVIF spec makes no assumption about how the generic zoom range is mapped. Hence the default to absolute zooming.
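The "few quick calculations" above amount to normalizing the detection's centroid offset and size into ONVIF's generic translation space. A minimal sketch of that mapping (function name, `target_area` parameter, and clamping behavior are illustrative assumptions, not Frigate's actual code):

```python
def relative_move_from_bbox(frame_w, frame_h, bbox, target_area=0.3):
    """Derive ONVIF RelativeMove pan/tilt/zoom values from a bounding box.

    bbox is (x1, y1, x2, y2) in pixels. The generic ONVIF translation
    space is normalized to [-1, 1] for pan, tilt, and zoom.
    """
    x1, y1, x2, y2 = bbox
    cx = (x1 + x2) / 2
    cy = (y1 + y2) / 2
    # Pan/tilt: normalized offset of the centroid from the frame center.
    pan = (cx - frame_w / 2) / (frame_w / 2)
    tilt = (frame_h / 2 - cy) / (frame_h / 2)  # positive tilt moves up
    # Zoom: positive when the object fills less of the frame than desired,
    # so the camera zooms in; negative zooms out. Clamp to [-1, 1].
    area_frac = ((x2 - x1) * (y2 - y1)) / (frame_w * frame_h)
    zoom = max(-1.0, min(1.0, target_area - area_frac))
    return pan, tilt, zoom
```

On a camera that honors the generic spaces, these three values can be sent in a single `RelativeMove` request, producing the combined pan/tilt/zoom motion described above.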
Thanks, I think we'll want some kind of detailed explanation in the docs for that. I also think instead of two separate fields for zoom it would be preferable to have one field with an enum of
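A hypothetical shape for that single enum field in the camera config (the field name, placement, and values here are illustrative suggestions, not an agreed-upon schema):

```yaml
cameras:
  front_yard:
    onvif:
      autotracking:
        # one of: disabled, absolute, relative
        zooming: absolute
```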
This PR adds a few features to the autotracker:

- Basic zooming: there is plenty of room for improvement, but this implementation uses ONVIF absolute zooming to conservatively zoom in and out on an autotracked object. Relative zooming is also supported on some Dahua and Amcrest PTZs (which is what I had available to test); relative zooming runs concurrently with a pan/tilt movement.
- Movements based on Norfair's estimate of the object's velocity: faster-moving objects now cause larger PTZ movements so that the object is still in the frame after a move. This requires the user to "calibrate" their camera on startup so that Frigate can measure the camera's movement times. These values are then stored, and a simple linear regression is fit to predict the duration of a pan/tilt move. The intercept and coefficients of the fit are written back to the config file.
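The calibration step described above can be sketched roughly like this (a minimal illustration of fitting and reusing the intercept/coefficient; function names and sample data are assumptions, not Frigate's actual code):

```python
import numpy as np

def fit_move_model(distances, durations):
    """Least-squares fit: duration ~ intercept + coef * distance."""
    coef, intercept = np.polyfit(distances, durations, 1)
    return intercept, coef

def predict_move_time(intercept, coef, distance):
    """Predict how long a pan/tilt move of a given distance will take."""
    return intercept + coef * distance

# Hypothetical calibration samples: normalized pan distance -> seconds.
distances = [0.1, 0.25, 0.5, 0.75, 1.0]
durations = [0.30, 0.48, 0.78, 1.08, 1.38]
intercept, coef = fit_move_model(distances, durations)
```

The stored `intercept` and `coef` are all that is needed later, so the camera does not have to be recalibrated on every move; Frigate can simply plug the commanded distance into `predict_move_time` to know how long to wait.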
I've tested with my setup and it's working, but I'd welcome anyone else with a PTZ to help test and improve the feature!