Any plans to add support for MariaDB/MySQL or PostgreSQL? Or how to improve SQLite? #9496
Comments
There have been improvements in 0.13 and there are more that can be done. The last that was said on this is that there are no plans to support other DB options: #3644 (comment)
As far as debugging this goes, we would want info on DB query times from the Chrome network debugger.
I don't think the Chrome network debugger can help either... It works OK most of the time... I can see that it takes longer when no events have been requested for a few hours. But suddenly the memory goes haywire for no apparent reason, the entire LXC becomes inaccessible, and I have no choice but to restart the LXC. After restarting the LXC it can take 10-15 min for Frigate to start. I don't know if it's VACUUMing the DB or repairing it, but it does take too long to start (as seen in the screenshot). Any ideas?
It will help give more information to us as the developers to understand what is going on. Frigate runs VACUUM on startup once every two weeks.
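(Editor's note: for anyone who wants to trigger a vacuum manually rather than waiting for the two-week cycle, the standard sqlite3 CLI can do it. A minimal sketch; the database path is an assumption, adjust it to wherever your install keeps frigate.db, and run it while Frigate is stopped since VACUUM takes an exclusive lock:)

```shell
# Compact the Frigate database while Frigate is stopped.
# /config/frigate.db is an assumed path; check your own install.
sqlite3 /config/frigate.db "VACUUM;"
```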
I've added a cron job that sends a curl request to http://frigate_ip:5000/events every minute to keep the SQLite DB fresh... This has helped a lot... I used to have to restart the LXC almost daily (sometimes twice a day), but now it can take 3-4 days before the server starts leaking memory again... (Chrome network debugger below.) If there is a particular way to share the logs below (and not as a screenshot), please let me know. The response time of each request to /events is very normal... I may have to disable the cron job in order to get longer responses... I've been trying to find a way to debug this... I've left htop running, but by the time the server runs out of memory, htop does not really show anything and the server is already unresponsive...
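(Editor's note: a crontab entry along the lines described above might look like this; frigate_ip stands in for the actual host, as in the comment:)

```shell
# Hit the /events endpoint once a minute to keep queries warm.
# frigate_ip is a placeholder for the real host/IP.
* * * * * curl -s -o /dev/null http://frigate_ip:5000/events
```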
Yeah, we would want to see this view when the times are slow.
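(Editor's note: one way to capture those timings as plain text instead of screenshots is curl's built-in timing output. A small sketch using the same endpoint; the log path is an arbitrary example:)

```shell
# Log the total response time of /events with a timestamp.
# Useful for spotting when responses start to slow down.
echo "$(date -Is) $(curl -s -o /dev/null -w '%{time_total}' http://frigate_ip:5000/events)" >> /tmp/frigate_events_times.log
```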
Something similar I've noticed when comparing this issue to how this server is behaving is that the memory gradually increases as time goes by... This setup is on Proxmox, and the storage location is a Proxmox mount point (mp) of 4 disks using LVM. The OP mentioned that once he switched to FUSE, the memory leak stopped happening and the server became stable. The docker file looks like this:
Wondering if there is a preferred method for Frigate to efficiently mount a disk into an LXC?
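(Editor's note: not an answer from the thread, but for context, a bind mount point added via the Proxmox pct tool is one common way to pass host storage into an LXC. A sketch, assuming container ID 100 and example paths:)

```shell
# Bind-mount host storage into the LXC as mount point mp0.
# 100, /mnt/pve/frigate-storage, and /media/frigate are example values.
pct set 100 -mp0 /mnt/pve/frigate-storage,mp=/media/frigate
```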
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Describe the problem you are having
There are 77 cameras in this setup from different manufacturers. Each camera has a stream for recording and another stream for detecting. Everything works well, but if the server does not receive a request to http://frigate_server:5000/events frequently (one every minute), the server becomes really unstable and crashes when there are queries to access events and so on... As the server collects new events, it takes longer to show the results, then the requests start to pile up, the memory peaks at 100%, and the CPU usage also goes to 100%.
The server has plenty of resources and is currently set to use 20 cores and 128 GB of RAM.
So I'm wondering if there is any way to improve SQLite, and if there are any plans to add support for other databases?
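(Editor's note: as a general aside, not Frigate-specific guidance, these are the standard SQLite knobs people usually try before moving to a client/server database; whether Frigate already sets them is not stated in this thread, and the database path is an assumption:)

```shell
# Experiment with common SQLite performance settings.
sqlite3 /config/frigate.db <<'SQL'
PRAGMA journal_mode=WAL;     -- readers no longer block on the writer
PRAGMA synchronous=NORMAL;   -- fewer fsyncs; safe when combined with WAL
PRAGMA cache_size=-65536;    -- ~64 MB page cache (negative value = KiB)
SQL
```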
Version
0.13.0-49814B3
Frigate config file
Relevant log output
FFprobe output from your camera
Frigate stats
Operating system
HassOS
Install method
HassOS Addon
Coral version
USB
Network connection
Wired
Camera make and model
Axis, Hikvision
Any other information that may be helpful
I cut the config file short to make it simpler