What is your inference speed? Share your experiences #7601
-
Config 1:
Config 2:
Config 3:
-
Frigate version: 0.12 stable. System is running 20 containers and OMV6.
-
Frigate version: 0.12.1 Stable
-
Frigate version: 0.12.1 Stable. I'm concerned by the inference speed.
-
Frigate version: 0.13.0-9185753
-
Frigate Version: 0.12.1
-
Just a test for fun with a Titan RTX build. Frigate version: 0.13.0-5658E5A. Inference time is the same with a P4 as with the Titan RTX, so there is a bottleneck, maybe on the CPU side?
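If two very different GPUs report identical inference times, a quick sanity check is whether either card is anywhere near saturated while detection runs. A minimal sketch in Python (assuming `nvidia-smi` is on the PATH; the sample count and interval are arbitrary):

```python
# Poll GPU utilization while Frigate is detecting, to see whether the
# card (P4 or Titan RTX) is actually the bottleneck.
import subprocess
import time

def gpu_utilization() -> list[str]:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,utilization.gpu,utilization.memory",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip().splitlines()

for _ in range(10):
    for line in gpu_utilization():
        print(line)  # e.g. "Tesla P4, 12 %, 3 %"
    time.sleep(1)
```

If utilization stays low on both cards, the limit is likely CPU-side pre/post-processing rather than the detector itself.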
-
Frigate version: 0.12.1-367D724. The inference speed is a bit high considering the speeds I see in previous comments. Maybe the M.2 A+E slot is not fast enough? /edit: I see M.2 A+E is PCIe (x2) + USB. The device is definitely using PCIe since it does not show up in lsusb.
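For anyone else confirming which interface their Coral is actually on, here is a small sketch (assumptions: the PCIe gasket/apex driver exposes `/dev/apex_*`, and the USB Coral usually enumerates with the IDs listed below — verify against your own system):

```python
# Check whether a Coral is attached over PCIe or USB.
import glob
import subprocess

# PCIe Corals appear as /dev/apex_* once the gasket/apex driver is loaded.
pcie_corals = glob.glob("/dev/apex_*")
print("PCIe Coral device nodes:", pcie_corals or "none")

# USB Corals are commonly listed as 1a6e:089a (before the runtime loads
# firmware) or 18d1:9302 (after) -- assumed IDs, check your lsusb output.
lsusb = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
usb_ids = ("1a6e:089a", "18d1:9302")
usb_corals = [l for l in lsusb.splitlines() if any(i in l for i in usb_ids)]
print("USB Coral entries:", usb_corals or "none")
```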
-
Just discovered that inference times, at least with the tensorrt detector, are tightly tied to CPU speed. For example, I get 4-5 ms while my CPU is at 800-1200 MHz (powersave governor) and 1-1.6 ms with the CPU in performance mode at 4 GHz, so the inference stat is apparently not isolated to detector hardware performance.
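Given how much the governor moves the numbers, it's worth recording the CPU state alongside any inference figure you post. A minimal sketch, assuming the standard Linux cpufreq sysfs layout:

```python
# Print the active governor and current frequency per core, so inference
# numbers can be compared under a known CPU state.
from pathlib import Path

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    gov = cpu / "cpufreq" / "scaling_governor"
    freq = cpu / "cpufreq" / "scaling_cur_freq"
    if gov.exists():
        mhz = int(freq.read_text()) / 1000 if freq.exists() else float("nan")
        print(f"{cpu.name}: {gov.read_text().strip()} @ {mhz:.0f} MHz")
```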
-
Hmm, interesting. I removed one camera and added two more 3K cameras, and now it's up at 10 ms from 6.2 ms, even though detection is 640x480, so small. Do you think it's better to use variable or fixed rates on the camera feeds? Frigate version: 0.12.1 Stable
-
Frigate version: 0.13.0-AA93D4F. Something has gone sour in my setup since a few days ago, as you can see. I updated to 0.13.0 beta 5 this week, but I'm not sure if that could have anything to do with the poor inference speed.
-
For my setup, I recently swapped from an Ubuntu 23.04 minimized server to a Proxmox LXC setup. Intel 11600.
LXC setup:
Ubuntu VM:
-
Frigate version: 0.12.1-367D724
-
Frigate version: 0.12.1. Inference speed seems slower than I would expect; the NPU sits at 7% usage.
-
Frigate version: 0.12.1-367D724
Frigate version: 0.13.0-614A36A
-
Frigate version: 0.13.0-AA93D4F (0.13 beta 5)
OpenMediaVault 6.1.0-0.deb11.13-amd64
4x Dahua 5 series, 2x Wyze v3 (trash)
2x Coral USB
-
Btw, for comparison, wouldn't it be better to use a benchmark set? A couple of short clips with a config of, say, 6 cams and defaults, with results posted in comments for different hardware?
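Until there's a shared benchmark set, sampling the stats endpoint over a fixed window at least makes the reported number consistent. A rough sketch in Python — the host/port are placeholders, and the `/api/stats` response shape (a `detectors` map with an `inference_speed` field) is what 0.12/0.13 report, but check your version:

```python
# Sample Frigate's stats API and average each detector's reported
# inference speed, so figures shared in this thread are comparable.
import json
import time
import urllib.request
from collections import defaultdict

FRIGATE = "http://127.0.0.1:5000"  # placeholder, adjust to your instance
samples = defaultdict(list)

for _ in range(30):  # roughly 30 seconds of samples
    with urllib.request.urlopen(f"{FRIGATE}/api/stats") as resp:
        stats = json.load(resp)
    for name, det in stats.get("detectors", {}).items():
        samples[name].append(det["inference_speed"])
    time.sleep(1)

for name, vals in samples.items():
    print(f"{name}: avg {sum(vals)/len(vals):.2f} ms over {len(vals)} samples")
```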
-
I recently tried my P2000 (yolov7-320) instead of my Coral USB. The GPU doesn't do much outside of Frigate. One week in. Lucky: I only just noticed I have 6 cameras, but only 5 ffmpeg streams there... fixed!
-
Now that we have several hardware choices to run our detectors, I think it could be useful to share our experiences, so we have information about how well or badly a model behaves on specific hardware, whether to tune our configs or to upgrade our setups.
The idea came up because I am thinking of upgrading my setup with an NVIDIA GPU to (maybe?) get better inference speed, but I don't know if TensorFlow will behave any better than my current setup.
I will start sharing my experiences with Frigate.
Setup1:
Setup2:
Setup3:
Setup4:
Setup5:
Setup6: