Installation on a virtual machine within Proxmox #1837
Replies: 18 comments 27 replies
-
I agree; I have been running Frigate inside of Proxmox for over a month now with few issues. (I use a Coral device via USB. Make sure you initialize it once so it enumerates with its final USB device ID before passing it through, or pass the entire USB port through!) While my inference speed is more in the 6.1-6.7 ms range, it is not the worst.
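For anyone hitting this: the ID change being referred to is, as far as I know, the Coral USB accelerator re-enumerating. It shows up as 1a6e:089a (Global Unichip) out of the box and as 18d1:9302 (Google Inc.) once it has been initialized. A sketch of the two Proxmox options (the VM ID and port number are illustrative):
# /etc/pve/qemu-server/103.conf -- a sketch; VM ID and port are illustrative
# Option 1: pass by vendor:product ID. This needs the post-init Google ID,
# hence "initialize it once" -- before that the device reports 1a6e:089a.
usb0: host=18d1:9302
# Option 2: pass the physical port instead, so re-enumeration doesn't matter:
# usb0: host=1-1.2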
-
Are you using an M.2 PCIe -> PCIe adapter? If so, would you mind sharing which one? Thanks
-
For the past four days I was playing with Frigate installed in a container in a Debian VM running on top of the latest Proxmox. I didn't have any issues with the two M.2 Corals installed in my small server or their passthrough, but I had big problems with passing through the iGPU for hardware-accelerated decoding of the ffmpeg streams. I also had numerous freezes of the Ethernet port; to mitigate that I had to disable offloading on the Ethernet controller, which added more overhead to my CPU. Finally I went back to Debian on bare metal. It looks like Proxmox is still not good enough for passthrough of different hardware to VM guest operating systems... What had drawn me to Proxmox was running it on top of a ZFS mirror and the possibility of backing up all the VMs to my network storage...
-
I concur. Running an M.2 Coral in a Proxmox VM performs almost as well as on bare metal, and is plenty fast for my 5-6 cameras. The bigger issue with virtualization is the difficulty of getting hardware decode (GPU or QSV) working, especially if your machine only has an iGPU. Then again, I don't find this to be a major bottleneck, seeing that decoding 5 substreams and doing the motion detection takes 10-15% of 1 core on my humble i3 with no hardware acceleration at all. It's likely that being able to leverage QSV would lower this a fair bit, but unless you have dozens of cameras, high-resolution streams, or no substreams, it's not likely a problem.
-
I've had Frigate running really well in an Ubuntu LTS VM on top of Proxmox. I'm passing through both the M.2 PCIe Coral and the iGPU with no issues. Hardware is a 6th-gen i5 HP Mini PC. I used this guide to get the iGPU to pass through:
-
I didn't have a good experience with passing the Coral to a VM in Proxmox: it kept throwing errors in dmesg and kept resetting the USB port. I tried all available ports and option combinations to no avail. I googled around and found it has to do with the speed at which the Coral consumes data over the link, or some such. Nevertheless it sometimes worked, and I was getting inference speeds around 20ms as opposed to 50-70ms on CPU. The issues I faced are perhaps due to the multiple other USB/PCI passthroughs already present and/or the hardware used (Xeon E-2176M NUC). After putting the Frigate container into Docker running in an LXC container on the Proxmox host (takes a bit of work to set up, but works flawlessly afterwards; see the sketch below) I'm getting inference speeds of 8ms with no USB port restarts or dmesg errors.
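For reference, the LXC side of that setup looks roughly like the following; a minimal sketch assuming a privileged container and a USB Coral (the container ID and paths are illustrative):
# /etc/pve/lxc/200.conf -- expose the USB bus (char major 189) to the container
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb dev/bus/usb none bind,optional,create=dir
Docker inside the container can then map the device as usual (e.g. --device /dev/bus/usb).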
-
Thought I'd share my setup, which is working very well, with iGPU and dual TPU passthrough to a Proxmox VM. I'm getting an inference speed of 6.8 ms on both Corals, and CPU usage is consistently 11-20%. Currently only 3x 4K Reolink 820A's, but more to come now that I've got so much more overhead. (as of 2022-10-12)
Host:
VM (Ubuntu Server 22.04.1):
name: nvr
agent: 1
bios: ovmf # (UEFI) This is key
# machine: Default (i440fx) # Q35 wasn't necessary
boot: order=scsi0;ide2;net0 # defaults
net0: virtio=56:AD:0E:42:EE:3C,bridge=vmbr0 # defaults
cpu: host,flags=+aes #Key CPU Settings
sockets: 1
cores: 4
memory: 8192
balloon: 0
vga: std,memory=256
scsihw: virtio-scsi-pci # defaults
scsi0: local-lvm:vm-103-disk-1,size=16G
efidisk0: local-lvm:vm-103-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4 # iGPU partial passthrough
hostpci1: 0000:04:00 # PCIe Coral TPU 1 of 2
hostpci2: 0000:05:00 # PCIe Coral TPU 2 of 2
On Host:
# /etc/default/grub
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
# The following is the key line to change. Note that many guides
# add a lot of unnecessary things to this, and removing them was
# needed to get everything to work for me.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1 iommu=pt initcall_blacklist=sysfb_init"
GRUB_CMDLINE_LINUX=""
# After updating, run command: update-grub
On Host:
# /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
kvmgt
xengt
vfio-mdev
# After updating, run command: update-initramfs -u -k all
On Host:
# /etc/modprobe.d/blacklist-apex.conf
# This prevents the host from loading the drivers for the Corals, allowing for passthrough.
blacklist gasket
blacklist apex
options vfio-pci ids=1ac1:089a
# After updating, run command: update-initramfs -u -k all
Host readouts:
VM Readouts:
After using the Proxmox UI to add three PCIe devices (the iGPU half, and both Corals), and installing the Coral drivers, here are the outputs:
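(The readout dumps themselves aren't reproduced here; the following is my guess at the sort of checks that produce them inside the VM:)
lspci -nn | grep 089a   # both Corals should appear as 1ac1:089a
ls /dev/apex_*          # /dev/apex_0 and /dev/apex_1 once the gasket driver loads
ls -l /dev/dri          # card0 / renderD128 from the GVT-g iGPU slice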
Frigate Settings:
# ~/docker/frigate/docker-compose.yml
version: "3.9"
services:
frigate:
container_name: frigate
privileged: true # this may not be necessary for all setups
restart: unless-stopped
image: blakeblackshear/frigate:stable
shm_size: "384mb" # update for your cameras based on calculation above
devices:
- /dev/dri/card0 # for intel hwaccel, needs to be updated for your hardware
- /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
- /dev/apex_1:/dev/apex_1 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
volumes:
- /etc/localtime:/etc/localtime:ro
- ./config.yml:/config/config.yml:ro
- ./media:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 2000000000
ports:
- "5000:5000"
- "1935:1935" # RTMP feeds
environment:
FRIGATE_RTSP_PASSWORD: "myrtsppassword"
# ~/docker/frigate/config.yml
mqtt:
host: 192.168.1.11
user: evan
password: secretpassword
objects:
track:
- person
- dog
- cat
snapshots:
enabled: True
bounding_box: True
detect:
width: 640
height: 360
fps: 5
detectors:
coral1:
type: edgetpu
device: pci:0
coral2:
type: edgetpu
device: pci:1
record:
enabled: True
retain:
days: 2
mode: active_objects
events:
retain:
default: 3
mode: active_objects
ffmpeg:
hwaccel_args: -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format yuv420p
cameras:
driveway_cam_frigate:
objects:
track:
- person
- car
- dog
- cat
- motorcycle
- bicycle
ffmpeg:
inputs:
- path: rtsp://evan:secretpassword@192.168.1.31:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://evan:secretpassword@192.168.1.31:554/h264Preview_01_sub
roles:
- detect
front_door_cam_frigate:
ffmpeg:
inputs:
- path: rtsp://evan:secretpassword@192.168.1.32:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://evan:secretpassword@192.168.1.32:554/h264Preview_01_sub
roles:
- detect
back_cam_frigate:
ffmpeg:
inputs:
- path: rtsp://evan:secretpassword@192.168.1.33:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://evan:secretpassword@192.168.1.33:554/h264Preview_01_sub
roles:
- detect
rtmp:
enabled: False
live:
height: 1080
quality: 1
-
I run Frigate in Docker in an Ubuntu VM on Proxmox. I use the USB Coral with 2 cameras. The inference speed is a bit higher compared to bare metal, but not significantly. Also, the hardware is an old Dell server.
-
Hi James,
Can you elaborate on how you converted the Proxmox VM into an LXC container? I am using 2 Mini PCIe Corals (on PCI adapters) passed through to the Ubuntu VM. Is there any significant benefit to moving the VM to a container?
Regards,
Jon
On Thu, Feb 9, 2023 at 7:13 AM, James L wrote:
Out of interest, what hardware and versions are you using? I recently moved Frigate from a Proxmox VM into an LXC container, because I was getting 18ms inference times on my USB Coral, which seemed high. Moving to the LXC container has cut it down to 8ms and reduced CPU as well. I believe this is because QEMU has to emulate USB, which isn't ideal for a high-bandwidth device like the Coral. Currently running on a Dell R220 with a Xeon E3-1231v3.
-
Created a topic specifically on Frigate in LXC here: #5448
-
I'm using Proxmox 7.4 with a PCIe Coral TPU. I followed all the Proxmox host instructions, and inside my Ubuntu 22.04 VM I can see the Coral PCIe device via lspci. I did the manual driver install from the official Coral site. I also added the pcie_aspm=off flag to the Proxmox host's GRUB command line. However, I have no /dev/apex*. I'm at a loss.
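For what it's worth, a missing /dev/apex_* usually means the gasket/apex driver never bound to the device. A few checks worth running inside the VM (a sketch; module and device names per the standard Coral driver):
lspci -nn | grep 089a                    # is the Coral visible to the guest at all?
lspci -k -d 1ac1:089a                    # which kernel driver, if any, claimed it
lsmod | grep -e apex -e gasket           # are the modules loaded?
sudo dmesg | grep -i -e apex -e gasket   # probe errors usually land here
If dmesg shows probe errors, rebuilding gasket-dkms against the running kernel (per the Coral get-started page) is a common fix.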
-
I used Frigate in a VM on Proxmox, with the USB accelerator connected via USB 2.0. This is an old server and has no USB 3.0 (I already ordered a dual-TPU M.2 accelerator and a compatible adapter for a PCIe slot, can't wait to see the difference lol). Reasons to go to bare metal:
-
Folks, I'm running in an LXC container with the TPU and iGPU working great. Apalrd's Adventures has a great video here:
<https://www.youtube.com/watch?v=sCkswrK0G3I>
On Wed, Nov 29, 2023 at 9:34 AM, alienatedsec wrote:
So my VM has crashed twice now in 48 hours, almost 24 hours apart. It's almost like it just goes to sleep: no logs, nothing to indicate a problem. I can't tell if the console is still up, because the iGPU passthrough has stopped the Proxmox console from working. I do have these errors constantly in my Proxmox logs, but I'm not sure if they are related to the "sleep", as they just seem to be a side effect of passing through the iGPU.
[image.png: <https://github.com/blakeblackshear/frigate/assets/118318345/b0539f7c-60f5-4d22-86b6-d0a30645a318>]
I have gone bare metal since.
-
I have Ubuntu 22.04 Server LTS set up on VMware ESXi 7.3 running a custom Dell image on mirrored SD cards in a RAID 0, with the Coral USB TPU passed through to the VM. I set up the VM with 8 cores total: 4 processors with 2 cores per chip. It's a PowerEdge R620 with 40 cores and 256 GB of RAM, with a separate RAID controller for the internal micro SD cards, and the main PERC card tied to 4x 2 TB 10K RPM SAS drives for the guest OSes. ESXi's resource requirements and footprint are so small that a RAID 0 internal SD card solution works really well. I went with top-of-the-line Class 10 SD cards, so the worst case is I have to swap out one SD card and I'm done.
I have 2 Amcrest AD410 doorbell sensors, eventually 6 Reolink RLC-520 PoE cameras, and some scattered Wyze Pan Cams with RTSP running. The Wyze cams will eventually be replaced, but I figured I'd try to get as many years out of them as I can before I trash that company for good. Overall I'll have about 5 streams going at a time, recorded to an OpenMediaVault NAS running in a different VM. I only have to give it 16 GB of RAM and it barely uses half that, without any HD paging. Inference with 5 streams is between 25-35ms, and the processor cores usually run at about 25-50% usage. It's been very stable for me with everything going. The PoE cams can easily eat about 1.7 GB an hour, whereas the wireless ones are anywhere between 300-700 MB per hour.
I have it send its triggers to MQTT on my Home Assistant VM, also on the same box, which runs Double Take, CompreFace, and CodeAI.io for facial and license plate recognition. Those are getting ready to be moved out of Home Assistant and onto a Jetson Nano instead. Even still, I get a round-trip time for a known individual or car of about 300ms.
-
What type of hardware are you running it on? Homebrew or enterprise?
-
I'm running Proxmox on an HP EliteDesk 800 G3 Mini with an i7-7700, which has an Intel HD Graphics 630 iGPU. I have 2 VMs running, one of which is Home Assistant with Frigate as an add-on. Recently, Frigate has been eating away at the 8GB of memory and the 4 cores I've given the Hass VM. Frigate is running with only one camera and no crazy config as far as I can tell. If anyone has any advice on how to lower the usage, I would love to hear from you.
Until recently it was my understanding that it was not possible to pass through the GPU without the Proxmox host losing access to it, and I wasn't sure what that would result in. Now I've found that it is indeed possible, but there are a lot of guides, all with different approaches. Following them, I found that I should be able to verify IOMMU (I don't know what it is), but I haven't been able to do that, so I'm considering getting another CPU. Does anyone know what to look for in Intel's CPU overview on their website to ensure that a CPU (with iGPU) is capable of what I want to do? I mean, it's not like they have something called IOMMU.
Edit: According to the Proxmox docs, I have to look for VT-d capable CPUs, which mine already is, so I'm not sure why I can't verify IOMMU (see the check sketch below). Frigate config:
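On the "verify IOMMU" point: on the Proxmox host this usually just means checking the kernel log after enabling intel_iommu=on, per the Proxmox PCI passthrough docs. A sketch:
# In /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
# then run update-grub and reboot. Afterwards:
dmesg | grep -e DMAR -e IOMMU            # expect something like "DMAR: IOMMU enabled"
find /sys/kernel/iommu_groups/ -type l   # non-empty once IOMMU groups exist
Note that VT-d also has to be enabled in the BIOS; a VT-d capable CPU alone isn't enough, which may be why verification fails on an otherwise capable i7-7700.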
-
I've never heard of partial GPU passthrough (I'm a real noob at virtualization, but I couldn't find anything about it online).
-
@AlessandroTischer it's possible to have hardware-accelerated detection with OpenVINO in Proxmox, using the OpenVINO detectors. Have a look here to pass your iGPU (/dev/dri/renderD128) to the LXC; you can skip the Coral part: https://www.homeautomationguy.io/blog/running-frigate-on-proxmox Once done, you must specify the device /dev/dri/renderD128 in your docker-compose to pass it to the Frigate Docker container, and then you need to set up the OpenVINO detector according to the Frigate documentation (a sketch below). Then you have hardware acceleration on detection, which seems almost as efficient as a Coral. As far as I have tested, there's no need for partial passthrough if you don't want to use the GPU in a VM in addition to the LXC.
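For reference, the detector config ends up looking roughly like this; a sketch based on the example in Frigate's OpenVINO documentation (model paths and defaults vary by Frigate version):
# config.yml -- OpenVINO detector running on the iGPU
detectors:
  ov:
    type: openvino
    device: GPU
model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt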
-
Within the installation documentation it states: "Running Frigate in a VM on top of Proxmox, ESXi, Virtualbox, etc. is not recommended. The virtualization layer typically introduces a sizable amount of overhead for communication with Coral devices."
I have an M.2 PCIe Coral device connected to a 4-year-old desktop-class motherboard running Proxmox, with the Coral passed through via PCI passthrough to an Ubuntu LTS VM. Within that VM I have docker-compose utilizing the Coral via Frigate.
My inference speed ranges between 6.1 and 6.27 ms, which lines up with the speed achieved on bare-metal Unraid with Docker referenced in the Recommended Hardware documentation.
The statement in the installation documentation is not incorrect: there may be sizable overhead. However, it is not universally true, based on my setup.