Upgrade TensorRT to 8.5.3 #7006
Conversation
…tup. Rearrange tensorrt files into a docker support folder.
…cess and minimum HW support.
I have a test image pushed if anyone wants to help verify this works on new (4000 series) and old (900 series) GPUs. This change also streamlines the model generation process: no pre-conversion steps are necessary, just add the model you want to use as an environment variable in your docker-compose file.

Using image `ghcr.io/natemeyer/frigate:trt-8.5-7761a1a-tensorrt`:

```yaml
services:
  frigate:
    image: ghcr.io/natemeyer/frigate:trt-8.5-7761a1a-tensorrt
    environment:
      - USE_FP16=false
      - YOLO_MODELS=yolov7-tiny-416
```

The trt-models folder no longer needs to be mapped as a volume into the container. The model named in the `YOLO_MODELS` variable is generated in `/config/model_cache/tensorrt` at startup. You will also need to add this new path into the model config:

```yaml
model:
  path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
```

Note: I had issues using the latest driver on my 1080ti; the system would crash when running object detection. I rolled back to the 531.68 driver and it is working fine. Not sure if this issue extends to any other cards or setups, or if I'm the lucky one.
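For reference, a minimal sketch of the matching Frigate detector/model config. This is based on Frigate's TensorRT detector documentation rather than this thread; the `device`, `input_tensor`, and `input_pixel_format` values are assumptions from those docs, and the 416x416 dimensions are inferred from the model name:

```yaml
# Sketch only: values assumed from Frigate's TensorRT detector docs,
# not taken from this PR.
detectors:
  tensorrt:
    type: tensorrt
    device: 0  # GPU index (assumption: single-GPU host)

model:
  path: /config/model_cache/tensorrt/yolov7-tiny-416.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416   # must match the 416 in yolov7-tiny-416
  height: 416
```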
Hi Nate. I followed your instructions above and
Yes, all the same models should be available as before. Just set them in the `YOLO_MODELS` variable.
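For example, to have several models generated at once (a sketch: the comma-separated list format follows the Frigate TensorRT docs, and the model names here are only illustrative):

```yaml
environment:
  # Each named model is converted into /config/model_cache/tensorrt at startup
  - YOLO_MODELS=yolov4-tiny-416,yolov7-tiny-416,yolov7x-640
```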
I tried the other yolov7 models
@doctorttt Thanks for the heads up. I think the conversion script was too verbose, so I redirected the stdout to /dev/null. Give it a try with the latest update. Pushing a new image.
@NateMeyer
@doctorttt One last note, you should be able to set the
@NateMeyer Thanks - I flipped
Latest rebase on dev brought in blakeblackshear#7006, which replaced the model generation image with a trt-model-prepare service in the regular frigate-tensorrt image. Follow the same paradigm for Jetsons.
Can confirm this works on an NVIDIA GeForce RTX 2060. One thing that isn't clear: if I want to provide a custom or fine-tuned YOLO model, can I supply it through the env as PT or ONNX and have Frigate do the conversion on startup, or does the conversion need to happen manually outside the container, with the model stored and mapped in like this:
Thanks again for the hard work!
@trixor you'll need to do the conversion yourself and supply the resulting `.trt` file.
@madsciencetist thanks! Great, that's how I understood it after reading the code but wanted to verify. I'll see if I can get that to work and submit a PR if able. |
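For anyone following along, a sketch of what supplying a pre-converted model could look like. This is not from this thread: the host path, file name, and image tag are illustrative assumptions, and `model.path` in the Frigate config would then point at the in-container path:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt  # assumed tag
    volumes:
      # Hypothetical host path to a model converted outside the container
      - /opt/models/custom-yolov7.trt:/config/model_cache/tensorrt/custom-yolov7.trt:ro
```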
This pull request updates the TensorRT libraries to 8.5.3 in order to support the latest GPUs from Nvidia. Should address issue #6666.
This update also performs the model conversion from within the Frigate image, which allows us to integrate it into the startup scripts.
Tasks: