Labels
models:research:odapi, stat:awaiting model gardener, type:support
Description
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below): 1.4 with GPU
- Bazel version (if compiling from source): latest
- CUDA/cuDNN version: CUDA 9 / cuDNN 7
- GPU model and memory:
Laptop: GeForce GTX 1050 4GB
Jetson TX2: Tegra 8GB
- Exact command to reproduce:
Clone my repo https://github.com/GustavZ/realtime_object_detection
and run object_detection.py
Describe the problem
I am using SSD MobileNet for real-time inference with a webcam as input via OpenCV, and I get the following performance:
Laptop: ~25 fps at ~40% GPU and ~25% CPU usage
Jetson: ~5 fps at ~5-10% GPU and 10-40% CPU usage
Any hints on why the Object Detection API is so slow at inference?
Training may be easy and fast, but inference, i.e. actually using the models for real-time object detection, is very slow and does not fully utilize the GPU.
(For comparison, YOLO with darknet runs at 90-100% GPU usage with 3x higher fps.)
Here is a screenshot of what nvidia-smi and top report while inferencing on the laptop:
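For context on how fps figures like the ones above can be measured, here is a minimal rolling-window counter that can be ticked once per frame inside the OpenCV capture loop. This is a sketch of my own, not code from the repo; the class name, window size, and `tick` interface are assumptions for illustration.

```python
import time
from collections import deque

class FPSMeter:
    """Rolling FPS estimate over the last `window` frame timestamps."""

    def __init__(self, window=30):
        # Bounded deque: old timestamps fall off automatically.
        self.timestamps = deque(maxlen=window)

    def tick(self, now=None):
        """Record one processed frame; return the current FPS estimate.

        Returns 0.0 until at least two frames have been recorded.
        An explicit `now` can be passed for testing; otherwise a
        monotonic clock is used.
        """
        self.timestamps.append(time.monotonic() if now is None else now)
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        # N timestamps span N-1 frame intervals.
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0
```

Usage in a capture loop would be `fps = meter.tick()` right after each `sess.run(...)` on a frame; averaging over a window smooths out per-frame jitter from the camera and the feed-dict overhead.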
