Status: Closed
Labels: stat:awaiting maintainer (Waiting on input from the maintainer)
Description
At least 20% of the questions and issues on GitHub and Stack Overflow are directly related to out-of-memory errors, training performance, inference performance, and similar problems. An official guideline for performance tuning, showing how to get close to the numbers in the Model Zoo, could save a lot of waiting time and hardware resources.
New documents could cover (but not be limited to) these topics:
- Monitoring actual memory usage, and detecting and preventing out-of-memory errors.
- FAQ for train/eval/inference on GPU, with tips on improving GPU utilization.
- Tutorial for real-time video inference.
- Tips for inference on mobile devices with very limited resources.
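As a concrete illustration of the first topic above, here is a minimal sketch of how GPU memory allocation can be tamed with the TF 1.x session configuration that matches the environment reported below (TF 1.7, pip install). This is an assumption about what such a guideline might show, not official guidance:

```python
import tensorflow as tf

# Sketch: two common ways to keep TensorFlow 1.x from reserving
# all GPU memory upfront, which helps diagnose and avoid OOM errors.
config = tf.ConfigProto()

# Option 1: allocate GPU memory on demand as the model needs it.
config.gpu_options.allow_growth = True

# Option 2: hard-cap the fraction of GPU memory this process may use
# (e.g. 0.8 of the 6 GB on a GTX 1060 leaves headroom for the display).
config.gpu_options.per_process_gpu_memory_fraction = 0.8

with tf.Session(config=config) as sess:
    # Run training/eval/inference with this session as usual.
    pass
```

With `allow_growth` enabled, `nvidia-smi` shows memory usage rising only as tensors are actually allocated, which makes it much easier to pinpoint the input size or batch size that triggers an out-of-memory error.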
Overall, my experience with the TF Object Detection API is positive: it is fairly easy to use and quite robust. It could be much more popular if train/eval/inference performance were improved.
Thank you.
System information
- What is the top-level directory of the model you are using:
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
- TensorFlow installed from (source or binary): pip
- TensorFlow version (use command below): v1.7
- Bazel version (if compiling from source): N/A
- CUDA/cuDNN version: 9.0
- GPU model and memory: GTX1060, 6GB
- Exact command to reproduce: