
Request new documents on performance fine tuning #4495

@dennywangtenk

Description

At least 20% of the questions and issues on GitHub and Stack Overflow are directly related to out-of-memory errors, training performance, or inference performance. A lot of waiting time and hardware resources could be saved if the team published an official performance-tuning guideline that helps users get close to the numbers reported in the Model Zoo.

The new documents could cover (but not be limited to) these topics:

  1. Monitoring actual memory usage, and detecting and preventing out-of-memory errors (one possible approach is sketched after this list).
  2. An FAQ for train/eval/inference on GPU, with tips on improving GPU utilization.
  3. A tutorial for real-time video inference.
  4. Tips for inference on mobile devices (with very limited resources).
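As a concrete illustration of what the memory-usage topic could demonstrate, here is a minimal sketch using the TF 1.x API matching the v1.7 setup below: let TensorFlow grow GPU memory on demand instead of pre-allocating the whole card, and read back the peak bytes actually used. This is only an assumption of what such a guide might show, not official guidance; in particular, the memory-stats op lives in tf.contrib and its availability may vary by build.

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of grabbing the whole card up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# Alternative: hard-cap the share of the 6 GB card this process may use.
# config.gpu_options.per_process_gpu_memory_fraction = 0.8

with tf.Session(config=config) as sess:
    # Stand-in for building or loading the actual detection graph.
    a = tf.random_normal([1024, 1024])
    b = tf.matmul(a, a)

    # Peak GPU memory (in bytes) allocated so far on the default device;
    # this op is in contrib in TF 1.x, so it depends on the build.
    peak_bytes_op = tf.contrib.memory_stats.MaxBytesInUse()

    _, peak = sess.run([b, peak_bytes_op])
    print('Peak GPU memory: %.1f MB' % (peak / 1e6))
```

Using per_process_gpu_memory_fraction instead of allow_growth is another option when several processes have to share the same GPU.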

Overall, my experience with the TF Object Detection API has been positive: it is fairly easy to use and pretty robust. It could become much more popular if train/eval/inference performance were improved.

Thank you.


System information

  • What is the top-level directory of the model you are using:
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • TensorFlow installed from (source or binary): pip
  • TensorFlow version (use command below): v1.7
  • Bazel version (if compiling from source): N/A
  • CUDA/cuDNN version: 9.0
  • GPU model and memory: GTX1060, 6GB
  • Exact command to reproduce:
