[FR] Enable YOLOv4 engine #48

Open · bafu opened this issue Jul 12, 2020 · 5 comments

bafu commented Jul 12, 2020

Concept
If we provide a YOLOv4 engine, YOLO users will have a faster and more accurate detection engine available.

Suggested Implementation (if any)

  1. Study inference performance.
  2. Implement the v4 engine and service (a hypothetical engine sketch follows this list).
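
For item 2, a rough sketch of what the engine wrapper could look like. The class and method names below are hypothetical and only illustrate the load-once / infer-per-image shape; they are not the project's actual engine API:

import cv2
import numpy as np
import tensorflow as tf

class YOLOv4TinyTFLiteEngine:
    """Hypothetical engine wrapper: load a converted TFLite model once,
    then run per-image inference. Illustrative only."""

    def __init__(self, model_path, input_size=416):
        self.input_size = input_size
        self.interpreter = tf.lite.Interpreter(model_path=model_path)
        self.interpreter.allocate_tensors()
        self.input_details = self.interpreter.get_input_details()
        self.output_details = self.interpreter.get_output_details()

    def inference(self, bgr_image):
        """bgr_image: HxWx3 uint8 array, e.g. from cv2.imread().
        Returns the raw output tensors of the model."""
        rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
        rgb = cv2.resize(rgb, (self.input_size, self.input_size))
        batch = np.expand_dims(rgb.astype(np.float32) / 255.0, axis=0)
        self.interpreter.set_tensor(self.input_details[0]['index'], batch)
        self.interpreter.invoke()
        return [self.interpreter.get_tensor(d['index'])
                for d in self.output_details]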

bafu commented Jul 12, 2020

TFLite model

yolov4-tiny-416.zip

Steps to create the TFLite model

  1. Convert the raw Darknet weights to TF checkpoints

    $ python3 save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416-tflite --input_size 416 --model yolov4 --tiny --framework tflite
    
  2. Convert checkpoints to TFLite model

    $ python3 convert_tflite.py --weights ./checkpoints/yolov4-tiny-416-tflite --output ./checkpoints/yolov4-tiny-416.tflite
    
  3. Run a test inference using the TFLite model

    $ python3 detect.py --weights ./checkpoints/yolov4-tiny-416.tflite --size 416 --model yolov4 --image ./data/kite.jpg --framework tflite
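
For completeness, loading the converted file directly also works as a quick sanity check. A sketch; the model path comes from step 2, and the expected tensor shapes match the detect.py log further down in this thread:

import tensorflow as tf

# Loading and inspecting the model is enough to confirm the conversion worked.
interpreter = tf.lite.Interpreter(model_path='./checkpoints/yolov4-tiny-416.tflite')
interpreter.allocate_tensors()

# Expect one float32 input of shape (1, 416, 416, 3) and two float32 outputs:
# boxes (1, 2535, 4) and per-class scores (1, 2535, 80).
print(interpreter.get_input_details())
print(interpreter.get_output_details())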
    

Benchmark

detect.py.patch.txt

CPU: Intel i7-10710U (6 cores / 12 threads)

Inference takes 0.11111974716186523 s
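
The patch itself is attached rather than inlined; the timing it reports presumably wraps the interpreter call roughly like this (a sketch, not the attached patch):

import time

def timed_invoke(interpreter):
    """Run one TFLite inference and print wall-clock time in the same
    'Inference takes ... s' format used in this thread."""
    start = time.time()
    interpreter.invoke()
    print('Inference takes {} s'.format(time.time() - start))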

Environment

  • tensorflow-yolov4-tflite: commit dafb42f
  • tensorflow-cpu: 2.2.0


bafu commented Jul 12, 2020

Darknet benchmark

CPU: Intel Pentium Gold G5400

  • v4
    • Standard: 26s
    • Tiny: 2.1s
  • v3
    • Standard: 28s
    • Tiny: 2.1s


bafu commented Jul 12, 2020

Failed to run the TFLite model on RPi 4:

pi@raspberrypi:~/codes/tensorflow-yolov4-tflite $ python3 detect.py --weights ./checkpoints/yolov4-tiny-416.tflite --size 416 --model yolov4 --image ./data/kite.jpg --framework tflite
Traceback (most recent call last):
  File "detect.py", line 90, in <module>
    app.run(main)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.7/dist-packages/absl/app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "detect.py", line 49, in main
    interpreter = tf.lite.Interpreter(model_path=FLAGS.weights)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/interpreter.py", line 77, in __init__
    model_path))
ValueError: Didn't find op for builtin opcode 'RESIZE_BILINEAR' version '3'
Registration failed.

The TF wheel is from PINTO0309:

pi@raspberrypi:~/codes/tensorflow-yolov4-tflite $ python3 -m pip freeze | grep tensorflow
tensorflow==1.14.0
tensorflow-estimator==1.14.0

Solution

Upgrading TF from 1.14.0 to 2.2.0 fixes this issue.
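
A quick check after upgrading, before retrying detect.py (a sketch; the model path comes from the conversion steps above):

import tensorflow as tf

print(tf.__version__)  # should report 2.2.0 after the upgrade

# Constructing the interpreter is enough to hit the opcode registration
# step that failed under 1.14.0 (RESIZE_BILINEAR version 3).
interpreter = tf.lite.Interpreter(model_path='./checkpoints/yolov4-tiny-416.tflite')
interpreter.allocate_tensors()
print('Model loads; the runtime supports the required ops.')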


bafu commented Jul 12, 2020

YOLOv4 Tiny (TFLite) inference time on RPi 4

pi@raspberrypi:~/codes/tensorflow-yolov4-tflite $ python3 detect.py --weights ./checkpoints/yolov4-tiny-416.tflite --size 416 --model yolov4 --image ./data/kite.jpg --framework tflite                   
[{'name': 'input_1', 'index': 0, 'shape': array([  1, 416, 416,   3]), 'shape_signature': array([  1, 416, 416,   3]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
[{'name': 'Identity', 'index': 224, 'shape': array([   1, 2535,    4]), 'shape_signature': array([   1, 2535,    4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'Identity_1', 'index': 206, 'shape': array([   1, 2535,   80]), 'shape_signature': array([   1, 2535,   80]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
Inference takes 0.8501577377319336 s
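
For reference, the two output tensors in the log above (boxes of shape (1, 2535, 4) and class scores of shape (1, 2535, 80)) still need non-max suppression before they become detections. A minimal post-processing sketch; the thresholds are illustrative:

import tensorflow as tf

def postprocess(boxes, scores, score_threshold=0.25, iou_threshold=0.45):
    """boxes: float32 (1, 2535, 4); scores: float32 (1, 2535, 80),
    i.e. the two detect.py outputs shown above."""
    # combined_non_max_suppression expects boxes shaped [batch, num_boxes, q, 4];
    # q = 1 means all classes share the same box coordinates.
    boxes = tf.reshape(boxes, [tf.shape(boxes)[0], -1, 1, 4])
    return tf.image.combined_non_max_suppression(
        boxes=boxes,
        scores=scores,
        max_output_size_per_class=50,
        max_total_size=50,
        iou_threshold=iou_threshold,
        score_threshold=score_threshold)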


bafu commented Mar 7, 2021

YOLOv4 Tiny TFLite model and label files: #49 (comment)
