
A better or easy way to convert YOLO v4 tiny and full models to mobile devices (iOS and android) #6800

Open
tobyglei opened this issue Oct 7, 2020 · 13 comments
Labels
Feature-request Any feature-request

Comments

@tobyglei

tobyglei commented Oct 7, 2020

I googled around and looked for some of the best ways to deploy the YOLO models on mobile devices, but I couldn't find a good one.

The recommended, or most popular, way to do that is to convert the Darknet weights to TensorFlow / PyTorch weights, then use coremltools to convert the weights to support iOS. However, CoreML has many unsupported layers and functions, so converting weights between platforms is not as easy as I thought.

I hope we can get pretrained weights running on mobile devices and generate an FPS comparison chart across different mobile chips (e.g., the Apple A series).

I can help out if you find this is an interesting direction for this project.

@tobyglei tobyglei added the Feature-request Any feature-request label Oct 7, 2020
@AlexeyAB
Owner

AlexeyAB commented Oct 7, 2020


One way: install OpenCV on the smartphone and use yolov4.cfg/yolov4.weights directly via OpenCV on iOS/Android: https://opencv.org/releases/

Another way: use the Tencent/NCNN library to run yolov4: https://github.com/Tencent/ncnn

@tobyglei
Author

tobyglei commented Oct 8, 2020

Thanks for the detailed response. I will give it a try.

@fm64hylian

There is also the option of converting to a Keras model and then to mlmodel: https://github.com/Ma-Dan/YOLOv3-CoreML
Or you can use this guide to add the mish layer to convert.py for yolov4 (or replace convert.py with this script, and it will convert directly to mlmodel).

Then you convert your .h5 Keras model with this repository. In short:

convert the weights to Keras:
python convert.py yolov3.cfg yolov3.weights model_data/yolo.h5

then convert Keras to mlmodel in the /Convert folder of this repository:
python coreml.py

OR use the script:
python3 convert.py yolov4.cfg yolov4.weights yolov4.mlmodel

I was able to run my custom yolov3 model on iOS using those repositories. I haven't tested the generated YOLOv4 mlmodel on the Xcode side because I don't own a Mac. One remaining problem: the input and output parameters of the converted mlmodel differ from those of the pretrained mlmodel Apple offers. If someone knows how to modify the parameters when converting to mlmodel or Keras, so that the model exposes Apple's interface (confidence and coordinates outputs, with an iouThreshold input) instead of output1, output2, and output3 (described as "The 13x13 grid (Scale1)", "The 26x26 grid (Scale2)", and so on, which I don't really understand), that would be great.

[screenshot: input/output parameters of Apple's pretrained YOLO mlmodel]

Thank you.

@syedmustafan

> [quotes @fm64hylian's comment above in full]

I am facing the same issue. I want to get the output as confidence and coordinates. Could anyone suggest a way to implement that?

@vak

vak commented Nov 24, 2020

@AlexeyAB thank you for providing yolov4-tiny.weights on the README.md and the reference to tensorflow-yolov4-tflite on how to get YoloV4 on TF-Lite!

Unfortunately converting your yolov4-tiny.weights to TF-Lite using tensorflow-yolov4-tflite doesn't work:

python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416-tflite --input_size 416 --model yolov4 --framework tflite

causes:

  File "/home/dev/tensorflow-yolov4-tflite/core/utils.py", line 63, in load_weights
    conv_weights = conv_weights.reshape(conv_shape).transpose([2, 3, 1, 0])
ValueError: cannot reshape array of size 554878 into shape (256,256,3,3)

Also, @AlexeyAB, you mentioned that you convert to TF-Lite in your own way to avoid garbage in the output. Is this converter open?

Or maybe it is possible to share a yolov4-tiny.tflite exported by your converter?

@AlexeyAB
Owner

@vak

  1. You forgot the --tiny flag.

  2. Try
    python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416-tflite --input_size 416 --model yolov4 --framework tflite --tiny

  3. There is garbage only when using universal converters like PyTorch -> ONNX -> pb -> TFLite. But if you use the native YOLOv4 implementation in TensorFlow, there should be no garbage.

Also @AlexeyAB you mentioned that you convert to the TF-Lite in your own way to avoids the garbage in output. Is this converter open?

  1. This converter is private, and it is for PyTorch -> TFLite only.

There is converter Darknet -> ONNX: https://github.com/linghu8812/tensorrt_inference/tree/master/Yolov4

There is converter OpenVINO -> PB -> TFlite: https://github.com/PINTO0309/openvino2tensorflow

@tobyglei
Author

tobyglei commented Dec 4, 2020

> [quotes @vak's comment above in full]

@vak Any luck on getting it working?

@aparico

aparico commented Dec 25, 2020

@AlexeyAB There's currently a problem in converting Darknet --> TensorFlow model --> TFLite model (reported here and here). Do you have a suggestion for solving this?

@AlexeyAB
Owner

AlexeyAB commented Dec 25, 2020

I think I will create my own Python script to convert any custom yolo-model from Darknet to TF, TFLite, PyTorch, ONNX, ... #7179

@Arfinul

Arfinul commented Jan 8, 2021

I converted a customized (not COCO) yolov3-tiny model into .tflite format, then got stuck on the int8 conversion:

python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite --quantize_mode int8 --dataset ./coco_dataset/coco/val207.txt

./checkpoints/yolov4-416 ---> this is not a COCO model; it is from a customized/different dataset.

  1. Should I still use ./coco_dataset/coco/val207.txt with this command?

  2. If not, how can I convert my dataset from YOLO annotation format to the format of val207.txt?

@Pchivurin

Hey guys!

Any luck with the issue?

@NeilPandya

NeilPandya commented Aug 12, 2021

Hello, all.

I've been successful in saving my .weights file from a custom-trained yolov4-tiny 3-layer model in Darknet as a "saved_model.pb" in the "checkpoints" folder in this repo.

DON'T FORGET THE "--tiny" FLAG WHEN CONVERTING YOLOV4-TINY MODELS, OR YOU WILL RECEIVE AN ERROR RE: INPUT SHAPE.

Because I'm using TF installed via conda install tensorflow-gpu==2.4.1, I get the following error:

2021-08-11 22:36:44.983345: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
loc("batch_normalization/moving_mean"): error: is not immutable, try running tf-saved-model-optimize-global-tensors to prove tensors are immutable
Traceback (most recent call last):
  File "/opt/miniconda3/envs/tfyolo/lib/python3.9/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
    model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
  File "/opt/miniconda3/envs/tfyolo/lib/python3.9/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
    return _pywrap_toco_api.TocoConvert(
Exception: <unknown>:0: error: loc("batch_normalization/moving_mean"): is not immutable, try running tf-saved-model-optimize-global-tensors to prove tensors are immutable


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/neil/Source/tensorflow-yolov4-tflite/convert_tflite.py", line 76, in <module>
    app.run(main)
  File "/opt/miniconda3/envs/tfyolo/lib/python3.9/site-packages/absl/app.py", line 312, in run
    _run_main(main, args)
  File "/opt/miniconda3/envs/tfyolo/lib/python3.9/site-packages/absl/app.py", line 258, in _run_main
    sys.exit(main(argv))
  File "/home/neil/Source/tensorflow-yolov4-tflite/convert_tflite.py", line 71, in main
    save_tflite()
  File "/home/neil/Source/tensorflow-yolov4-tflite/convert_tflite.py", line 45, in save_tflite
    tflite_model = converter.convert()
  File "/opt/miniconda3/envs/tfyolo/lib/python3.9/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
    result = _convert_saved_model(**converter_kwargs)
  File "/opt/miniconda3/envs/tfyolo/lib/python3.9/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
    data = toco_convert_protos(
  File "/opt/miniconda3/envs/tfyolo/lib/python3.9/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
    raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc("batch_normalization/moving_mean"): is not immutable, try running tf-saved-model-optimize-global-tensors to prove tensors are immutable

I'm inclined to think that I wouldn't receive this "tensor immutability" exception if I were running tensorflow-gpu==2.3.0rc, as "requirements-gpu.txt" suggests in hunglc007's method.

I'm trying to find 2.3.0 and it's proving fruitless. I'm so close, yet so far. If someone can point me to a 2.3.0 repo, I will compile it and make the package publicly available.
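For reference, the TFLite conversion step itself is only a few lines around tf.lite.TFLiteConverter, which is what convert_tflite.py wraps. The sketch below builds a toy tf.Module in place of the real exported SavedModel so it is self-contained; in practice you would point from_saved_model at the folder written by save_model.py (the directory name here is illustrative):

```python
import tempfile
import tensorflow as tf

# Toy stand-in for the exported YOLO SavedModel; in practice use
#   converter = tf.lite.TFLiteConverter.from_saved_model("checkpoints/yolov4-tiny-416")
class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([3, 2]))

    @tf.function(input_signature=[tf.TensorSpec([1, 3], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

saved_dir = tempfile.mkdtemp()
tf.saved_model.save(TinyModel(), saved_dir)

converter = tf.lite.TFLiteConverter.from_saved_model(saved_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_bytes = converter.convert()  # serialized FlatBuffer, ready to write to disk
```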

@Anas-Alshaghouri

Another way to install OpenCV on Smartphone and use yolov4.cfg/yolov4.weights directly by using OpenCV on iOS/Android: https://opencv.org/releases/

* C++ https://github.com/opencv/opencv/blob/master/samples/dnn/object_detection.cpp

* Python https://github.com/opencv/opencv/blob/master/samples/dnn/object_detection.py

* Description: https://docs.opencv.org/master/da/d9d/tutorial_dnn_yolo.html

Another way to use Tencent/NCNN library to run yolov4: https://github.com/Tencent/ncnn

Thank you for all your support. I am having a problem when running convert_tflite.py. Some people say the TensorFlow version is the problem (I have version 2.6) and that downgrading to version 2.3 can solve it. Is there any way around downgrading?


10 participants