A better or easier way to convert YOLOv4 tiny and full models to mobile devices (iOS and Android) #6800
Another option is to install OpenCV on the smartphone and use it to run the model.

Another way is to use the Tencent/NCNN library to run YOLOv4: https://github.com/Tencent/ncnn

Thanks for the detailed response. I will give it a try.
There is also the option of converting to a Keras model and then to an mlmodel: https://github.com/Ma-Dan/YOLOv3-CoreML. You convert your .h5 Keras model with this repository; in short, convert the weights to Keras, then convert Keras to mlmodel in the /Convert folder of that repository, or use the script. I was able to run my custom YOLOv3 model on iOS using those repositories. I haven't tested the generated YOLOv4 mlmodel on the Xcode side because I don't own a macOS machine. One problem: the input and output parameters of the converted model are different from those of the pretrained mlmodel Apple offers. So if someone knows how to modify the parameters when converting to mlmodel or Keras, so that the outputs are confidence and iouThreshold instead of output1, output2, and output3 (it says "The 13x13 grid (Scale1)" and "The 26x26 grid (Scale2)", but I don't really know what that means), that would be great. Thank you.
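For anyone puzzled by those Scale1/Scale2 outputs: each YOLO head emits a grid of raw predictions (13x13 and 26x26 cells for a 416-pixel input), and every cell predicts a few anchor boxes as offsets plus an objectness score and class scores. Below is a minimal NumPy sketch of how such a grid can be decoded into pixel-space boxes and confidences; the anchor sizes and class count here are placeholders, not the real YOLOv4 values.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_grid(raw, anchors, num_classes, img_size=416):
    """Decode one YOLO head output (e.g. the 13x13 "Scale1" grid).

    raw: array of shape (grid, grid, num_anchors, 5 + num_classes)
         holding (tx, ty, tw, th, objectness, class scores) per anchor.
    anchors: list of (width, height) pairs in pixels for this scale.
    Returns (boxes, scores): boxes as (x, y, w, h) centre/size in pixels,
    scores = objectness * best class probability.
    """
    grid = raw.shape[0]
    stride = img_size / grid                    # pixels per grid cell
    boxes, scores = [], []
    for gy in range(grid):
        for gx in range(grid):
            for a, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, obj = raw[gy, gx, a, :5]
                cls = raw[gy, gx, a, 5:]
                x = (sigmoid(tx) + gx) * stride  # box centre in pixels
                y = (sigmoid(ty) + gy) * stride
                w = aw * np.exp(tw)              # size relative to anchor
                h = ah * np.exp(th)
                score = sigmoid(obj) * sigmoid(cls).max()
                boxes.append((x, y, w, h))
                scores.append(score)
    return np.array(boxes), np.array(scores)
```

So "output1 = The 13x13 grid (Scale1)" just means the model hands you this raw tensor and leaves the decoding (and the later confidence/iouThreshold filtering) to post-processing code.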
I am facing the same issue. I want to get the output as confidence and coordinates. Could anyone kindly suggest a method to implement that?
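The confidence-and-coordinates outputs (and the iouThreshold parameter) of Apple's pretrained models correspond to score filtering plus non-max suppression applied to the raw grid predictions, and you can run that step yourself after decoding. A minimal pure-NumPy sketch, with illustrative thresholds:

```python
import numpy as np

def nms(boxes, scores, score_thresh=0.25, iou_thresh=0.45):
    """Greedy non-max suppression on (x1, y1, x2, y2) boxes.
    Returns indices of the boxes kept, highest score first."""
    idx = np.where(scores >= score_thresh)[0]
    order = idx[np.argsort(-scores[idx])]        # best score first
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with all remaining candidates
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]          # drop heavy overlaps
    return kept
```

The surviving indices give you the final coordinates and confidences; on-device you would run the same logic in Swift/Kotlin, or let CoreML's built-in NMS model layer do it.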
@AlexeyAB thank you for providing yolov4-tiny.weights in the README.md and the reference to
Unfortunately, converting your
causes:
Also @AlexeyAB, you mentioned that you convert to TF-Lite in your own way to avoid garbage in the output. Is this converter open? Or maybe it is possible to share
There is a converter

@vak Any luck on getting it working?
@AlexeyAB There's currently a problem in converting Darknet --> TensorFlow model --> TFLite model (reported here and here). Do you have a suggestion for solving this?
I think I will create my own Python script to convert any custom YOLO model from Darknet to TF, TFLite, PyTorch, ONNX, ... #7179
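Whatever the target framework, the first step of any such converter is reading darknet's .weights binary layout: a short integer version header followed by one flat float32 array of all layer parameters. A minimal sketch under that assumption (the layer-by-layer slicing, which depends on the matching .cfg, is omitted):

```python
import struct
import numpy as np

def read_darknet_weights(path):
    """Read a darknet .weights file: version header, then one flat
    float32 array of parameters in layer order (sketch only)."""
    with open(path, "rb") as f:
        major, minor, revision = struct.unpack("3i", f.read(12))
        # The 'seen' counter is 64-bit from format version 2 onwards
        if major * 10 + minor >= 2:
            (seen,) = struct.unpack("q", f.read(8))
        else:
            (seen,) = struct.unpack("i", f.read(4))
        weights = np.fromfile(f, dtype=np.float32)
    return (major, minor, revision, seen), weights
```

From there, a converter walks the .cfg layer list and slices biases, batch-norm statistics, and convolution kernels out of the flat array in darknet's storage order.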
Converted a customized (not COCO) yolov3-tiny into .tflite format, then got stuck on the int8 conversion:

```
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-int8.tflite --quantize_mode int8 --dataset ./coco_dataset/coco/val207.txt
```

Note that ./checkpoints/yolov4-416 is not a COCO model; it comes from a customized/different dataset.
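For context on what `--quantize_mode int8` does: full-integer conversion calibrates a float range per tensor from the representative `--dataset` images and maps it affinely onto int8, which is one reason the calibration images should match your custom model's own data rather than COCO. A simplified sketch of that per-tensor mapping:

```python
import numpy as np

def int8_quantize(x, rmin, rmax):
    """Affine int8 quantization: map the calibrated float range
    [rmin, rmax] onto [-128, 127] (simplified per-tensor sketch)."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)   # range must contain 0
    scale = (rmax - rmin) / 255.0
    zero_point = int(round(-128 - rmin / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def int8_dequantize(q, scale, zero_point):
    """Recover approximate floats; error is at most one quantization step."""
    return (q.astype(np.float32) - zero_point) * scale
```

If the calibration range comes from the wrong dataset, activations outside [rmin, rmax] get clipped, which is a common source of garbage detections after int8 conversion.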
Hey guys! Any luck with the issue? |
Hello, all. I've been successful in saving my .weights file from a custom-trained tiny-yolov4-3-layer model in darknet as a saved_model.pb in the checkpoints folder of this repo. Don't forget the --tiny flag when converting YOLOv4-tiny models, or you will receive an error about the input shape. Because I'm using TensorFlow installed via conda install tensorflow-gpu==2.4.1, I get the following error:

```
2021-08-11 22:36:44.983345: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
During handling of the above exception, another exception occurred:
```

I'm inclined to think that I wouldn't get this "tensor immutability" exception if I were running tensorflow-gpu==2.3.0rc, as the requirements-gpu.txt in hunglc007's method suggests. I'm trying to find 2.3.0 and it's proving fruitless. I'm so close, yet so far. If someone can point me to a 2.3.0 repo, I will compile it and make the package publicly available.
Thank you for all your support. I am having a problem when running the convert_tflite.py code; some people are saying that the TensorFlow version is the problem (I have version 2.6) and that downgrading to version 2.3 can solve it. Is there any way around downgrading?
I googled around and looked for some of the best ways to deploy the YOLO models on mobile devices, but I couldn't find a good one.

The recommended and most popular approach is to convert the darknet weights to TensorFlow / PyTorch weights, then use coremltools to convert them for iOS. However, there are many unsupported layers and functions in CoreML, so converting weights between platforms is not as easy as I thought.

I hope we can get the pretrained weights running on mobile devices and generate an FPS comparison chart across different mobile chips (e.g., the Apple A series).

I can help out if you find this an interesting direction for the project.
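For the proposed FPS chart, the measurement itself can be as simple as timing repeated inference calls after a short warm-up. A minimal sketch, where `infer` is a placeholder for a real on-device predict call (Core ML, TFLite, NCNN, etc.):

```python
import time

def measure_fps(infer, warmup=5, runs=50):
    """Time repeated calls to a zero-argument inference function and
    return frames per second over the timed runs."""
    for _ in range(warmup):              # let caches/JITs/ANE settle
        infer()
    start = time.perf_counter()
    for _ in range(runs):
        infer()
    elapsed = time.perf_counter() - start
    return runs / elapsed
```

Running the same harness per chip (A12, A14, Snapdragon, ...) with the same input resolution would give directly comparable numbers for the chart.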