Using YOLOv5 with Neural Compute Stick2 #552
Comments
@hghari I'm not qualified to answer this as I have no experience with the cited hardware, but I'll leave this open for community support! Good luck.
@hghari I'm working on this direction as well, though I don't yet have a solution. Let's stay in touch throughout the process :D Did you start by converting ONNX to OpenVINO?
@glenn-jocher thanks
@hghari, very nice, @jimsu2012 and I did a similar conversion. We just received the NCS in the mail today, so we will be trying to deploy in the next few days. We will keep you posted of any success there!
@Jacobsolawetz Looking forward to hearing from you.
@Jacobsolawetz Hi, I gave up on using the YOLOv5 model because of inconsistencies between CPU and NCS2 results. Please inform me if you have any success. Thanks.
@hghari makes sense - none yet. Will post here if I find some success.
I am working on this issue as well. There are two problems:
@hghari Hi, which model did you use to be able to convert into ONNX and eventually into the IR of OpenVINO? I'm using OpenVINO 2020.1 and PyTorch 1.5, and it seems I'm stuck on converting the ONNX model of yolov5s (where I edited the export script to use opset 10) to OpenVINO.
I used the model provided on GitHub.
The same struggle here, please post any progress you might have!
Using the latest OpenVINO, I managed to convert to IR, although with weird behavior as mentioned in this response.
I decided not to use YOLOv5 and went for v4 instead, but I think you will have to play with the export script to make it functional.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Didn't try, but this article seems to deal with the same problem:
@hghari Hi, how to convert yolov5 to OpenVINO? Could you share the method? Thanks.
I may be late for the party, but I managed to run a yolov5 network on the NCS2.
The generated IR should run on the NCS2 and return the same output as CPU inference.
I can't get the correct result using the method you described. When using mo.py to convert to an IR model: without "-s 255", the resulting model detects correctly on the CPU but not on the NCS2; after adding "-s 255", it detects correctly on neither the CPU nor the NCS2.
The flag -s 255 sets the expected scale of the input image. I guess you perform a normalization of the image to the range 0-1 before inference (something like img /= 255). Make sure your input has range 0-255 by excluding this normalization when using a model converted with -s 255. Without -s 255, use range 0-1 instead.
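To illustrate the two input conventions described above, here is a minimal NumPy sketch. The `preprocess` helper and its flag are hypothetical names for illustration, not part of YOLOv5 or OpenVINO:

```python
import numpy as np

def preprocess(img_u8, converted_with_s255):
    """Match the input range to how the IR was generated (hypothetical helper).

    img_u8: HxWxC uint8 image with values in 0-255.
    """
    blob = img_u8.astype(np.float32)
    if not converted_with_s255:
        # Model expects 0-1 input: normalize here, as YOLOv5's own
        # inference code does before calling the network.
        blob /= 255.0
    # else: mo.py -s 255 bakes the division into the IR itself,
    # so feed raw 0-255 values and let the model do the scaling.
    return blob

img = np.full((2, 2, 3), 128, dtype=np.uint8)
print(preprocess(img, True).max())   # raw 0-255 range for -s 255 models
print(preprocess(img, False).max())  # normalized 0-1 range otherwise
```

Applying the normalization twice (both in Python and via -s 255) would shrink the input by a factor of 255 squared, which matches the broken detections reported above.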
@hghari
@violet17 @Jacobsolawetz @yurikleb @hghari @usamahjundia good news 😃! Your original issue may now be fixed ✅ in PR #6057. This PR adds native YOLOv5 OpenVINO export:
python export.py --weights yolov5s.pt --include openvino  # export to OpenVINO
To receive this update:
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
I am trying to get my own YOLOv5l model running on the Raspberry Pi (B, v1.2) with the NCS2, but I get extremely bad results. Is it normal that the NCS2 performs so much worse than a CPU? Compared to the output from CPU inference, the decimal places of the NCS2 results are very inaccurate. Does this have something to do with the FP16 conversion? Can anyone give me tips for the YOLOv5 inference workflow in Python on the NCS2? I already exported the model as FP16 and followed the structure of detect.py, but the results are so bad...
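On the accuracy question above: the NCS2 (MYRIAD) executes in FP16, while CPU inference typically runs in FP32, so some divergence in the low decimal places is expected even for a correctly converted model. A tiny NumPy sketch of the rounding effect (illustrative only, not NCS2 code):

```python
import numpy as np

x = np.float32(0.1234567)   # a typical FP32 activation value
x16 = np.float16(x)         # what an FP16 device actually computes with

# FP16 has a 10-bit mantissa (~3 significant decimal digits), so each
# value is rounded slightly; across a deep network like YOLOv5 these
# per-layer errors accumulate and the final boxes/scores drift.
print(float(x), float(x16), abs(float(x16) - float(x)))
```

Small FP16 drift alone should not make detections unusable, though; results that are wildly wrong usually point to a preprocessing mismatch (see the -s 255 discussion earlier in this thread) rather than precision.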
@ca-schue Hi, I'm trying to run a YOLO model on a Raspberry Pi with the NCS2 as well. Did you manage to do it? If so, do you mind sharing your code to carry out inferencing?
Yes, I have done it. This blog records the details and code. But if conditions permit, I strongly recommend against the Raspberry Pi plus NCS2 solution. The speed is really slow even with the NCS2 (about 2 fps). Maybe the Jetson Nano is a better solution (about 15 fps without any acceleration).
@Rainbowman0 Hi, thanks for the reply. Do you have an English version of this document? I can't fully access this website and I do not speak Chinese. If possible, can you provide your email or contact details, as I have a few questions that I would like to ask?
@Rainbowman0 when I try to convert from ONNX to IR I get the following error, do you know how to solve it?
C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer>python mo.py --input_model=yolov5s.onnx --model_name yolov5OV --scale=255 --data_type=FP16
[ ERROR ] ---------------- END OF BUG REPORT --------------
Can you just tell us how to fix it?
The command is correct. The reason for my poor results was that I forgot the -s 255 parameter to normalize the color space. It is important that Python (or Anaconda) is run as administrator/root.
I think @violet17 is talking about non-max suppression. The NMS code from yolov5/general.py should work. Strangely, the inference behaves differently for images over about 1000 px. For example, if inference is run five times in a row on P6 models like yolov5s6 at 1280 px with the same image, only the result of every second inference is correct. I think there is an overflow or memory leak somewhere in OpenVINO.
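For readers new to the NMS step mentioned above: yolov5's non_max_suppression additionally handles confidence thresholds, classes, and box-format conversion, but its core is greedy IoU suppression, which can be sketched in NumPy like this (a simplified stand-in, not the actual YOLOv5 function):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS over [x1, y1, x2, y2] boxes; returns kept indices."""
    order = scores.argsort()[::-1]   # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap the kept box too much
        order = order[1:][iou <= iou_thres]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the overlapping second box is suppressed
```

This runs on the host after inference, so it is unaffected by whether the network itself ran on CPU or NCS2.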
I have successfully converted the ONNX model to an OpenVINO model. Running detect.py with the yolov5_openvino_model weights works great; however, when I used the .xml/.bin files in an OpenVINO environment with the object detection code provided by Intel, it just returns a black screen in cv2.imshow(). Any idea on this?
@glennford49 if you have problems running Intel code you should probably raise that with Intel.
@glennford49 which OpenVINO environment are you using? The current export.py --include openvino converts to the OpenVINO 2022 format. Are you running OpenVINO on Windows or another system? If you're using OpenVINO 2021 or an earlier version, you need to convert to ONNX and use the Model Optimizer from OpenVINO to convert to the IR format. You may refer to this thread if you're interested in how I managed to solve my problem: https://github.com/openvinotoolkit/openvino/issues/11458
Hi @glennford49 @Averen19 @Humni @ca-schue @Sanoronas @violet17 @hghari sorry to resurrect an old thread. I've got an NCS2, but the documentation from Intel is absolutely dreadful (in my opinion). Maybe I should just give up and use my DepthAI/Luxonis device or a Jetson? Andrew
I have the same problem as you, and this troubled me for two days. I am using yolov5 tag v4.
wget https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt
@bt5-coder OpenVINO models should work with the NCS2 by setting L370 here to MYRIAD: Lines 361 to 374 in 27d831b
❔Question
Hello, I have successfully converted the trained YOLOv5 model to Intermediate Representation to use it with the NCS2. However, when I load the model onto the NCS2 it gives wrong results, which are all negative values. Loading the same model on the CPU runs without any problem and gives correct values. The question is: can YOLOv5 be used on the NCS2, and if yes, what are the right steps to make it work correctly?
Thanks in advance