This issue was moved to a discussion.
You can continue the conversation there.
How to decode an ONNX model and get final results on an image? #541
Comments
Doesn't the --verify option guarantee the same performance?
For the post-processing part of the ONNX model, do I need to write code to decode the ONNX model's output? Is there any relevant code for reference?
If you only use the quarter-offset shift, it is simple to implement even in C++. It will also work well if you write the post-processing in Python and append it after the last layer. However, it depends on which model is exported. What model are you working on?
Hi @HoBeom, I am using HRNet and got the N x M x 64 x 64 heatmap from the ONNX model. Which part of this library can I use to convert the 64x64 heatmaps to N x M x 2 values on the x and y axes?
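For readers landing on this thread: a minimal numpy sketch of what such a decoder can look like, i.e. a per-heatmap argmax plus the quarter-offset shift mentioned above. This is an illustration of the idea, not mmpose's actual implementation; the function name is made up.

```python
# Sketch only: decode an (N, M, H, W) heatmap tensor into (N, M, 3)
# keypoints (x, y, score) in heatmap pixels, with quarter-offset refinement.
import numpy as np

def decode_heatmaps(heatmaps: np.ndarray) -> np.ndarray:
    """heatmaps: (N, M, H, W) -> keypoints: (N, M, 3) in heatmap pixels."""
    n, m, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, m, -1)
    idx = flat.argmax(axis=2)                 # flat index of each peak
    scores = flat.max(axis=2)                 # peak value used as confidence
    xs = (idx % w).astype(np.float32)
    ys = (idx // w).astype(np.float32)
    # Quarter-offset: shift 0.25 px toward the larger neighbouring value.
    for i in range(n):
        for j in range(m):
            x, y = int(xs[i, j]), int(ys[i, j])
            hm = heatmaps[i, j]
            if 0 < x < w - 1:
                xs[i, j] += 0.25 * np.sign(hm[y, x + 1] - hm[y, x - 1])
            if 0 < y < h - 1:
                ys[i, j] += 0.25 * np.sign(hm[y + 1, x] - hm[y - 1, x])
    return np.stack([xs, ys, scores], axis=2)
```

The result is still in heatmap coordinates (64x64 here); mapping back to the original image needs the crop's center and scale, as discussed further down the thread.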
@GavinOneCup please check
Hi @HoBeom, I got the ONNX model of higher_hrnet32_coco_512_512 using pytorch2onnx.py, and I got its outputs (layer 3645 [1, 34, 128, 128], layer 3678 [1, 17, 256, 256]). But how do I parse these results to get the final 17 human-body keypoints?
Hi @jin-s13, how am I supposed to set parameters like unbiased and post_process? I used this function, but it gives me wrong x and y results, which do not seem to be caused by these parameters. Thanks.
@LSC333 Hi, the outputs you got in the (256x256) and (128x128) formats are called heatmaps. You can use the function discussed above to convert them to x and y coordinates. However, the function seems to have some problem: with the default setup it does not give me the correct x and y values. If you can get the correct values, please let me know. Thanks.
@GavinOneCup @LSC333 See mmpose/models/detectors/bottom_up.py, line 197 (commit b6092dd).
@jin-s13 |
For animal and face models, the decoding is the same as for the top-down body pose estimator, i.e.
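For context, a hedged numpy sketch of the inverse transform the top-down decoder performs: heatmap-space keypoints are mapped back into the original image using the crop's center and a scale stored in units of pixel_std = 200 (an mmpose/COCO convention). The function and variable names here are illustrative, not the library's API.

```python
# Sketch: map (K, 2) heatmap-pixel coordinates back to original-image pixels,
# given the crop center and the scale in pixel_std = 200 units.
import numpy as np

PIXEL_STD = 200.0  # mmpose/COCO top-down convention

def transform_back(coords, center, scale, heatmap_size):
    """coords: (K, 2) in heatmap pixels -> (K, 2) in original-image pixels."""
    scale_px = np.asarray(scale, dtype=np.float64) * PIXEL_STD  # box in px
    hm_w, hm_h = heatmap_size
    # One heatmap pixel spans scale_px / heatmap_size original pixels.
    factor = scale_px / np.array([hm_w, hm_h], dtype=np.float64)
    origin = np.asarray(center, dtype=np.float64) - scale_px / 2.0  # top-left
    return np.asarray(coords, dtype=np.float64) * factor + origin
```

For example, with center (100, 100), scale (0.64, 0.64) (a 128x128 px box) and a 64x64 heatmap, the heatmap center (32, 32) maps back to (100, 100).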
@jin-s13 Yeah I am using |
I want to double-check whether it is a bug in ONNX or in the code. Can you tell me how to run the checkpoint .pth file with the config file in the plain PyTorch way (i.e., call model() to get the result)?
After some experiments, I found that the results from the ONNX model differ from the results generated by top_down_img_demo.py. I inspected the outputs at line 313 of inference.py: even the heatmap outputs differ from the ONNX model's heatmaps. Can you explain why?
@GavinOneCup It seems that your ONNX conversion was not successful. Have you tried
@jin-s13 I tried it just now and got the result: "The numerical values are same between Pytorch and ONNX". To be clear, I think I have the same problem as LSC333 described above. The output I want is the keypoints, N x M x (x, y, prediction), and line 313 of inference.py can give me this result; it also gives me the heatmaps. However, the output of the ONNX model is just a heatmap of size N x M x 64 x 64, where 64 is my heatmap size, and this output differs from the heatmap values generated by inference.py.
I think what happens here is that model() is built by build_posenet(cfg.model) directly in inference.py, whereas in pytorch2onnx.py it is additionally converted by _convert_batchnorm(model). Does that mean I need to convert it back after I get the ONNX model?
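One possible cause of mismatched heatmaps (an assumption, not a confirmed diagnosis of this case) is comparing a model left in training mode against the exported eval-mode graph: BatchNorm normalizes with the current batch statistics in training mode but with the stored running statistics in eval mode. A toy numpy illustration:

```python
# Toy illustration: BatchNorm output differs between train and eval mode
# unless the running statistics happen to match the batch statistics.
import numpy as np

def batchnorm(x, running_mean, running_var, training, eps=1e-5):
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)   # current batch stats
    else:
        mean, var = running_mean, running_var        # stored running stats
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0], [3.0]])
train_out = batchnorm(x, running_mean=0.0, running_var=1.0, training=True)
eval_out = batchnorm(x, running_mean=0.0, running_var=1.0, training=False)
# train_out is centered on the batch; eval_out uses the stored stats.
```

So it is worth confirming model.eval() was called before generating the PyTorch-side heatmaps you compare against.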
@jin-s13 @GavinOneCup I used hrnet_w48_coco_256x192.onnx for some testing and finally seem to get the correct parsed result, but why is the scale divided by 200? If I want to predict on my own picture, what should I divide by?
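The 200 is pixel_std, a dataset convention from COCO top-down pipelines, not a function of your image size, so you keep dividing by 200 for your own pictures. A sketch of the conversion (the padding and aspect-ratio values are assumptions; check your config):

```python
# Sketch: convert a bbox (x, y, w, h) to the center/scale representation,
# with scale in units of PIXEL_STD = 200 (COCO top-down convention).
import numpy as np

PIXEL_STD = 200.0

def bbox_to_center_scale(x, y, w, h, aspect_ratio=192 / 256, padding=1.25):
    center = np.array([x + w / 2.0, y + h / 2.0])
    # Pad the box to the model's input aspect ratio (e.g. 192x256).
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    else:
        w = h * aspect_ratio
    scale = np.array([w, h]) / PIXEL_STD * padding
    return center, scale
```

For a 192x256 box at the origin this gives center (96, 128) and scale (1.2, 1.6).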
@LSC333 Hello, thanks for the blog post! So we should feed cropped persons to the ONNX model? Thanks in advance!
This is my understanding: for the top-down method, we need to pass the cropped person and the box of the crop (x, y, w, h) into the model, where x and y are the coordinates of the top-left corner.
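A rough numpy-only sketch of that cropping step (real pipelines use an affine warp, e.g. cv2.warpAffine with the center/scale representation; this nearest-neighbour version is just for illustration):

```python
# Sketch: crop the person box (x, y, w, h) from an image and resize it to
# the model input size using nearest-neighbour sampling.
import numpy as np

def crop_and_resize(img, x, y, w, h, out_w=192, out_h=256):
    """img: (H, W, 3) -> (out_h, out_w, 3) nearest-neighbour crop."""
    ys = np.clip((y + (np.arange(out_h) + 0.5) * h / out_h).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip((x + (np.arange(out_w) + 0.5) * w / out_w).astype(int),
                 0, img.shape[1] - 1)
    return img[ys[:, None], xs[None, :]]
```

Keep the (x, y, w, h) of the crop around: you need it to map the decoded keypoints back from heatmap space into the original image.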
After running pytorch2onnx.py I got the ONNX model, but I have not found any documentation on running inference with it to check the correctness of its results. How can I do this? Or do you have a reference inference script for the ONNX model? I need your help.
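An illustrative inference sketch with onnxruntime (the normalization constants are the usual ImageNet statistics and are an assumption here; check your config's pipeline): preprocess a cropped person image and run it through the exported model.

```python
# Sketch: run an exported pose model with onnxruntime on a cropped person.
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed ImageNet
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)   # normalization

def preprocess(img_rgb):
    """img_rgb: (H, W, 3) uint8 -> (1, 3, H, W) float32 network input."""
    x = img_rgb.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return np.ascontiguousarray(x.transpose(2, 0, 1)[None])

def run_onnx(model_path, img_rgb):
    import onnxruntime as ort  # imported lazily so the sketch loads without it
    sess = ort.InferenceSession(model_path)
    feed = {sess.get_inputs()[0].name: preprocess(img_rgb)}
    return sess.run(None, feed)  # list of output arrays (heatmaps)
```

The returned heatmaps then still need the decoding and coordinate transform discussed earlier in this thread to become keypoints in the original image.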