how to decode onnx model and get final results on image? #541

Closed · zcc720 opened this issue Mar 30, 2021 · 22 comments
Labels: kind/discussion (community discussion), onnx

zcc720 commented Mar 30, 2021

After running pytorch2onnx.py, I have the ONNX model, but I have not found any documentation on how to run inference with it to check the correctness of its results. How can I do this? Or do you have an inference script for the ONNX model that I can use as a reference? I need your help.

HoBeom (Contributor) commented Mar 30, 2021

Doesn't the --verify option guarantee the same results?

zcc720 (Author) commented Mar 30, 2021

> Doesn't the --verify option guarantee the same results?

For the post-processing of the ONNX model, do I need to write my own code to decode the model's output? Is there any relevant code for reference?

zcc720 changed the title from "how to inference onnx model" to "how to inference & decode onnx model" on Mar 30, 2021
zcc720 changed the title from "how to inference & decode onnx model" to "how to decode onnx model and get final results on image?" on Mar 30, 2021
HoBeom (Contributor) commented Mar 30, 2021

If you only need the quarter-pixel shift, it is simple to implement even in C++. It will also work if you write the post-processing in Python and apply it to the output of the last layer. However, it depends on which model is exported. Which model are you working on?
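For reference, here is a minimal NumPy sketch of this kind of heatmap decoding: take the per-joint argmax, then apply the quarter-pixel shift toward the higher neighbour. The function name decode_heatmaps is illustrative, not MMPose API, and the returned coordinates are in heatmap space (multiply by the heatmap stride to get image coordinates):

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Decode (N, K, H, W) heatmaps into (N, K, 3) keypoints (x, y, score).

    Takes the per-joint argmax, then shifts it a quarter pixel toward the
    higher of the two neighbouring heatmap values on each axis.
    """
    n, k, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, k, -1)
    idx = flat.argmax(axis=2)                    # (N, K) flat peak indices
    scores = flat.max(axis=2)                    # (N, K) peak values
    xs = (idx % w).astype(np.float32)
    ys = (idx // w).astype(np.float32)
    for i in range(n):
        for j in range(k):
            x, y = int(xs[i, j]), int(ys[i, j])
            hm = heatmaps[i, j]
            if 0 < x < w - 1:                    # quarter shift along x
                xs[i, j] += 0.25 * np.sign(hm[y, x + 1] - hm[y, x - 1])
            if 0 < y < h - 1:                    # quarter shift along y
                ys[i, j] += 0.25 * np.sign(hm[y + 1, x] - hm[y - 1, x])
    return np.stack([xs, ys, scores], axis=2)    # coordinates in heatmap space
```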

innerlee added the onnx and kind/discussion labels on Mar 31, 2021
GavinOneCup commented
Hi @HoBeom, I am using HRNet and got the NxMx64x64 heatmaps through the ONNX model. However, which part of this library can I use to convert the 64x64 heatmaps to NxMx2 x- and y-coordinates?

jin-s13 (Collaborator) commented May 22, 2021

@GavinOneCup please check

def keypoints_from_heatmaps(heatmaps,
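For reference, a minimal usage sketch, assuming the MMPose 0.x signature where the function is exposed via mmpose.core.evaluation and only the three positional arguments are used; the dummy heatmaps and the center/scale values here are placeholders:

```python
import numpy as np
from mmpose.core.evaluation import keypoints_from_heatmaps  # MMPose 0.x import path

# Stand-in for the (N, K, H, W) heatmap array produced by the ONNX model.
heatmaps = np.random.rand(1, 17, 64, 64).astype(np.float32)

# center/scale must be the SAME per-box values used when cropping the person;
# scale is in units of pixel_std = 200 (see the pixel_std discussion below).
center = np.array([[128.0, 128.0]])          # (N, 2) box centre in image pixels
scale = np.array([[256 / 200, 256 / 200]])   # (N, 2) box size divided by 200

preds, maxvals = keypoints_from_heatmaps(heatmaps, center, scale)
print(preds.shape, maxvals.shape)  # (1, 17, 2) image-space coords, (1, 17, 1) scores
```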

LSC333 commented May 24, 2021

Hi @HoBeom, I got the ONNX model of higher_hrnet32_coco_512_512 using pytorch2onnx.py, and I got its outputs (layer 3645: [1, 34, 128, 128], layer 3678: [1, 17, 256, 256]). But how do I parse these results to get the final 17 human-body keypoints?

GavinOneCup commented May 26, 2021

> @GavinOneCup please check
> def keypoints_from_heatmaps(heatmaps,

Hi @jin-s13, how am I supposed to set parameters like unbiased and post_process? I used this function, but it gives me wrong x and y results, and that does not seem to be caused by these parameters. Thanks.

GavinOneCup commented

> Hi @HoBeom, I got the ONNX model of higher_hrnet32_coco_512_512 using pytorch2onnx.py, and I got its outputs (layer 3645: [1, 34, 128, 128], layer 3678: [1, 17, 256, 256]). But how do I parse these results to get the final 17 human-body keypoints?

@LSC333 Hi, the (256x256) and (128x128) outputs you got are called heatmaps. You can use the function mentioned above to convert them to x and y coordinates. However, the function seems to have some problem: I used the default setup and it did not give me the correct x and y values. If you can get correct values, please let me know. Thanks.

jin-s13 (Collaborator) commented May 26, 2021

@GavinOneCup @LSC333 The above keypoints_from_heatmaps is for top-down approaches.
For bottom-up approaches, please refer to

def forward_test(self, img, img_metas, return_heatmap=False, **kwargs):
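For orientation, a rough sketch of what the bottom-up decoding starts with, assuming the usual HigherHRNet associative-embedding layout where the 34-channel output is 17 heatmaps plus 17 tag maps; the actual grouping of joints into persons is non-trivial, so reusing MMPose's forward_test is strongly recommended:

```python
import cv2
import numpy as np

# Stand-ins matching the shapes LSC333 reports from the HigherHRNet ONNX model:
out_low = np.random.rand(1, 34, 128, 128).astype(np.float32)   # 17 heatmaps + 17 tag maps
out_high = np.random.rand(1, 17, 256, 256).astype(np.float32)  # 17 refined heatmaps

heatmaps_low, tags = out_low[:, :17], out_low[:, 17:]

# Upsample the low-resolution heatmaps and average them with the refined ones.
up = np.stack([cv2.resize(heatmaps_low[0, k], (256, 256)) for k in range(17)])[None]
avg_heatmaps = (up + out_high) / 2

# Peak-finding on avg_heatmaps yields candidate joints; the `tags` embeddings are
# then used to group joints into person instances, as done in forward_test above.
```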

GavinOneCup commented

@jin-s13
Hi, but I am not using either the top-down or the bottom-up body models. I am using the animal, body3d, and face models. How should I decode those?

jin-s13 (Collaborator) commented May 26, 2021

For animal and face models, the decoding is the same as for the top-down body pose estimator, i.e. keypoints_from_heatmaps.

GavinOneCup commented May 26, 2021

@jin-s13 Yeah, I am using keypoints_from_heatmaps. However, the results are wrong: they are not the keypoints that the code generates without ONNX. The keypoints' x and y values are completely wrong.

GavinOneCup commented
I want to double-check whether it is an ONNX bug or a bug in my code. Can you tell me how to run the checkpoint .pth file with the config file in the classic PyTorch way (i.e., call model() to get the result)?

jin-s13 (Collaborator) commented May 26, 2021

We have provided demos, and tutorials on how to run them.

GavinOneCup commented May 27, 2021

@jin-s13

After some experiments, I found that the results from the ONNX model differ from the results generated by top_down_img_demo.py. I checked the outputs at line 313 of inference.py: even the heatmap outputs differ from the ONNX model's heatmaps. Can you explain why?

jin-s13 (Collaborator) commented May 27, 2021

@GavinOneCup It seems that your ONNX conversion was not successful. Have you tried the --verify option in pytorch2onnx?

GavinOneCup commented May 27, 2021

@jin-s13 I tried it just now. I got the result: "The numerical values are same between Pytorch and ONNX".

To be clear, I think I have the same problem as LSC333 described above.

The output I want is keypoints of shape N * M * (x, y, confidence). The output at line 313 of inference.py gives me this result; it also gives me the heatmaps.

However, the output of the ONNX model is just heatmaps of size N * M * 64 * 64, where 64 is my heatmap size. And this output differs from the heatmap values generated by inference.py.

GavinOneCup commented
@jin-s13

I think what happened here is that in inference.py the model is built directly by build_posenet(cfg.model), whereas in pytorch2onnx.py it is additionally converted by _convert_batchnorm(model). It seems I need to convert it back after exporting the ONNX model?
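A quick way to check this is to compare the raw network outputs directly, which is essentially what --verify does. A hedged sketch, assuming a top-down model with the forward_dummy method that pytorch2onnx.py uses; the config and file names are placeholders:

```python
import numpy as np
import onnxruntime as ort
import torch
from mmcv import Config
from mmcv.runner import load_checkpoint
from mmpose.models import build_posenet

cfg = Config.fromfile('hrnet_w48_coco_256x192.py')        # placeholder config path
model = build_posenet(cfg.model)
load_checkpoint(model, 'hrnet_w48_coco_256x192.pth',      # placeholder checkpoint
                map_location='cpu')
model.eval()

x = torch.randn(1, 3, 256, 192)
with torch.no_grad():
    torch_out = model.forward_dummy(x).numpy()            # raw heatmaps, no decoding

sess = ort.InferenceSession('hrnet_w48_coco_256x192.onnx')
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x.numpy()})[0]

# If the export is correct, the difference should be tiny (~1e-5).
print(np.abs(torch_out - onnx_out).max())
```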

LSC333 commented May 27, 2021

@jin-s13 @GavinOneCup I used hrnet_w48_coco_256x192.onnx to do some testing and finally seem to get the correct decoded results, but why is the scale divided by 200? If I want to run prediction on my own picture, what should I divide by?
The specific steps are in my blog.

jin-s13 (Collaborator) commented May 27, 2021

@LSC333 Please refer to #205 for more information about pixel_std.
In MMPose, pixel_std is always set to 200, and you should always use pixel_std=200, no matter what the input image is.
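For illustration, a sketch of the box-to-center/scale conversion, modeled on MMPose's _xywh2cs helper; the function name box_to_center_scale is illustrative:

```python
import numpy as np

def box_to_center_scale(x, y, w, h, image_size=(192, 256),
                        pixel_std=200.0, padding=1.25):
    """Convert an (x, y, w, h) box to the center/scale pair MMPose expects.

    Modeled on MMPose's _xywh2cs: fix the aspect ratio to the model input,
    divide the box size by pixel_std (always 200), and add padding.
    """
    aspect_ratio = image_size[0] / image_size[1]  # model input width / height
    center = np.array([x + w * 0.5, y + h * 0.5], dtype=np.float32)
    if w > aspect_ratio * h:
        h = w / aspect_ratio
    else:
        w = h * aspect_ratio
    scale = np.array([w / pixel_std, h / pixel_std], dtype=np.float32) * padding
    return center, scale
```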

jovenwayfarer commented
@LSC333 Hello, thanks for the blog! So should we input cropped persons to the ONNX model? Thanks in advance!

LSC333 commented Jul 15, 2021

> @LSC333 Hello, thanks for the blog! So should we input cropped persons to the ONNX model? Thanks in advance!

This is my understanding: for the top-down method, we need to feed the cropped person and the crop box (x, y, w, h) into the model, where x and y refer to the coordinates of the top-left corner.
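Putting it together, a rough end-to-end sketch for a top-down model, assuming the usual 256x192 input with ImageNet normalisation; note that MMPose itself crops with an affine warp built from center/scale rather than the naive slice-and-resize used here, and the file names are placeholders:

```python
import cv2
import numpy as np
import onnxruntime as ort

img = cv2.imread('person.jpg')       # placeholder input image
x, y, w, h = 100, 50, 120, 240       # person box: top-left corner + width/height

# Naive crop-and-resize; MMPose uses an affine warp via center/scale instead.
crop = cv2.resize(img[y:y + h, x:x + w], (192, 256))
inp = crop[:, :, ::-1].astype(np.float32) / 255.0              # BGR -> RGB
inp = (inp - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]    # ImageNet normalisation
inp = inp.transpose(2, 0, 1)[None].astype(np.float32)          # (1, 3, 256, 192)

sess = ort.InferenceSession('hrnet_w48_coco_256x192.onnx')
heatmaps = sess.run(None, {sess.get_inputs()[0].name: inp})[0]
# Decode with keypoints_from_heatmaps(heatmaps, center, scale), where center and
# scale come from the same (x, y, w, h) via a helper like box_to_center_scale above.
```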

open-mmlab locked and limited conversation to collaborators on Aug 11, 2021
ly015 closed this as completed on Aug 11, 2021

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
