
Which part of the code generates the test images? #3

Closed
1691077362 opened this issue Nov 23, 2021 · 6 comments

Comments

@1691077362

Hi,
Thanks for sharing the code!
Why did testing with MSMT17 only produce mAP numbers, and not the generated images? Which part of the code generates the test images?

@1691077362
Author

Hello,
Thanks for sharing the code!
Why did testing with MSMT17 only produce mAP numbers, and not the generated images? Also, which part of the code generates the images after testing? @CHENGY12

@CHENGY12
Owner

Thanks for your interest in our paper.
APNet is a discriminative model for the person ReID task. Given a benchmark (including MSMT), APNet extracts the features of both query and gallery images and matches them for evaluation. This evaluation process is not related to image generation. The details can be found in the paper. Hope you find it helpful.
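To make the evaluation step concrete, here is a minimal NumPy sketch of query-to-gallery matching of the kind described above. This is an illustration of the general ReID evaluation procedure, not APNet's actual code: features are L2-normalized and gallery images are ranked by cosine similarity to each query, which is what metrics like mAP are then computed over.

```python
import numpy as np

def rank_gallery(query_feats, gallery_feats):
    """Rank gallery images for each query by cosine similarity.

    query_feats:   (num_query, dim) array of query features
    gallery_feats: (num_gallery, dim) array of gallery features
    Returns gallery indices per query, best match first.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                      # (num_query, num_gallery) similarity matrix
    return np.argsort(-sim, axis=1)    # sort each row by descending similarity

# Toy example: query 0 points along the same direction as gallery image 1
query = np.array([[1.0, 0.0], [0.0, 1.0]])
gallery = np.array([[0.1, 0.9], [0.9, 0.1], [0.7, 0.7]])
print(rank_gallery(query, gallery)[0])  # → [1 2 0]
```

No images are produced anywhere in this pipeline, which is why running the test script yields only evaluation numbers.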

@1691077362
Author

Sorry, I may not have expressed myself clearly: what I want is the code for visualizing the data.

@1691077362
Author

@CHENGY12 Can you share the code for your visualization part? I can't work it out on my own, thanks

@Gutianpei
Collaborator

> @CHENGY12 Can you share the code for your visualization part? I can't work it out on my own, thanks

Hello,

As mentioned in our paper, we used Grad-CAM to visualize the attention map. Please consider using the open-source implementation of Grad-CAM such as https://github.com/jacobgil/pytorch-grad-cam.
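For reference, the core Grad-CAM weighting step that such libraries implement can be sketched in a few lines. This is a simplified, library-independent illustration (shapes and the toy inputs below are made up for the example): each feature-map channel is weighted by its spatially averaged gradient, the weighted channels are summed, and negative evidence is clipped with a ReLU before normalization.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM step.

    activations: (C, H, W) feature maps from the chosen conv layer
    gradients:   (C, H, W) gradients of the target score w.r.t. them
    Returns an (H, W) attention map scaled to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # (C,) per-channel importance
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for overlaying on the image
    return cam

# Toy check: only channel 0 receives gradient, so its activation dominates
acts = np.zeros((2, 4, 4)); acts[0, 1, 1] = 1.0
grads = np.zeros((2, 4, 4)); grads[0] = 1.0
cam = grad_cam(acts, grads)
print(cam[1, 1])  # → 1.0 (the activated location is the attention peak)
```

In practice the library linked above handles hooking the chosen layer, capturing activations and gradients during the backward pass, and resizing the map to the input image.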

@1691077362
Author

@Gutianpei thank you
