Problem on the visualization #7
Dear @linzhlalala,
Thanks for your interest in our paper! In our experiment, we visualize the attention weights of the first cross-attention layer. To deal with the multiple heads, we select the attention weights from the head with the largest differences in its attention values. The visualization code will be cleaned up and released here in the future.
Best,
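The repository's actual visualization code is not published in this thread, but the head-selection step described above can be sketched roughly as follows. This is a minimal illustration, assuming "largest differences" means the max-minus-min spread of attention values within each head (the paper's exact criterion may differ):

```python
import numpy as np

def select_head(attn):
    """Pick the cross-attention head whose weights vary the most.

    attn: array of shape (num_heads, num_queries, num_keys),
          e.g. the first cross-attention layer's weights.
    Returns the (num_queries, num_keys) map of the selected head.
    """
    # Spread of attention values per head; interpretation of
    # "largest differences" is an assumption here.
    spread = attn.max(axis=(1, 2)) - attn.min(axis=(1, 2))
    return attn[spread.argmax()]

# Toy example: 4 heads, 3 text tokens attending over 5 image regions.
rng = np.random.default_rng(0)
attn = rng.random((4, 3, 5))
attn /= attn.sum(axis=-1, keepdims=True)  # rows sum to 1, like softmax output
head_map = select_head(attn)              # shape (3, 5), ready to overlay on the image
```

The selected `head_map` can then be reshaped to the image's spatial grid and upsampled to produce the attention overlay.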
Thanks for sharing.
Hi, any progress?
I don't need the code. The idea is helpful enough. Thanks.
Hi Zhihong,
Thank you for sharing your code.
I am interested in the image-text attention visualization part of the paper. Can you share which approach you used for this (another repository or code)? I have been trying to do this myself but haven't found a solution for Transformer-based models.