
The shape of vector attention map #2

Open
Liu-Feng opened this issue Jan 17, 2021 · 6 comments

Comments

@Liu-Feng

Liu-Feng commented Jan 17, 2021

Firstly, thanks for your awesome work!!!
I have a question: if the vector attention can modulate individual feature channels, the attention map should have an axis whose dimension is the same as the feature dimension of the corresponding value (V). Based on Eq. (2) in the paper, if the shape of the query, key, and value is [batch_size, num_points, num_dim], then the shape of the vector attention map should be [batch_size, num_points, num_points, num_dim].
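For concreteness, a minimal shape sketch of that reading of Eq. (2) (plain PyTorch with made-up sizes; attn_mlp here is a placeholder, not the repo's actual module):

```python
import torch
import torch.nn as nn

batch_size, num_points, num_dim = 2, 16, 8

q = torch.randn(batch_size, num_points, num_dim)
k = torch.randn(batch_size, num_points, num_dim)
v = torch.randn(batch_size, num_points, num_dim)

# pairwise relation between every query i and every key j
rel = q[:, :, None, :] - k[:, None, :, :]        # [B, N, N, D]

# MLP that keeps the channel axis, so each feature channel gets its own weight
attn_mlp = nn.Sequential(nn.Linear(num_dim, num_dim), nn.ReLU(), nn.Linear(num_dim, num_dim))
attn = attn_mlp(rel)                              # [B, N, N, D]  <- the vector attention map
attn = attn.softmax(dim=2)                        # normalize over the key index j

out = (attn * v[:, None, :, :]).sum(dim=2)        # [B, N, D]
print(attn.shape, out.shape)
```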
Looking forward to your reply!

@lucidrains
Owner

@Liu-Feng Hi Feng! Admittedly, I wrote this repository and forgot to follow up and double-check whether it is correct.

Do you mean to say it should be like https://github.com/lucidrains/point-transformer-pytorch/pull/3/files

@Liu-Feng
Author

@lucidrains Thanks!! If the last dimension of attn_mlp is dim rather than 1, I think it could be used to modulate individual feature channels. But I cannot confirm which axis the softmax should be applied over. Thanks for your reply!
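To illustrate the distinction being discussed (a hedged sketch; these nn.Sequential definitions are hypothetical, not the repo's exact attn_mlp): a scalar-attention MLP ends in a single output channel, while the vector-attention version keeps dim output channels so individual feature channels can be modulated separately.

```python
import torch.nn as nn

dim = 64

# scalar attention: one weight per (query, key) pair, shared across all channels
scalar_attn_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

# vector attention: one weight per (query, key, channel), so each channel is modulated individually
vector_attn_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
```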

@lucidrains
Owner

@Liu-Feng i'm pretty sure the softmax will be across the similarities of queries against all the keys (dimension j)
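In shape terms this would look like the following (a sketch, assuming the attention tensor is laid out as [batch, queries i, keys j, channels]):

```python
import torch

attn = torch.randn(2, 16, 16, 8)     # [B, i (queries), j (keys), D]
attn = attn.softmax(dim=2)           # normalize across all keys j for each query i and channel

# every (query, channel) pair now holds weights over the keys that sum to 1
print(torch.allclose(attn.sum(dim=2), torch.ones(2, 16, 8)))
```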

I've merged the PR, so do let me know whether it works or doesn't work in your training.

Thank you!

@Liu-Feng
Author

Thanks for your reply!!! I have not trained the model yet. I am trying to build the PT (Point Transformer) in TensorFlow.

@lucidrains
Owner

@Liu-Feng hello! how's your tensorflow port going? are you more certain that you had the correct hunch here?

@Liu-Feng
Author

Liu-Feng commented Feb 28, 2021

Hello, I carefully read the Point Transformer paper, and the vector attention is applied on a local patch of the point cloud (as pointed out in the paper). Applying vector attention within a local patch reduces the memory cost, since the number of points in a local patch is much smaller than in the whole point cloud.

If the number of down-sampled points is N, the dimension of the learned feature is D, and B is the batch size, then the feature shape is [B, N, D] and the grouped feature shape is [B, N, K, D], where K is the number of points in each local patch (the K of KNN).

Thus the attention may be computed between [B, N, D] and [B, N, K, D]: the shapes of the Query, Key, and Value are [B, N, D], [B, N, K, D], and [B, N, K, D], respectively. The attention weight after subtraction and mapping is [B, N, K, D] ([B, N, D] broadcast against [B, N, K, D], like the relative coordinates used for point cloud grouping in PointNet++). Then the Hadamard product is taken between the attention weight ([B, N, K, D]) and the value ([B, N, K, D]), followed by a reduce_sum over the K dimension (axis=2). Therefore, the shape of the output feature is [B, N, D] (after the reduction over dim 2).
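A rough sketch of this shape flow under those assumptions (written in PyTorch rather than TensorFlow, with made-up sizes; the KNN grouping/gather step is omitted):

```python
import torch
import torch.nn as nn

B, N, K, D = 2, 128, 16, 32

q = torch.randn(B, N, D)        # one query per down-sampled point
k = torch.randn(B, N, K, D)     # keys of the K neighbours of each point (after KNN grouping)
v = torch.randn(B, N, K, D)     # values of the K neighbours of each point

# subtraction, broadcast over the K neighbours (analogous to relative coordinates in PointNet++)
rel = q[:, :, None, :] - k                        # [B, N, K, D]

# mapping to per-channel attention weights, normalized over the K neighbours
attn_mlp = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, D))
attn = attn_mlp(rel).softmax(dim=2)               # [B, N, K, D]

# Hadamard product with the values, then sum over the neighbour axis
out = (attn * v).sum(dim=2)                       # [B, N, D]
print(out.shape)
```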

In this way, the vector weight can refine every channel of the Value. But it works like attention rather than self-attention. All these steps are based on my understanding of the paper.

The real workflow of the Point Transformer may differ from my understanding, and the truth can only be uncovered once the authors release the source code.

By the way, some interesting things can be found in the original vector attention paper (whose code has been released), which is written by the same authors as the Point Transformer.

Have a good day!!
