Attention score calculation in Knowledge-aware Attention #43

Open
kinglai opened this issue Jan 30, 2021 · 4 comments

Comments


kinglai commented Jan 30, 2021

In Knowledge-aware Attention, is the attention score fixed during one epoch of training?

Because the attentive Laplacian matrix is used for attention, and it is only updated in the KG training phase of the schedule.

Thanks.
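
For anyone landing here, this is the schedule as I understand it from the paper and the repo. A minimal NumPy/SciPy sketch, not the released TensorFlow code; `cf_step`, `kg_step`, `attention_scores`, `all_triples`, and `n_nodes` are hypothetical names standing in for the corresponding pieces of the repo:

```python
import numpy as np
import scipy.sparse as sp

def softmax_by_head(triples, scores, n):
    """Softmax-normalize raw scores over each head's outgoing triples."""
    heads, _, tails = triples.T
    exp = np.exp(scores - scores.max())      # global shift for stability
    denom = np.zeros(n)
    np.add.at(denom, heads, exp)             # sum of exp per head node
    vals = exp / denom[heads]
    return sp.coo_matrix((vals, (heads, tails)), shape=(n, n)).tocsr()

def train_one_epoch(model, cf_batches, kg_batches):
    # Phase 1: CF training. The attentive Laplacian model.A is held
    # fixed for every mini-batch in this phase.
    for batch in cf_batches:
        model.cf_step(batch)                 # hypothetical optimizer step

    # Phase 2: KG (TransR) training, which updates the entity and
    # relation embeddings that the attention scores depend on.
    for batch in kg_batches:
        model.kg_step(batch)                 # hypothetical optimizer step

    # Phase 3: re-score every triple with the freshly trained embeddings
    # and rebuild the sparse attentive Laplacian. This is the only point
    # in the epoch where model.A changes.
    scores = model.attention_scores(model.all_triples)   # hypothetical
    model.A = softmax_by_head(model.all_triples, scores, model.n_nodes)
```

So within one epoch the CF part always propagates with the same attention matrix; only Phase 3 changes it.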


kinglai commented Jan 30, 2021

Considering only the CF part, is the released code equivalent to the NGCF model (Neural Graph Collaborative Filtering)?
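
For comparison, these are the two propagation rules as I read them from the respective papers (notation adapted, so treat this as a sketch rather than a verbatim quote). With the attention weights frozen, KGAT's CF part has the same bi-interaction form as NGCF; the difference is the neighbor weight, a learned $\tilde{\pi}(h,r,t)$ versus NGCF's fixed symmetric normalization:

```latex
% KGAT bi-interaction aggregator, with the attended neighborhood
% e_{N_h} = \sum_{(h,r,t) \in N_h} \tilde{\pi}(h,r,t)\, e_t :
e_h^{(l+1)} = \mathrm{LeakyReLU}\!\big(W_1 (e_h^{(l)} + e_{N_h}^{(l)})\big)
            + \mathrm{LeakyReLU}\!\big(W_2 (e_h^{(l)} \odot e_{N_h}^{(l)})\big)

% NGCF propagation: same bi-interaction form, but the neighbor weight
% is the fixed normalization 1/\sqrt{|N_u||N_i|}, not a learned score:
e_u^{(l+1)} = \mathrm{LeakyReLU}\!\Big(W_1 e_u^{(l)}
            + \textstyle\sum_{i \in N_u} \tfrac{1}{\sqrt{|N_u||N_i|}}
              \big(W_1 e_i^{(l)} + W_2 (e_i^{(l)} \odot e_u^{(l)})\big)\Big)
```

So "equal to NGCF" is close in structure but not exact, even within one epoch, because the frozen attention weights still differ from NGCF's normalization.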


RileyLee95 commented Nov 5, 2021

It seems the attention matrix A is kept the same across all propagation layers (within each epoch). I am wondering: shouldn't we calculate the attention matrix for each layer separately, based on the node embeddings in that layer, according to Equations (4) and (5)? @xiangwang1223
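
To make the proposal concrete, here is a sketch (mine, not the repo's code) of per-layer recomputation of Eqs. (4) and (5); `W_r`, `e_r`, and `triples` follow the paper's notation, `W_r` is assumed square, and the aggregation step is simplified:

```python
import numpy as np
import scipy.sparse as sp

def attention_scores(e, W_r, e_r, triples):
    """Eq. (4): pi(h,r,t) = (W_r e_t)^T tanh(W_r e_h + e_r)."""
    h, r, t = triples.T
    Wh = np.einsum('nij,nj->ni', W_r[r], e[h])   # W_r e_h, per triple
    Wt = np.einsum('nij,nj->ni', W_r[r], e[t])   # W_r e_t, per triple
    return np.sum(Wt * np.tanh(Wh + e_r[r]), axis=1)

def softmax_by_head(triples, scores, n):
    """Eq. (5): softmax of pi over each head's outgoing triples."""
    h, _, t = triples.T
    exp = np.exp(scores - scores.max())
    denom = np.zeros(n)
    np.add.at(denom, h, exp)
    return sp.coo_matrix((exp / denom[h], (h, t)), shape=(n, n)).tocsr()

def propagate_per_layer(e, W_r, e_r, triples, n_layers):
    # The proposal in this comment: rebuild A from *this layer's*
    # embeddings before every propagation step, instead of reusing
    # one epoch-level matrix for all layers.
    for _ in range(n_layers):
        A = softmax_by_head(triples,
                            attention_scores(e, W_r, e_r, triples),
                            e.shape[0])
        e = A @ e    # simplified; KGAT adds aggregator weights + LeakyReLU
    return e
```

The trade-off is that every forward pass now scores all triples once per layer, which is what the epoch-level update avoids.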

@Cinderella1001

> It seems the attention matrix A is kept the same across all propagation layers (within each epoch). I am wondering: shouldn't we calculate the attention matrix for each layer separately, based on the node embeddings in that layer, according to Equations (4) and (5)? @xiangwang1223

I have the same question. Moreover, the attention scores on the knowledge graph are very expensive to compute, because the process uses too much memory. Could it be optimised with multiprocessing?
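
Before reaching for multiprocessing, chunking the triple scoring may be the simpler fix: it is usually the per-triple intermediates, not the embedding tables, that blow up memory. A minimal sketch, assuming the embeddings fit in RAM; `batch_size` is a tunable guess:

```python
import numpy as np

def attention_scores_batched(e, W_r, e_r, triples, batch_size=100_000):
    """Score triples chunk by chunk: peak memory for the intermediates
    is O(batch_size * d) instead of O(|triples| * d)."""
    out = np.empty(len(triples))
    for s in range(0, len(triples), batch_size):
        h, r, t = triples[s:s + batch_size].T
        Wh = np.einsum('nij,nj->ni', W_r[r], e[h])
        Wt = np.einsum('nij,nj->ni', W_r[r], e[t])
        out[s:s + batch_size] = np.sum(Wt * np.tanh(Wh + e_r[r]), axis=1)
    return out
```

Multiprocessing could still help on top of this, but since each chunk is already vectorized, the main win of batching is capping peak memory rather than speed.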

@Cinderella1001

The attention score is indeed fixed during one epoch of training, but the attention score in the next epoch (also fixed) differs from the one in the current epoch. Updating the attention score across the layers within an epoch might yield slightly better performance; however, in my opinion, the attention score is updated this way to avoid out-of-memory errors.
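
A rough back-of-envelope supports this (illustrative, hypothetical numbers, not measurements): recomputing attention inside every forward pass multiplies the number of scoring passes by layers times batches per epoch.

```python
# Illustrative numbers only, not measurements.
n_triples         = 2_500_000   # order of magnitude of the Amazon-book KG
n_layers          = 3
batches_per_epoch = 1_000       # hypothetical

per_epoch_updates = 1                               # epoch-level rescoring
per_layer_updates = n_layers * batches_per_epoch    # rescore every forward pass
print(per_layer_updates / per_epoch_updates)        # 3000.0x more scoring passes
```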
