Thanks for sharing your code.
I have run your code on a click-through rate prediction task in an item recommendation scenario. In my setting, each item is treated as a token, and a user's earlier interactions with items form a sentence. I then use DiSAN to encode the user's interaction sequence. If I replace the pre-trained item embeddings with randomly initialized ones, the AUC drops sharply.
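For reference, this is roughly how I switch between the two initializations (a minimal TensorFlow 1.x sketch; the file name `item_emb.npy`, the variable names, and the sizes are placeholders from my own setup, not part of the DiSAN repo):

```python
import numpy as np
import tensorflow as tf

vocab_size, emb_dim = 50000, 300  # my item vocabulary size and embedding dimension

# Option A: start from item embeddings pre-trained on interaction logs
pretrained_item_emb = np.load('item_emb.npy')  # shape (vocab_size, emb_dim)
item_emb = tf.get_variable(
    'item_emb',
    initializer=pretrained_item_emb.astype(np.float32),
    trainable=True)

# Option B: random initialization -- this is the variant where AUC drops sharply
item_emb_rand = tf.get_variable(
    'item_emb_rand',
    shape=(vocab_size, emb_dim),
    initializer=tf.glorot_uniform_initializer())
```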
So I wonder whether DiSAN is suitable for training token embeddings while simultaneously encoding sentences. If not, then a good pre-trained embedding is indeed necessary for DiSAN to achieve its excellent performance.
To verify the difference before and after tuning the token embeddings, I compared the two versions and found that the embeddings change only subtly, especially when I compute each item's most similar items from the embeddings.
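Concretely, I compare the neighbour lists like this (a sketch of my check; the `.npy` file names and the query item id are placeholders):

```python
import numpy as np

def top_k_similar(emb, item_id, k=10):
    """Return the k nearest items to item_id by cosine similarity."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[item_id]
    order = np.argsort(-sims)               # indices sorted by descending similarity
    return [i for i in order if i != item_id][:k]  # drop the query item itself

emb_before = np.load('item_emb.npy')        # pre-trained item embeddings
emb_after = np.load('item_emb_tuned.npy')   # the same table after DiSAN training

query = 42  # hypothetical item id
overlap = set(top_k_similar(emb_before, query)) & set(top_k_similar(emb_after, query))
print(f'top-10 neighbour overlap for item {query}: {len(overlap)}/10')
```

The overlap stays close to 10/10 for most items I probe, which is what I mean by the change being subtle.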