When calculating the similarity loss between two sentences, it looks like we are using the averaged word embeddings per sentence. Within `models.SDR.similarity_modeling.SimilarityModeling` we have the following:

It appears we are also averaging in the embeddings of the padded tokens, since sentence lengths aren't taken into account anywhere. Was this done by design, perhaps?
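For comparison, one common way to exclude padding is a masked mean over the attention mask. Below is a minimal NumPy sketch of that idea; the function name and tensor shapes are my own illustration, not taken from the repo:

```python
import numpy as np

def masked_mean(embeddings, mask):
    """Average token embeddings per sentence, ignoring padded positions.

    embeddings: (batch, seq_len, dim) token embeddings
    mask:       (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    mask = mask[..., None].astype(embeddings.dtype)  # (batch, seq_len, 1)
    summed = (embeddings * mask).sum(axis=1)         # zero out padding, then sum
    counts = np.clip(mask.sum(axis=1), 1, None)      # avoid division by zero
    return summed / counts

# Example: the second sentence has one padded token
emb = np.array([[[1.0], [3.0], [5.0]],
                [[2.0], [4.0], [9.0]]])  # 9.0 is a padding embedding
mask = np.array([[1, 1, 1],
                 [1, 1, 0]])
print(masked_mean(emb, mask))  # [[3.], [3.]]
```

With a plain mean over `seq_len`, the second sentence would instead average in the padding value and give 5.0 rather than 3.0, which is the discrepancy described above.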