Conversation

yuukidach (Contributor) commented Dec 26, 2023

This PR skips the embedding calculation when the embedding weight for answer correctness is set to 0.

Currently, answer correctness relies on both factuality and semantic similarity. However, for most embedding models the similarity between any two embeddings is greater than 0, which effectively introduces a bias into the rating system. In practical use I therefore tend to turn off the similarity evaluation and rely on the factuality evaluation alone.

Do you think it would be better to allow the embedding step to be turned off like this, or to add a separate factuality metric?
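For illustration, the optimization can be sketched as follows. This is a simplified, hypothetical sketch, not ragas' actual API: the function name, the `weights` layout, and the `similarity_fn` callback are all assumptions made for this example. The point is simply that when the similarity weight is 0, the (expensive) embedding call is never made, instead of being computed and multiplied by 0.

```python
# Hypothetical sketch of the optimization in this PR; names are
# illustrative and do not reflect ragas' real implementation.
from typing import Callable, List


def answer_correctness(
    factuality: float,
    weights: List[float],                # assumed layout: [w_factuality, w_similarity]
    similarity_fn: Callable[[], float],  # lazily invoked embedding-similarity call
) -> float:
    w_fact, w_sim = weights
    if w_sim == 0:
        # Skip the embedding computation entirely: this saves the cost of
        # the embedding call and avoids the positive-similarity bias.
        return factuality
    # Otherwise combine factuality and similarity as a weighted average.
    sim = similarity_fn()
    return (w_fact * factuality + w_sim * sim) / (w_fact + w_sim)
```

With `weights=[1.0, 0.0]` the `similarity_fn` is never invoked, so no embeddings are computed for that evaluation.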

shahules786 (Member) commented Dec 26, 2023

> . However, for most embedding models, the similarity between any two embeddings will not be larger than 0

Can you explain, @yuukidach?

yuukidach (Contributor, Author) commented

> . However, for most embedding models, the similarity between any two embeddings will not be larger than 0
>
> Can you explain @yuukidach

@shahules786 My mistake, I meant "will be larger than 0".

shahules786 (Member) commented

Excellent work, @yuukidach. I like this optimization. Thank you.

@shahules786 shahules786 merged commit 914a350 into explodinggradients:main Dec 29, 2023
@yuukidach yuukidach deleted the feat/metrics/ac branch January 2, 2024 03:31