I noticed that the probability of the final <|endoftext|> token is included in scoring. For the purposes of scoring sentences, it seems to me that it would be more correct (for most use cases) to omit that one, because it doesn't really matter whether or not more text follows a given sentence. Doesn't the probability of an <|endoftext|> token following a sentence depend on the (somewhat arbitrary) details of how text was broken up for training?
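To make the question concrete, here is a minimal sketch of how a sentence score could be computed with and without the trailing <|endoftext|> term. This is not this project's implementation; it uses the Hugging Face transformers GPT-2 API, and the `sentence_log_prob` helper and `include_eos` flag are illustrative names:

```python
# Hypothetical sketch: sentence log-probability with GPT-2, optionally
# including the final <|endoftext|> token in the score.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(text: str, include_eos: bool = True) -> float:
    """Sum of token log-probabilities for `text`, optionally also scoring
    the trailing <|endoftext|> token."""
    eos_id = tokenizer.eos_token_id
    ids = tokenizer.encode(text)              # tokens of the sentence itself
    # Prepend <|endoftext|> as context and append it as a final target.
    input_ids = torch.tensor([[eos_id] + ids + [eos_id]])
    with torch.no_grad():
        logits = model(input_ids).logits      # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    targets = input_ids[0, 1:]                # each position predicts the next token
    token_lp = log_probs[0, :-1].gather(1, targets.unsqueeze(1)).squeeze(1)
    if not include_eos:
        token_lp = token_lp[:-1]              # drop the final <|endoftext|> term
    return token_lp.sum().item()

print(sentence_log_prob("The cat sat on the mat.", include_eos=True))
print(sentence_log_prob("The cat sat on the mat.", include_eos=False))
```

The difference between the two calls is exactly the log-probability the model assigns to <|endoftext|> after the sentence, which is the term I am suggesting could be omitted.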