Stop generation with Continuation when a specific string was generated #187
Conversation
```diff
 )
 assert isinstance(sequence, str)

-prompts = ["Write a short sentence", "And another one"]
+prompts = ["Write a short sentence ", "And another one "]
```
I am just curious why you added whitespace padding? According to guidance, it's preferable to end prompts without a trailing space or newline, because the frequent tokens already come with a space before the word.
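For example, a quick check with the Hugging Face `transformers` GPT-2 tokenizer (the prompt below is just an illustration, not taken from this PR) shows the difference a trailing space makes:

```python
# Illustrative sketch only: compare how a prompt tokenizes with and
# without a trailing space under GPT-2's byte-level BPE.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Without the trailing space, the next generated token can be a word
# token that already includes the leading space; with it, that space
# has already been consumed by the prompt.
print(tokenizer.tokenize("Write a short sentence"))
print(tokenizer.tokenize("Write a short sentence "))
```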
Good question, the padding came intuitively when working with GPT2 and numbers. I'll look at the vocabulary directly to see whether that's actually the right thing to do.
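A rough sketch of what that check might look like (assuming the `transformers` GPT-2 tokenizer, where byte-level BPE marks a leading space with "Ġ"):

```python
# Sketch only: count the GPT-2 vocabulary entries that begin with the
# byte-level BPE space marker "Ġ".
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
vocab = tokenizer.get_vocab()

with_space = sum(token.startswith("Ġ") for token in vocab)
print(f"{with_space} of {len(vocab)} tokens start with a space marker")
```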
I came back to this and looked at the vocabulary of the GPT2 tokenizer. It is true that most of the tokens begin with a space.
Your point highlights something that we need to be very careful about, and which might be incorrectly implemented in outlines.
It should not affect this PR since we're partially matching on text, so "\n" will match " \n". However, a regex like [a-z]{3}
will allow "art" to be generated but not " art" (see the sketch below). This could make it impossible to generate what would otherwise be the most probable completion.
I need to dig more into this. I opened #193 to keep track of my thinking on this.
Token healing (tracked by #161) should ensure that this kind of quirk doesn't affect generation. Users shouldn't have to worry about the effects of tokenization.
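A minimal illustration of the difference, using plain `re` rather than the outlines implementation:

```python
import re

# Stop strings are found by substring search, so a leading space on the
# generated text does not get in the way.
print("\n" in " \n")  # True

# An anchored regex such as [a-z]{3}, however, rejects the variant with
# the leading space that the tokenizer is likely to produce.
pattern = re.compile(r"[a-z]{3}")
print(pattern.fullmatch("art") is not None)   # True
print(pattern.fullmatch(" art") is not None)  # False
```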
You are right that token healing should be able to correct all these nuances.
Looks good to me.
Closes #151