
Stop generation with Continuation when a specific string was generated #187

Merged · rlouf merged 1 commit into dottxt-ai:main on Jul 15, 2023

Conversation

@rlouf (Member) commented on Jul 13, 2023:

Closes #151
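
For context, a minimal sketch of the behaviour this PR adds to `Continuation`: scan the text generated so far for any of the stop strings and truncate at the earliest match. The function and names below are illustrative only, not the actual outlines implementation:

```python
def stop_at(generated: str, stop_strings: list[str]) -> tuple[str, bool]:
    """Truncate `generated` at the earliest stop string.

    Returns the (possibly truncated) text and whether a stop string
    was found. Illustrative sketch, not outlines' code.
    """
    cut = len(generated)
    for stop in stop_strings:
        idx = generated.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return generated[:cut], cut < len(generated)


# e.g. stop_at("One sentence. And another", ["."]) -> ("One sentence", True)
```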

@rlouf added the enhancement and text (Linked to text generation) labels on Jul 13, 2023
@rlouf added this to the 0.1 milestone on Jul 13, 2023
@rlouf force-pushed the continuation-stop-at branch 2 times, most recently from ec281ef to dfb6f32, on July 13, 2023 at 13:27
@rlouf marked this pull request as ready for review on July 13, 2023 at 13:43
Review thread on the test diff:

```diff
 )
 assert isinstance(sequence, str)

-prompts = ["Write a short sentence", "And another one"]
+prompts = ["Write a short sentence ", "And another one "]
```
Contributor commented:
I'm just curious: why did you add whitespace padding? According to guidance, it's preferable to end prompts without a trailing space or newline, because frequent tokens already come with a space before the word.
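
To see why a trailing space can hurt, compare how GPT2 tokenizes a prompt with and without one. This is a quick check using the Hugging Face transformers tokenizer, not part of this PR:

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")

# Without a trailing space, the next generated token can be a natural
# space-prefixed token (most GPT2 tokens embed the leading space as "Ġ").
print(tok.tokenize("Write a short sentence"))
# ['Write', 'Ġa', 'Ġshort', 'Ġsentence']

# With a trailing space, the space is tokenized on its own, a context
# the model rarely saw followed by another space-prefixed token.
print(tok.tokenize("Write a short sentence "))
# ['Write', 'Ġa', 'Ġshort', 'Ġsentence', 'Ġ']
```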

Member Author (@rlouf) replied:

Good question; that came intuitively when working with GPT2 and numbers. I'll look at the vocabulary directly to see whether that's actually the right thing to do.

@rlouf (Member Author) commented on Jul 17, 2023:

I came back to this and looked at the vocabulary of the GPT2 tokenizer. It is true that most of the tokens begin with a space.
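
A quick way to verify this, assuming the transformers GPT2 tokenizer (whose BPE vocabulary encodes a leading space as the character "Ġ" in the token string):

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
vocab = tok.get_vocab()

# Count tokens whose string form begins with a space ("Ġ" in GPT2's BPE)
with_space = sum(1 for token in vocab if token.startswith("Ġ"))
print(f"{with_space} of {len(vocab)} tokens start with a space")
```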

Your point highlights something that we need to be very careful about, and which might be incorrectly implemented in outlines.

It should not affect this PR since we're partially matching on text, so "\n" will match " \n". However, the regex [a-z]{3} will allow "art" to be generated but not " art". This could make it impossible to generate what would otherwise be the most probable completion.
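
Concretely, a sketch using Python's re module and the GPT2 tokenizer (not outlines' matching code):

```python
import re
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
pattern = re.compile(r"[a-z]{3}")

for text in ["art", " art"]:
    # 'art' matches the regex, but ' art' (the space-prefixed variant
    # the model is more likely to produce as a single token) does not.
    print(repr(text), tok.encode(text), bool(pattern.fullmatch(text)))
```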

I need to dig more into this. I opened #193 to keep track of my thinking on this.

Token healing (tracked by #161) should ensure that this kind of quirk doesn't affect generation. Users shouldn't have to worry about the effects of tokenization.
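
For reference, token healing (as popularized by the guidance project) roughly works by backing up one prompt token and constraining the first generated token to those whose string form extends the removed text. A hedged sketch with hypothetical names, not outlines' implementation:

```python
def heal(tokenizer, prompt_ids: list[int]) -> tuple[list[int], list[int]]:
    """Token-healing sketch: drop the last prompt token and compute
    which token ids are allowed as the first generated token.
    Hypothetical illustration only (the real work is tracked in #161).
    """
    removed = tokenizer.decode(prompt_ids[-1:])
    allowed = [
        token_id
        for token_id in range(len(tokenizer))
        if tokenizer.decode([token_id]).startswith(removed)
    ]
    return prompt_ids[:-1], allowed
```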

Contributor replied:

You are right that token healing should be able to correct all these nuances.

@arunpatro (Contributor) commented:
Looks good to me.

@rlouf merged commit bfa0e94 into dottxt-ai:main on Jul 15, 2023
@rlouf deleted the continuation-stop-at branch on July 17, 2023 at 14:03
Labels: enhancement · text (Linked to text generation)
Linked issue (may be closed by merging): Stop generation with Continuation when a specific string has been generated (#151)
3 participants