trim and remove stop-suffixes from summary #369
Merged: nsarrazin merged 1 commit into huggingface:main from AndreasMadsen:summarize-stop-suffixes on Aug 2, 2023
Conversation
The chat generation removes `parameters.stop` and `<|endoftext|>` from the generated text, and additionally trims trailing whitespace. This PR copies that behavior to the summarize functionality when the summary is produced by the chat model.
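As a rough illustration of the behavior described above (a minimal sketch, not the exact chat-ui implementation), stripping stop suffixes and trailing whitespace from a generated summary could look like this in TypeScript; the `stopSequences` argument stands in for whatever `parameters.stop` contains:

```ts
// Sketch: remove configured stop sequences and <|endoftext|> from the end of a
// generated summary, then trim trailing whitespace. Names are illustrative.
function cleanSummary(generated: string, stopSequences: string[] = []): string {
	// Trim first so a stop sequence followed by whitespace is still detected.
	let summary = generated.trimEnd();
	for (const stop of [...stopSequences, "<|endoftext|>"]) {
		if (stop && summary.endsWith(stop)) {
			summary = summary.slice(0, summary.length - stop.length);
		}
	}
	// Trim again in case removing the suffix exposed trailing whitespace.
	return summary.trimEnd();
}
```

For example, `cleanSummary("Weekend trip to Paris</s>\n", ["</s>"])` would return `"Weekend trip to Paris"`.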
nsarrazin approved these changes on Aug 2, 2023
Thanks for the PR, this is working great!
nsarrazin pushed a commit to AndreasMadsen/chat-ui that referenced this pull request on Aug 2, 2023
nsarrazin added a commit that referenced this pull request on Aug 2, 2023
* allow different user and assistant end-token

  For models like Llama2, the end token is not the same for a user message and an assistant message. This implements `userMessageEndToken` and `assistantMessageEndToken`, which override the `messageEndToken` behavior. It also allows empty strings as `userMessageToken` and `assistantMessageToken` and makes that the default, adding flexibility that is required for Llama2, where the first user message is effectively different because of the system message. Note that because `userMessageEndToken` and `assistantMessageToken` are nearly always concatenated, it is almost redundant to have both; the exception is `generateQuery` for web search, which has several consecutive user messages.

* Make model branding customizable based on env var (#345)
  * rm open assistant branding
  * Update SettingsModal.svelte
  * make settings work with a dynamic list of models
  * fixed types

  Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>

* trim and remove stop-suffixes from summary (#369)

  The chat generation removes `parameters.stop` and `<|endoftext|>` from the generated text, and additionally trims trailing whitespace. This copies that behavior to the summarize functionality when the summary is produced by the chat model.

* add a login button when users are logged out (#381)

* add fallback to message end token if there's no specified tokens for user & assistant

Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
Co-authored-by: Nathan Sarrazin <sarrazin.nathan@gmail.com>
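For context on the per-role end-token change referenced above, a hypothetical sketch (not the actual chat-ui prompt builder; only the field names follow the commit message, everything else is assumed) of how a prompt might be assembled with separate user and assistant end tokens, falling back to a shared `messageEndToken`:

```ts
// Illustrative only: each role can define its own end token; when none is set,
// the shared messageEndToken (possibly empty) is used instead.
interface PromptTokens {
	userMessageToken?: string;
	userMessageEndToken?: string;
	assistantMessageToken?: string;
	assistantMessageEndToken?: string;
	messageEndToken?: string;
}

interface Message {
	from: "user" | "assistant";
	content: string;
}

function buildPrompt(messages: Message[], tokens: PromptTokens): string {
	return messages
		.map((m) => {
			const start =
				(m.from === "user" ? tokens.userMessageToken : tokens.assistantMessageToken) ?? "";
			const end =
				(m.from === "user" ? tokens.userMessageEndToken : tokens.assistantMessageEndToken) ??
				tokens.messageEndToken ??
				"";
			return start + m.content + end;
		})
		.join("");
}
```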
ice91 pushed a commit to ice91/chat-ui that referenced this pull request on Oct 30, 2024
ice91 pushed a commit to ice91/chat-ui that referenced this pull request on Oct 30, 2024
maksym-work pushed a commit to siilats/chat-ui that referenced this pull request on Jul 2, 2025
maksym-work pushed a commit to siilats/chat-ui that referenced this pull request on Jul 2, 2025
Matsenas pushed a commit to Matsenas/chat-ui that referenced this pull request on Jul 4, 2025
Matsenas pushed a commit to Matsenas/chat-ui that referenced this pull request on Jul 4, 2025
gary149 pushed a commit to gary149/chat-ui that referenced this pull request on Aug 29, 2025
gary149 pushed a commit to gary149/chat-ui that referenced this pull request on Aug 29, 2025