
Skip top_p computations when set to 1.0 #8905

Merged: 5 commits merged into NVIDIA:main from od/top_p on Apr 22, 2024
Conversation

odelalleau (Collaborator)

What does this PR do?

This avoids doing useless computations (which may even filter out some tokens due to numerical approximation) when top_p=1.0, a setting that is not supposed to have any effect.
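
For context, a minimal sketch (not NeMo's actual implementation; the function name and shapes are illustrative) of nucleus (top-p) filtering with the kind of early-exit guard this PR introduces: when top_p >= 1.0 the sort/softmax/cumsum pipeline is skipped entirely, so rounding in the cumulative sum can no longer mask out any token.

```python
import torch

def top_p_filter(logits: torch.Tensor, top_p: float = 1.0) -> torch.Tensor:
    """Illustrative top-p (nucleus) filtering, not the NeMo code path."""
    # With top_p >= 1.0 the filter is a mathematical no-op, but running it
    # anyway costs a sort + softmax + cumsum and, due to floating-point
    # rounding in the cumulative sum, can still drop low-probability tokens.
    if top_p >= 1.0:
        return logits

    sorted_logits, sorted_indices = torch.sort(logits, descending=True, dim=-1)
    cumulative_probs = torch.cumsum(torch.softmax(sorted_logits, dim=-1), dim=-1)

    # Mask tokens once the cumulative probability exceeds top_p,
    # always keeping at least the most probable token.
    sorted_mask = cumulative_probs > top_p
    sorted_mask[..., 1:] = sorted_mask[..., :-1].clone()
    sorted_mask[..., 0] = False

    # Map the mask back to the original (unsorted) vocabulary order.
    mask = sorted_mask.scatter(-1, sorted_indices, sorted_mask)
    return logits.masked_fill(mask, float("-inf"))
```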

Collection: nlp

Changelog

  • Skip top_p computations when set to 1.0

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation
  • Optimization

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Signed-off-by: Olivier Delalleau <507137+odelalleau@users.noreply.github.com>
@odelalleau (Collaborator, Author)

jenkins

odelalleau added a commit to NVIDIA/NeMo-Aligner that referenced this pull request Apr 12, 2024
This is because we otherwise do useless computations, at least until NVIDIA/NeMo#8905 is merged.

Signed-off-by: Olivier Delalleau <507137+odelalleau@users.noreply.github.com>
@yidong72 (Collaborator) left a comment:


LGTM.

@pablo-garay (Collaborator)

jenkins

@pablo-garay (Collaborator)

jenkins

@odelalleau odelalleau merged commit e0b3fe5 into NVIDIA:main Apr 22, 2024
125 checks passed
@odelalleau odelalleau deleted the od/top_p branch April 22, 2024 20:55
xingyaoww pushed a commit to xingyaoww/NeMo that referenced this pull request Apr 23, 2024
Signed-off-by: Olivier Delalleau <507137+odelalleau@users.noreply.github.com>
Co-authored-by: Pablo Garay <palenq@gmail.com>
alxzhang-amazon pushed a commit to alxzhang-amazon/NeMo that referenced this pull request Apr 26, 2024
Signed-off-by: Olivier Delalleau <507137+odelalleau@users.noreply.github.com>
Co-authored-by: Pablo Garay <palenq@gmail.com>
galv pushed a commit to galv/NeMo that referenced this pull request Apr 29, 2024
Signed-off-by: Olivier Delalleau <507137+odelalleau@users.noreply.github.com>
Co-authored-by: Pablo Garay <palenq@gmail.com>
suiyoubi pushed a commit that referenced this pull request May 2, 2024
Signed-off-by: Olivier Delalleau <507137+odelalleau@users.noreply.github.com>
Co-authored-by: Pablo Garay <palenq@gmail.com>
Signed-off-by: Ao Tang <aot@nvidia.com>
rohitrango pushed a commit to rohitrango/NeMo that referenced this pull request Jun 25, 2024
Signed-off-by: Olivier Delalleau <507137+odelalleau@users.noreply.github.com>
Co-authored-by: Pablo Garay <palenq@gmail.com>