This repository was archived by the owner on Nov 19, 2025. It is now read-only.

fix: Add triton downgrade as long as we're on pytorch 24.07 #493

Merged

terrykong merged 1 commit into main from tk/triton-break on Jan 26, 2025

Conversation

@terrykong
Collaborator

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Changelog

  • Please update the CHANGELOG.md under next version with high level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
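The PR body leaves the usage section as template text, but the PR title describes pinning triton while the project is on the PyTorch 24.07 container. As a minimal sketch of that idea (the version numbers and function name below are illustrative assumptions, not taken from this PR):

```python
# Hypothetical sketch: select a triton pip constraint based on the PyTorch
# container release in use. The pinned version shown is illustrative only;
# the actual pin lives in this PR's diff, which is not part of this excerpt.
def triton_constraint(pytorch_container: str) -> "str | None":
    """Return a pip constraint pinning triton for known-problematic containers."""
    pins = {
        # PyTorch 24.07 ships a triton that needs a downgrade (illustrative pin).
        "24.07": "triton==3.0.0",
    }
    return pins.get(pytorch_container)

print(triton_constraint("24.07"))  # pin applied for the 24.07 container
print(triton_constraint("24.10"))  # no pin needed for other releases
```

The returned constraint could then be passed straight to `pip install` in the container build step; once the base image moves past 24.07, the entry is simply dropped from the table.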

Before your PR is "Ready for review"

Pre checks:

Checklist when contributing a new algorithm

  • Does the trainer resume and restore all model states?
  • Does the trainer support all parallelism techniques (PP, TP, DP)?
  • Does the trainer support max_steps=-1 and validation?
  • Does the trainer only call APIs defined in alignable_interface.py?
  • Does the trainer have proper logging?

Additional Information

  • Related to # (issue)

Signed-off-by: Terry Kong <terryk@nvidia.com>
@terrykong terrykong requested review from ashors1 and ko3n1g January 25, 2025 02:12
@terrykong terrykong added the Run CICD Set + un-set to retrigger (add after r*.*.* labels) label Jan 25, 2025
@terrykong terrykong enabled auto-merge (squash) January 25, 2025 02:16
@terrykong terrykong merged commit 1f58260 into main Jan 26, 2025
@terrykong terrykong deleted the tk/triton-break branch January 26, 2025 00:14
