[PyTorch] Use FP16 tols for distributed tests with TF32 compute#1831

Merged
timmoon10 merged 6 commits into NVIDIA:main from timmoon10:debug-distributed-tests
Jun 19, 2025

Conversation

@timmoon10
Collaborator

Description

#1806 relaxed the numerical tolerances for the distributed tests that use FP32, since they actually do compute in TF32. However, the tolerances still seem to be too tight: we continue to see test failures on some systems. Since TF32 has the same number of mantissa bits as FP16 (10 bits), this PR loosens the FP32 tolerances to match the FP16 tolerances used by torch.testing.assert_close.
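The idea can be sketched as follows. This is not the PR's actual test code; `tols_for` is a hypothetical helper, and the per-dtype defaults cited in the comments are the ones documented for torch.testing.assert_close:

```python
import torch

# Per the torch.testing.assert_close docs, FP16 comparisons use
# rtol=1e-3, atol=1e-5, while FP32 uses a much tighter rtol=1.3e-6.
# Since TF32 keeps only 10 mantissa bits (the same as FP16), tests whose
# FP32 matmuls actually run in TF32 should compare with FP16-level
# tolerances instead of the FP32 defaults.

def tols_for(dtype):
    """Hypothetical helper: map a dtype to assert_close tolerances,
    treating FP32 (with TF32 compute) like FP16."""
    if dtype in (torch.float16, torch.float32):
        return {"rtol": 1e-3, "atol": 1e-5}  # FP16-level tolerances
    if dtype == torch.bfloat16:
        return {"rtol": 1.6e-2, "atol": 1e-5}
    raise ValueError(f"unsupported dtype: {dtype}")

# A relative error around 5e-4 is plausible TF32 rounding noise: it would
# fail the default FP32 check but passes with FP16-level tolerances.
ref = torch.randn(8, 8)
out = ref * (1 + 5e-4)
torch.testing.assert_close(out, ref, **tols_for(torch.float32))
```

The key design point is that the tolerance is chosen by the compute precision (TF32), not the storage dtype (FP32).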

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

  • Use FP16 tols for distributed tests with TF32 compute

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Tim Moon <tmoon@nvidia.com>
@timmoon10
Collaborator Author

/te-ci pytorch L1

@timmoon10
Collaborator Author

/te-ci pytorch L1

@timmoon10
Collaborator Author

/te-ci pytorch L1

timmoon10 and others added 2 commits June 4, 2025 23:56
Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
@timmoon10
Collaborator Author

/te-ci pytorch L1

Member

@ksivaman ksivaman left a comment


LGTM

@timmoon10 timmoon10 merged commit 766e3b7 into NVIDIA:main Jun 19, 2025
26 of 27 checks passed
@timmoon10 timmoon10 deleted the debug-distributed-tests branch June 19, 2025 00:20
KshitijLakhani pushed a commit that referenced this pull request Jun 27, 2025
* Use FP16 tols for tests with TF32

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* Use uniform init instead of constant init

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* Revert constant init test, but reduce value

Signed-off-by: Tim Moon <tmoon@nvidia.com>

---------

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
chtruong814 pushed a commit to chtruong814/TransformerEngine that referenced this pull request Jul 17, 2025
[PyTorch] Use FP16 tols for distributed tests with TF32 compute (NVIDIA#1831)

* Use FP16 tols for tests with TF32

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* Use uniform init instead of constant init

Signed-off-by: Tim Moon <tmoon@nvidia.com>

* Revert constant init test, but reduce value

Signed-off-by: Tim Moon <tmoon@nvidia.com>

---------

Signed-off-by: Tim Moon <tmoon@nvidia.com>
Signed-off-by: Tim Moon <4406448+timmoon10@users.noreply.github.com>
