
Add Half (float16) support to slim ScalarType enum (#18959)#18959

Merged
kirklandsign merged 1 commit into pytorch:main from digantdesai:export-D101218928
Apr 23, 2026

Conversation

Contributor

@digantdesai commented Apr 16, 2026

Summary:

The CUDA runtime shims for sort operations use Half (float16) dtype, but it was
not defined in the slim ScalarType enum, causing compiler warnings treated as
errors (-Werror=switch). This adds proper Half support to the slim ScalarType
enum so switch statements can use the enum value directly instead of casting
to the underlying type.

Reviewed By: kirklandsign

Differential Revision: D101218928
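
To make the failure mode concrete, here is a minimal sketch of the change. It is not the actual header (backends/aoti/slim/c10/core/ScalarType.h); the enumerator values for Short through Float come from the test table quoted later in this thread, while Byte = 0 and Char = 1 are assumptions mirroring upstream c10.

```cpp
#include <cstddef>
#include <cstdint>

namespace c10_slim {

// Sketch only; the real slim enum may contain more dtypes.
enum class ScalarType : int8_t {
  Byte = 0,  // assumed, mirrors upstream c10
  Char = 1,  // assumed, mirrors upstream c10
  Short = 2,
  Int = 3,
  Long = 4,
  Half = 5,  // newly added: float16, same value as PyTorch's ScalarType::Half
  Float = 6,
};

constexpr ScalarType kHalf = ScalarType::Half;

// With Half defined, switch statements can name it directly, and
// -Werror=switch makes the compiler verify every enumerator is handled.
inline std::size_t elementSize(ScalarType t) {
  switch (t) {
    case ScalarType::Byte:
    case ScalarType::Char:
      return 1;
    case ScalarType::Short:
    case ScalarType::Half:  // 2 bytes, matching sizeof(__half)
      return 2;
    case ScalarType::Int:
    case ScalarType::Float:
      return 4;
    case ScalarType::Long:
      return 8;
  }
  return 0;  // unreachable once every enumerator is covered
}

} // namespace c10_slim
```

Before the change, a switch in the CUDA shims had no Half enumerator to name, so code either cast the dtype to its underlying integer or tripped -Werror=switch.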

Copilot AI review requested due to automatic review settings April 16, 2026 22:36

pytorch-bot Bot commented Apr 16, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18959

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 2 Unrelated Failures

As of commit 50e9eff with merge base 7fdd306:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla Bot added the CLA Signed label Apr 16, 2026
Contributor

meta-codesync Bot commented Apr 16, 2026

@digantdesai has exported this pull request. If you are a Meta employee, you can view the originating Diff in D101218928.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Contributor

Copilot AI left a comment


Pull request overview

Adds float16 (Half) to the AOTI “slim” c10::ScalarType so CUDA sort shims can switch on ScalarType::Half directly without relying on hard-coded casts (avoiding -Werror=switch issues).

Changes:

  • Add ScalarType::Half = 5 plus kHalf, and extend helpers (elementSize, toString, isFloatingType, isValidScalarType) to support Half.
  • Update CUDA sort shim to use c10_slim::ScalarType::Half in switch statements.
  • Extend slim ScalarType unit tests to cover Half (enum value, element size, constants, validity).
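
As a rough illustration of the first bullet, the helper updates might look like the sketch below. It builds on the enum sketched under the PR summary; the actual implementations in backends/aoti/slim/c10/core/ScalarType.h may be structured differently, and the range check in isValidScalarType is an assumption.

```cpp
namespace c10_slim {

inline const char* toString(ScalarType t) {
  switch (t) {
    case ScalarType::Byte:  return "Byte";
    case ScalarType::Char:  return "Char";
    case ScalarType::Short: return "Short";
    case ScalarType::Int:   return "Int";
    case ScalarType::Long:  return "Long";
    case ScalarType::Half:  return "Half";  // new
    case ScalarType::Float: return "Float";
  }
  return "Unknown";
}

inline bool isFloatingType(ScalarType t) {
  // Half now counts as a floating-point dtype alongside Float.
  return t == ScalarType::Half || t == ScalarType::Float;
}

inline bool isValidScalarType(ScalarType t) {
  // Assumed check: any value inside the enum's defined range, which now
  // includes Half (5), is valid.
  const auto v = static_cast<int8_t>(t);
  return v >= static_cast<int8_t>(ScalarType::Byte) &&
         v <= static_cast<int8_t>(ScalarType::Float);
}

} // namespace c10_slim
```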

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

File Description
backends/cuda/runtime/shims/sort.cu Switch dispatch now uses ScalarType::Half instead of a locally-cast placeholder.
backends/aoti/slim/c10/core/ScalarType.h Introduces Half into the slim dtype enum and updates dtype utility helpers accordingly.
backends/aoti/slim/c10/core/test/test_scalar_type.cpp Adds coverage ensuring Half is correctly represented and handled by helper APIs.
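
For the sort shim row, the shape of the change can be shown with a hypothetical host-side stand-in; the real sort.cu launches CUDA kernels, and the function name and messages below are illustrative only.

```cpp
#include <cstdio>

// Hypothetical stand-in for the dispatch in backends/cuda/runtime/shims/sort.cu.
// Before the PR, naming float16 here required casting the dtype to its
// underlying integer and comparing against a hard-coded 5.
void launch_sort(c10_slim::ScalarType dtype) {
  switch (dtype) {
    case c10_slim::ScalarType::Half:
      std::printf("float16 sort path\n");  // would sort __half data on device
      break;
    case c10_slim::ScalarType::Float:
      std::printf("float32 sort path\n");
      break;
    default:
      std::printf("unsupported dtype\n");  // real code would report an error
      break;
  }
}
```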


Comment on lines +27 to 29
// PyTorch ScalarType::Half = 5, now defined in slim ScalarType enum.
using c10_slim::kHalf;

case ScalarType::Long:
return sizeof(int64_t);
case ScalarType::Half:
return 2; // sizeof(__half) = 2 bytes
digantdesai added a commit to digantdesai/executorch-1 that referenced this pull request Apr 16, 2026
Summary:
Pull Request resolved: pytorch#18959

Differential Revision: D101218928
@meta-codesync Bot changed the title from "Add Half (float16) support to slim ScalarType enum" to "Add Half (float16) support to slim ScalarType enum (#18959)" Apr 16, 2026
Comment on lines +215 to +217
TEST_F(IsValidScalarTypeTest, HalfIsValid) {
EXPECT_TRUE(isValidScalarType(ScalarType::Half));
}
Contributor


Can remove

case ScalarType::Long:
return sizeof(int64_t);
case ScalarType::Half:
return 2; // sizeof(__half) = 2 bytes
Contributor


Remove comment?

Summary:
Pull Request resolved: pytorch#18959

Reviewed By: kirklandsign

Differential Revision: D101218928
Copilot AI review requested due to automatic review settings April 17, 2026 01:48
Contributor

Copilot AI left a comment


Pull request overview

This PR adds float16 (Half) support to the slim c10::ScalarType enum to eliminate CUDA shim build warnings/errors caused by missing enum coverage (notably in sort.cu switch statements).

Changes:

  • Add ScalarType::Half = 5 (and kHalf) to slim ScalarType, plus support in elementSize(), toString(), isFloatingType(), and isValidScalarType().
  • Update CUDA sort shims to switch directly on c10_slim::ScalarType::Half rather than using a casted placeholder value.
  • Extend slim ScalarType unit tests to cover Half constants, sizes, and validity.
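
A sketch of what that Half coverage could look like with GoogleTest; the actual tests in backends/aoti/slim/c10/core/test/test_scalar_type.cpp use their own fixtures (such as IsValidScalarTypeTest, quoted earlier in this thread), and the test names below are illustrative.

```cpp
#include <gtest/gtest.h>

// Builds on the c10_slim definitions sketched earlier on this page.
TEST(SlimScalarTypeHalfSketch, EnumValueMatchesPyTorch) {
  EXPECT_EQ(static_cast<int8_t>(c10_slim::ScalarType::Half), 5);
}

TEST(SlimScalarTypeHalfSketch, ElementSizeIsTwoBytes) {
  EXPECT_EQ(c10_slim::elementSize(c10_slim::ScalarType::Half), 2u);
}

TEST(SlimScalarTypeHalfSketch, ConstantAliasesEnumerator) {
  EXPECT_EQ(c10_slim::kHalf, c10_slim::ScalarType::Half);
}

TEST(SlimScalarTypeHalfSketch, HalfIsFloatingAndValid) {
  EXPECT_TRUE(c10_slim::isFloatingType(c10_slim::ScalarType::Half));
  EXPECT_TRUE(c10_slim::isValidScalarType(c10_slim::ScalarType::Half));
}
```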

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

File Description
backends/cuda/runtime/shims/sort.cu Removes the cast-based kHalf workaround and switches directly on ScalarType::Half.
backends/aoti/slim/c10/core/ScalarType.h Introduces Half into the slim enum and updates helper utilities accordingly.
backends/aoti/slim/c10/core/test/test_scalar_type.cpp Adds test coverage for Half (enum value, element size, constants, validity).


case ScalarType::Long:
return sizeof(int64_t);
case ScalarType::Half:
return 2; // sizeof(__half) = 2 bytes
Comment on lines 36 to 40
{ScalarType::Short, 2, 2, "Short", false, true, true, false},
{ScalarType::Int, 3, 4, "Int", false, true, true, false},
{ScalarType::Long, 4, 8, "Long", false, true, true, false},
{ScalarType::Half, 5, 2, "Half", true, false, false, false},
{ScalarType::Float, 6, 4, "Float", true, false, false, false},
Comment on lines +27 to 29
// PyTorch ScalarType::Half = 5, now defined in slim ScalarType enum.
using c10_slim::kHalf;

@kirklandsign kirklandsign merged commit 4a69750 into pytorch:main Apr 23, 2026
181 of 185 checks passed

Labels

CLA Signed, fb-exported, meta-exported


4 participants