
[MPS] Add fmax fmin op #95191

Closed
wants to merge 2 commits into from

Conversation

qqaatw
Collaborator

@qqaatw qqaatw commented Feb 21, 2023

Fixes #ISSUE_NUMBER

@pytorch-bot

pytorch-bot bot commented Feb 21, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/95191

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 3 Failures

As of commit 11c289b:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added ciflow/mps Run MPS tests (subset of trunk) release notes: mps Release notes category labels Feb 21, 2023
@@ -1,5 +1,7 @@
// Copyright © 2022 Apple Inc.

#pragma once
Collaborator Author


Guards possible duplicate includes in the future.

Collaborator


Looks good.

@bdhirsh bdhirsh added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Feb 22, 2023
@qqaatw
Collaborator Author

qqaatw commented Feb 24, 2023

@pytorchbot label "accept2run"

@kulinseth
Collaborator

@pytorchbot merge -f "MPS tests are green."

@pytorch-bot pytorch-bot bot added ciflow/trunk Trigger trunk jobs on your pull request and removed accept2run labels Feb 25, 2023
@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 5, 2023
Fixes #ISSUE_NUMBER

Pull Request resolved: pytorch/pytorch#95191
Approved by: https://github.com/kulinseth
pruthvistony added a commit to ROCm/pytorch that referenced this pull request May 2, 2023
}

void fmax_fmin_mps_impl(TensorIteratorBase& iter, const std::string max_min) {
  TORCH_CHECK(iter.common_dtype() != at::kDouble, "float64 is not supported on MPS");
Contributor


Hi,
May I know why the double datatype is asserted against here? Is it because the Apple GPU doesn't support double precision?
BTW, why don't unary ops have such an assertion?
Thank you.

Collaborator Author


Yes, it doesn't support double precision AFAIK. As for why unary ops don't have such an assertion, I think they were just missed. But in general, you're not allowed to create an fp64 MPS tensor, nor to type-cast or move an fp64 tensor to MPS.

Contributor


Thank you for the help. I found the code below:

TORCH_CHECK_TYPE(false,
    "Cannot convert a float64 Tensor to MPS as the MPS framework doesn't support float64. "
    "Please use float32 instead.")

and
static const std::string& getMetalType(const c10::ScalarType& t) {
  // Mapping from c10::ScalarType to an integral type that can be used for bitwise ops.
  // As bitwise ops are sign-agnostic, map signed/unsigned char and boolean to the same type.
  static std::unordered_map<c10::ScalarType, std::string> scalar_to_metal_type = {
      {c10::ScalarType::Long, "long"},
      {c10::ScalarType::Int, "int"},
      {c10::ScalarType::Short, "short"},
      {c10::ScalarType::Byte, "char"},
      {c10::ScalarType::Char, "char"},
      {c10::ScalarType::Bool, "char"},
  };
  auto it = scalar_to_metal_type.find(t);
  TORCH_CHECK(it != scalar_to_metal_type.end(), "Unsupported type ", t);

Any FP64 data type will throw an error in these MPS utilities.

By the way, this code registers the supported data types for the fmax kernel, right? Does that mean only the half and float32 kernels will be precompiled when AOT compiling?

REGISTER_FMAX_OP(float);
REGISTER_FMAX_OP(half);

Thank you.

jhavukainen pushed a commit to kulinseth/pytorch that referenced this pull request Mar 15, 2024
Fixes #ISSUE_NUMBER

Pull Request resolved: pytorch#95191
Approved by: https://github.com/kulinseth
Labels
ciflow/mps Run MPS tests (subset of trunk) ciflow/trunk Trigger trunk jobs on your pull request Merged open source release notes: mps Release notes category triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Projects
None yet
Development

Successfully merging this pull request may close these issues.

None yet

6 participants