
[MPS] Add support for aten::median for MPS backend #87220

Closed
DenisVieriu97 opened this issue Oct 18, 2022 · 4 comments
Assignees
Labels
good first issue module: mps Related to Apple Metal Performance Shaders framework triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@DenisVieriu97
Collaborator

DenisVieriu97 commented Oct 18, 2022

🐛 Describe the bug

First-time contributors are welcome! 🙂
Add support for aten::median for MPS backend. Generic support for adding operations to MPS backend is captured here: https://github.com/pytorch/pytorch/wiki/MPS-Backend#adding-op-for-mps-backend
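
For reference, the end state should make something like this work (a sketch, assuming a Mac where torch.backends.mps.is_available() returns True):

# Sketch of expected usage once aten::median is implemented for MPS.
import torch

x = torch.tensor([1.0, 3.0, 2.0, 5.0, 4.0], device="mps")

# Median of the whole tensor.
print(torch.median(x))  # tensor(3., device='mps:0')

# Median along a dimension returns (values, indices).
y = torch.arange(6.0, device="mps").reshape(2, 3)
values, indices = torch.median(y, dim=1)
print(values.cpu(), indices.cpu())  # tensor([1., 4.]) tensor([1, 1])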

Versions

N/A

cc @kulinseth @albanD @malfet @razarmehr @abhudev

@DenisVieriu97 DenisVieriu97 added good first issue module: mps Related to Apple Metal Performance Shaders framework labels Oct 18, 2022
@cpuhrsch cpuhrsch added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Oct 18, 2022
@Raman-Kumar
Contributor

Raman-Kumar commented Oct 18, 2022

@DenisVieriu97
I would like to take this issue and work on it.
I have started working on it...

@Raman-Kumar
Contributor

Raman-Kumar commented Nov 6, 2022

I sent a pull request for torch.median(input_tensor); the test passed:
pytorch/pull/88554

Now I am working on median with a dim value.
For that I used sortWithTensor and argSortWithTensor from MPSGraph.

However, those will only work on macOS Ventura.
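
In plain PyTorch terms, the idea is roughly the following (a sketch of the algorithm only, not the MPSGraph code; torch.median returns the lower of the two middle values, i.e. the element at index (n - 1) // 2 of the sorted slice):

import torch

def median_via_sort(x: torch.Tensor, dim: int):
    # Sort values and track their original positions, then pick the
    # lower-median element along `dim`, matching torch.median's convention.
    sorted_vals, sorted_idx = torch.sort(x, dim=dim)
    mid = (x.size(dim) - 1) // 2
    return sorted_vals.select(dim, mid), sorted_idx.select(dim, mid)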

@Raman-Kumar
Contributor

Raman-Kumar commented Nov 7, 2022

I will add a check for macOS Ventura like this PR does.

I need some help with
the implementation function signature of
torch.median(input, dim=-1, keepdim=False, *, out=None).

I have successfully added median_mps for the entire tensor and its implementation function.

Now I am trying to add the median op with dim.
I added MPS: median_out_mps here:

- func: median.dim_values(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) values, Tensor(b!) indices) -> (Tensor(a!) values, Tensor(b!) indices)
  dispatch:
    CPU: median_out_cpu
    CUDA: median_out_cuda
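
Presumably, after that edit the dispatch section of the entry looks roughly like this (a sketch of the described change, not copied from the actual diff):

  dispatch:
    CPU: median_out_cpu
    CUDA: median_out_cuda
    MPS: median_out_mps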

and gave the function a signature like the one below, replacing the op name with my median op name:

// Max out with dim
TORCH_IMPL_FUNC(max_out_mps)
(const Tensor& input_t,
 int64_t dim,
 bool keepdim,
 const Tensor& output_t,
 const Tensor& indices_t) {
  int64_t dim_ = maybe_wrap_dim(dim, input_t.dim());
  native::zero_numel_check_dims(input_t, dim_, "max()");
  min_max_out_mps(input_t, dim, keepdim, output_t, indices_t, MPSReductionType::MAX, "max_out_mps");
}

But this gives an error while building:

../aten/src/ATen/native/mps/operations/ReduceOps.mm:1392:1: error: use of undeclared identifier 'structured_median_out_mps'; did you mean 'structured_mean_out_mps'?
TORCH_IMPL_FUNC(median_out_mps)
^
../aten/src/ATen/TensorMeta.h:63:36: note: expanded from macro 'TORCH_IMPL_FUNC'
#define TORCH_IMPL_FUNC(name) void structured_##name::impl
                                   ^
<scratch space>:17:1: note: expanded from here
structured_median_out_mps
^
aten/src/ATen/ops/mean_native.h:23:18: note: 'structured_mean_out_mps' declared here
struct TORCH_API structured_mean_out_mps : public at::meta::structured_mean_dim {
                 ^
../aten/src/ATen/native/mps/operations/ReduceOps.mm:1392:1: error: out-of-line definition of 'impl' does not match any declaration in 'at::native::structured_mean_out_mps'
TORCH_IMPL_FUNC(median_out_mps)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../aten/src/ATen/TensorMeta.h:63:55: note: expanded from macro 'TORCH_IMPL_FUNC'
#define TORCH_IMPL_FUNC(name) void structured_##name::impl

What should the implementation function signature of median_out_mps with dim be,
and where should it be placed in native_functions.yaml?
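
A likely cause of the build error (my reading, not stated explicitly in the thread): TORCH_IMPL_FUNC only defines the impl method of a codegen-generated structured_<name> class, and that class is generated only for native_functions.yaml entries marked as structured. max.dim_max is such an entry, so structured_max_out_mps exists, but median.dim_values is not structured, so there is no structured_median_out_mps to define; a plain out-variant function, like the one in the next comment, is what gets registered instead. From memory, the max entry looks roughly like this (details may differ):

- func: max.dim_max(Tensor self, int dim, bool keepdim=False, *, Tensor(a!) max, Tensor(b!) max_values) -> (Tensor(a!) values, Tensor(b!) indices)
  structured: True
  dispatch:
    CPU, CUDA: max_out
    MPS: max_out_mps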

@Raman-Kumar
Contributor

This function signature is working:

TORCH_API ::std::tuple<at::Tensor&, at::Tensor&> median_out_mps(
    const at::Tensor& self,
    int64_t dim,
    bool keepdim,
    at::Tensor& values,
    at::Tensor& indices) {
  return std::tuple<Tensor&, Tensor&>(values, indices);
}

Going ahead with this.

kulinseth pushed a commit to kulinseth/pytorch that referenced this issue Dec 10, 2022
## Summary ⚡

**Aim**: Add support for aten::median for MPS backend (Fixes pytorch#87220)

This is a fresh, clean PR built on the previous [PR](pytorch#88554).

- Implementing the new median function in aten/src/ATen/native/mps/operations/ReduceOps.mm
- Adding it to aten/src/ATen/native/native_functions.yaml
- Adding it to existing test_median

### **This will work like this** 🪶
Median of the entire input tensor on MPS:
`torch.median(mps_inputTensor)`
Median along a dim:
`torch.median(mps_inputTensor, dim=[int], keepdim=[Bool])`
Pull Request resolved: pytorch#88807
Approved by: https://github.com/kulinseth
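
For reference, the kind of check a test like test_median typically adds for a new MPS op is a comparison against the CPU result; a sketch (variable names are illustrative, not taken from the actual test file):

import torch

x_cpu = torch.randn(3, 5)
x_mps = x_cpu.to("mps")

# Whole-tensor median should match the CPU result exactly,
# since the op only selects an existing element.
assert torch.equal(torch.median(x_cpu), torch.median(x_mps).cpu())

# Median along a dim returns (values, indices); with random input,
# ties are essentially impossible, so indices should match as well.
v_cpu, i_cpu = torch.median(x_cpu, dim=1, keepdim=True)
v_mps, i_mps = torch.median(x_mps, dim=1, keepdim=True)
assert torch.equal(v_cpu, v_mps.cpu()) and torch.equal(i_cpu, i_mps.cpu())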