Some operations are not implemented when using the MPS backend #77754
For me, even executing sample code like

```python
mps_device = torch.device("mps")
z = torch.ones(5, device=mps_device)
```

results in an error. FYI, I'm using an Intel Mac with an AMD graphics card.
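A defensive sketch (my own, not from the thread): guard MPS use behind an availability check so the same script runs on machines where the backend is missing or not built in. `torch.backends.mps.is_available()` is the real API introduced in the 1.12 nightlies; the `pick_device_name` helper name is hypothetical.

```python
# Hedged sketch: choose "mps" only when torch is importable AND the MPS
# backend reports itself available; otherwise fall back to "cpu".
def pick_device_name() -> str:
    try:
        import torch
    except ImportError:
        return "cpu"  # no torch in this environment at all
    mps = getattr(torch.backends, "mps", None)  # absent on older builds
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(pick_device_name())
```

You could then write `torch.ones(5, device=pick_device_name())` and the same code would run on an Intel Mac, an M1, or Linux without edits.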
I am curious: are you building PyTorch yourself, or is this based on the nightly binaries? My understanding was that the runners currently don't have MPS support enabled. Also, what's the OS version?
I'm running into a similar issue trying to run
Check that you are using the right Python version (built for ARM and not x86): #77748 (comment)
@singularity-s0 I am afraid the x86 + AMD GPU builds are not fully finalized right now (we plan on getting them ready as soon as possible). You can build from source to get an MPS-enabled build, or wait for #77662 to land, which will enable MPS in the nightly Intel build.
@thipokKub thanks for the report, and for sharing the exact model you're looking for. I created a tracking issue for this, #77764, so that we have a centralized place to know who is working on what.
Oh... I didn't realize MPS was (currently) M1-only. The blog post mentioned "GPU-accelerated PyTorch training on Mac", so I tried the nightly version on my Intel Mac (macOS 12.4). Thanks for the clarification.
Another unsupported operation. Reproduction:

```python
import torch
dist = torch.distributions.Categorical(torch.tensor([0.5, 0.5]).to('mps'))
print(dist.sample())
```
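A workaround sketch for the reproduction above (my own suggestion, not from the thread): while sampling is unimplemented on MPS, build and sample the distribution on the CPU, and move only the resulting tensor to the accelerator when it is available.

```python
import torch

# Keep the probabilities and the sampling op on the CPU, where
# Categorical.sample() is always implemented.
probs = torch.tensor([0.5, 0.5])
dist = torch.distributions.Categorical(probs)
sample = dist.sample()  # sampled on CPU

# Move the (tiny) result to MPS only if the backend is usable here.
mps = getattr(torch.backends, "mps", None)
device = "mps" if (mps is not None and mps.is_available()) else "cpu"
sample = sample.to(device)
print(sample.item())
```

This trades a small host-to-device copy for compatibility; for a two-element draw the cost is negligible.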
For Intel, it's better to use https://github.com/oneapi-src/oneDNN
Thanks @thipokKub for reporting the ops. These ops are captured as part of #77764. Closing this issue; please re-open or comment on the linked issue for any other ops.
NotImplementedError: The operator 'aten::cumsum.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on #77764. As a temporary fix, you can set the environment variable
Tried to fine-tune a transformer model on an M2, but got: NotImplementedError: The operator 'aten::cumsum.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on #77764. As a temporary fix, you can set the environment variable. Any solution yet?
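The error messages above are truncated mid-sentence; the variable they refer to is, to my knowledge, PyTorch's documented `PYTORCH_ENABLE_MPS_FALLBACK`. A minimal sketch of using it, with the caveat that it is read at import time, so it must be set before `import torch` (or exported in the shell before launching the script):

```python
import os

# Must run before torch is imported anywhere in the process; otherwise
# the setting is silently ignored.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# import torch  # unsupported MPS ops now fall back to the CPU (slower)
print(os.environ["PYTORCH_ENABLE_MPS_FALLBACK"])
```

Equivalently, `PYTORCH_ENABLE_MPS_FALLBACK=1 python train.py` from the shell. Note the fallback copies tensors to the CPU for each unsupported op, so it is a correctness fix, not a performance one.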
Running on an M1 Max: NotImplementedError: The operator 'aten::upsample_linear1d.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on #77764. As a temporary fix, you can set the environment variable
Thanks @nigelparsad, we will look into adding this op. Can you please provide more information about the use case or application you are targeting with this?
@kulinseth Thank you for looking into this. I have run into this issue using NeuralForecast, specifically their NHITS model. In the latter link, the NHITS model's last parameter is **trainer_kwargs, which are the keyword trainer arguments inherited from PyTorch Lightning's trainer. I pass Lightning's accelerator='mps' argument here, resulting in the error listed above. Please let me know if you require any more information for this particular use case.
Getting this error with torch 2.2.1 and torchvision 0.17.1
Getting the following error
when running
train_df is configured according to Nixtla's API.
@yaniv92648, I am facing the same issue with AutoTCN on an M1 Pro.
🐛 Describe the bug
Recently, PyTorch added support for the Metal backend (see #47702 (comment)), but it seems like there are some missing operations. For example
and
To reproduce
Versions
PyTorch version: 1.12.0.dev20220518
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.3 (arm64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2)
CMake version: version 3.22.2
Libc version: N/A
Python version: 3.9.12 | packaged by conda-forge | (main, Mar 24 2022, 23:25:14) [Clang 12.0.1 ] (64-bit runtime)
Python platform: macOS-12.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.22.3
[pip3] pytorch-lightning==1.6.3
[pip3] pytorch-metric-learning==1.3.0
[pip3] torch==1.12.0.dev20220518
[pip3] torchaudio==0.11.0
[pip3] torchinfo==1.6.6
[pip3] torchmetrics==0.8.2
[pip3] torchvision==0.12.0
[conda] numpy 1.21.6 pypi_0 pypi
[conda] pytorch-lightning 1.6.3 pypi_0 pypi
[conda] pytorch-metric-learning 1.3.0 pypi_0 pypi
[conda] torch 1.12.0.dev20220518 pypi_0 pypi
[conda] torchaudio 0.11.0 pypi_0 pypi
[conda] torchinfo 1.6.6 pypi_0 pypi
[conda] torchmetrics 0.8.2 pypi_0 pypi
[conda] torchvision 0.12.0 pypi_0 pypi