Building PyTorch for M1: problems using CPU #83905

@Selimonder

🐛 Describe the bug

Hello,
I am on an M1 Mac and trying to build PyTorch from source. The build works as expected with MPS acceleration, but when I move the computation to the CPU I start getting MKL errors. I tried conda install nomkl, but it made no difference. Any idea what could be causing this?
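
For reference, a minimal check (assuming the usual torch.backends.mkl.is_available() and torch.__config__.show() APIs) to see whether the build linked against MKL at all:

import torch

# True if this PyTorch build was linked against Intel MKL
print(torch.backends.mkl.is_available())

# Full build configuration string; the BLAS/MKL settings show up here
print(torch.__config__.show())
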
How I build PyTorch

# build pytorch
conda install astunparse numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses pkg-config libuv
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ USE_NNPACK=0 USE_CUDA=0 python setup.py install
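
A small sketch (standard library only, nothing PyTorch-specific) to confirm which architecture the conda Python used for the build actually runs as, since the Versions output further down reports an x86_64 Python platform:

import platform

# 'arm64' for a native Apple Silicon interpreter, 'x86_64' under Rosetta
print(platform.machine())

# Full platform string, e.g. 'macOS-12.5.1-arm64-arm-64bit' for a native interpreter
print(platform.platform())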

The actual bug

Python 3.8.13 (default, Mar 28 2022, 06:16:26)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.zeros(2).to('cpu')
<stdin>:1: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xe (Triggered internally at /Users/so-synth/Desktop/research/code/pytorch/torch/csrc/utils/tensor_numpy.cpp:77.)
>>> l = torch.nn.Linear(2, 4).to('cpu')
>>> o = l(x)
Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library.
The processor must support the Intel(R) Supplemental Streaming SIMD Extensions 3 (Intel(R) SSSE3) instructions.
The processor must support the Intel(R) Streaming SIMD Extensions 4.2 (Intel(R) SSE4.2) instructions.
The processor must support the Intel(R) Advanced Vector Extensions (Intel(R) AVX) instructions.

Same code running successfully on MPS acceleration

Python 3.8.13 (default, Mar 28 2022, 06:16:26)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.zeros(2).to('mps')
<stdin>:1: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xe (Triggered internally at /Users/so-synth/Desktop/research/code/pytorch/torch/csrc/utils/tensor_numpy.cpp:77.)
>>> l = torch.nn.Linear(2, 4).to('mps')
>>> o = l(x)
>>> o
/Users/so-synth/opt/anaconda3/envs/frontend/lib/python3.8/site-packages/torch/_tensor_str.py:106: UserWarning: The operator 'aten::masked_select' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/so-synth/Desktop/research/code/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  nonzero_finite_vals = torch.masked_select(
tensor([ 0.4546,  0.2424,  0.3183, -0.5697], device='mps:0',
       grad_fn=<MpsLinearBackward0>)
>>>

Versions

Collecting environment information...
PyTorch version: 1.13.0a0+gitda520a4
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 12.5.1 (x86_64)
GCC version: Could not collect
Clang version: 13.1.6 (clang-1316.0.21.2.5)
CMake version: version 3.22.1
Libc version: N/A
Python version: 3.8.13 (default, Mar 28 2022, 06:16:26)  [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-10.16-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] torch==1.13.0a0+gitda520a4
[conda] mkl                       2022.0.0           hecd8cb5_105
[conda] mkl-include               2022.0.0           hecd8cb5_105
[conda] numpy                     1.20.3                   pypi_0    pypi
[conda] torch                     1.13.0a0+gitda520a4          pypi_0    pypi

cc @malfet @seemethere

Labels

module: build, module: m1, triaged
