
JAX + PyTorch produces OMP: Error #13: Assertion failure at kmp_affinity.cpp(532) #97635

@patrick-kidger

Description

🐛 Describe the bug

On my machine, running:

import jax
import torch

produces

OMP: Error #13: Assertion failure at kmp_affinity.cpp(532).
OMP: Hint Please submit a bug report with this message, compile and run commands used, and machine configuration info including native compiler and operating system versions. Faster response will be obtained by including all program sources. For information on submitting this issue, please see http://www.intel.com/software/products/support/.
fish: “ipython” terminated by signal SIGABRT (Abort)

I've tested this with (jax, jaxlib, pytorch) ∈ {(0.3.25, 0.3.25, 1.13.1), (0.4.6, 0.4.6, 1.13.1), (0.4.6, 0.4.6, 2.0.0)}.

JAX is installed via pip install jax jaxlib; PyTorch is installed via conda install pytorch cpuonly -c pytorch.

Switching the import order seems to fix things.
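
A minimal sketch of that workaround, based only on what I observed on the CPU-only setup described under Versions (not a general fix):

# Workaround observed on my machine: import torch before jax.
# With this order, the OMP affinity assertion at kmp_affinity.cpp(532) does not trigger.
import torch
import jax

# Quick sanity check that both libraries remain usable after import.
print(torch.__version__, jax.__version__)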

Versions

PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.18.4
Libc version: glibc-2.31

Python version: 3.8.16 (default, Mar  2 2023, 03:21:46)  [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.89-16172-g8db7d2810659-x86_64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   39 bits physical, 48 bits virtual
CPU(s):                          8
On-line CPU(s) list:             0-7
Thread(s) per core:              1
Core(s) per socket:              1
Socket(s):                       8
<Snipped, not sure I should be publicly sharing my robustness to certain vulnerabilities! Let me know if anything here is particularly needed.>

Versions of relevant libraries:
[pip3] mypy==1.0.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.2
[pip3] torch==2.0.0
[pip3] torchvision==0.15.0
[conda] blas                      1.0                         mkl  
[conda] cpuonly                   2.0                           0    pytorch
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2021.4.0           h06a4308_640  
[conda] mkl-service               2.4.0            py38h7f8727e_0  
[conda] mkl_fft                   1.3.1            py38hd3c417c_0  
[conda] mkl_random                1.2.2            py38h51133e4_0  
[conda] numpy                     1.24.2                   pypi_0    pypi
[conda] numpy-base                1.23.5           py38h31eccc5_0  
[conda] pytorch                   2.0.0               py3.8_cpu_0    pytorch
[conda] pytorch-mutex             1.0                         cpu    pytorch
[conda] torchvision               0.15.0                 py38_cpu    pytorch


    Labels

    needs reproduction: Ensure you have actionable steps to reproduce the issue. Someone else needs to confirm the repro.
    triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module.
