Package request intel oneapi #74148

Open
alkeryn opened this issue Nov 25, 2019 · 16 comments

@alkeryn
Contributor

alkeryn commented Nov 25, 2019

So Intel just released oneAPI and I wish to have it in NixOS.
It is basically a GPGPU and other-accelerators solution:

https://software.intel.com/en-us/oneapi/base-kit

@stale

stale bot commented Jun 1, 2020

Thank you for your contributions.
This has been automatically marked as stale because it has had no activity for 180 days.
If this is still important to you, we ask that you leave a comment below. Your comment can be as simple as "still important to me". This lets people see that at least one person still cares about this. Someone will have to do this at most twice a year if there is no other activity.
Here are suggestions that might help resolve this more quickly:

  1. Search for maintainers and people that previously touched the related code and @ mention them in a comment.
  2. Ask on the NixOS Discourse.
  3. Ask on the #nixos channel on irc.freenode.net.

@stale stale bot added the "2.status: stale" label Jun 1, 2020
@bzizou
Contributor

bzizou commented Feb 11, 2022

I created a NUR package for oneAPI: https://github.com/Gricad/nur-packages/blob/master/pkgs/intel/oneapi.nix
It may still need some improvements, but I managed to use it to recompile the package of a molecular dynamics software (LAMMPS) to use the Intel compiler and Intel MPI:
https://github.com/Gricad/nur-packages/tree/master/pkgs/lammps (tested, runs OK on two 32-core nodes)

My intel-oneapi package needs to be built with "sandbox = false"; I still have to investigate how to fix that, but I'm wondering what else could prevent it from one day going into the official nixpkgs repository...
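
For anyone who wants to experiment with it before it lands in nixpkgs, a minimal sketch of pulling that expression in directly might look like the following. The file path and the assumption that it takes no unusual arguments are guesses from the linked repo, and the build is expected to need `sandbox = false` in nix.conf:

```nix
# shell.nix — hedged sketch; attribute paths and arguments are assumptions,
# and the build currently requires `sandbox = false` in nix.conf.
let
  pkgs = import <nixpkgs> { };
  gricad = builtins.fetchTarball
    "https://github.com/Gricad/nur-packages/archive/master.tar.gz";
  # callPackage fills in whatever inputs pkgs/intel/oneapi.nix declares.
  intel-oneapi = pkgs.callPackage "${gricad}/pkgs/intel/oneapi.nix" { };
in
pkgs.mkShell {
  packages = [ intel-oneapi ];
}
```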

@stale stale bot removed the "2.status: stale" label Feb 11, 2022
@svenstaro

This is done, isn't it? https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/science/math/mkl/default.nix

@alkeryn
Contributor Author

alkeryn commented Aug 19, 2022

@svenstaro this looks like a library based on oneapi and not the oneapi toolchain itself.

@svenstaro

I think it's it. The source URLs are all the real deal.

@alkeryn
Contributor Author

alkeryn commented Aug 19, 2022

@svenstaro it isn't oneapi but oneapi-mkl; the package name itself is "mkl" and the description reads:

    Intel OneAPI Math Kernel Library (Intel oneMKL) optimizes code with minimal effort for future generations of Intel processors. It is compatible with your choice of compilers, languages, operating systems, and linking and threading models.

It isn't oneapi itself but just a library.

@ziguana
Contributor

ziguana commented Nov 21, 2022

I'm trying to merge the L0 loader and GPU driver in #201063, and would love a review. Thanks!

@tyler274

Would greatly appreciate this as well.

@nviets
Contributor

nviets commented May 5, 2023

@bzizou - what's in the way of merging your OneAPI directly into nixpkgs?

@bzizou
Contributor

bzizou commented May 9, 2023

@bzizou - what's in the way of merging your OneAPI directly into nixpkgs?

@nviets The problem is that my package only builds with sandbox=false

@arunoruto

This would be awesome since I would love to test the numba-dpex project. However, it needs oneAPI as a prerequisite (see the PyPI page of dpctl, a dependency of numba-dpex). I always get the same error message: ImportError: libintlc.so.5: cannot open shared object file: No such file or directory.
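
libintlc.so.5 comes from the Intel compiler runtime, so on NixOS the usual workaround would be to make that runtime visible on the loader path. A minimal sketch, assuming a local package (here called `intel-compiler-rt`, a placeholder, not an existing nixpkgs attribute) provides the library under its lib/ directory:

```nix
# shell.nix — hedged sketch: expose libintlc.so.5 to Python packages that dlopen it.
# `intel-compiler-rt` is a placeholder for whatever package ships the Intel
# compiler runtime libraries.
let
  pkgs = import <nixpkgs> { };
  intel-compiler-rt = pkgs.callPackage ./intel-compiler-rt.nix { };
in
pkgs.mkShell {
  packages = [ pkgs.python3 ];
  # Point the dynamic loader at the runtime libraries (libintlc.so.5 et al.).
  LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath [ intel-compiler-rt ];
}
```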

@diptorupd

diptorupd commented Apr 11, 2024

I am an engineer at Intel and maintain the numba-dpex project, which is part of the larger oneAPI software stack. Right now I have no understanding of Nix or how the packaging works, but I wanted to provide a few suggestions with respect to @arunoruto's question about numba-dpex.

Specific to numba-dpex and dpctl, at this point the whole of oneAPI is not needed. At runtime both numba-dpex and dpctl require the dpcpp compiler runtime and OpenCL/LevelZero drivers. To build from source, the dpcpp compiler is needed. Dpcpp is an LLVM-based compiler that implements the SYCL kernel programming language.

One detour that I can suggest is to build a standalone Nix package for the dpcpp compiler. It is even possible to use the open-source version of the Intel dpcpp compiler (https://github.com/intel/llvm/releases) instead of the proprietary version in oneAPI. The open-source version of the compiler can support Nvidia and AMD devices out of the box along with Intel devices.

Once there is a dpcpp compiler package, dpctl and numba-dpex can be built using it.

Edit: Tagging @oleksandr-pavlyk who maintains dpctl.
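
For anyone picking this up on the Nix side, a very rough sketch of such a standalone package follows. It is untested and only meant to orient a packager: the release tag, hash, and cmake flags are placeholders loosely inferred from upstream's getting-started docs (upstream normally configures itself via buildbot/configure.py):

```nix
# dpcpp.nix — hedged sketch of packaging the open-source intel/llvm (DPC++) compiler.
# Tag, hash, and configure flags are assumptions to be filled in; they roughly
# mirror what intel/llvm's buildbot/configure.py sets up.
{ lib, stdenv, fetchFromGitHub, cmake, ninja, python3, zlib, libxml2 }:

stdenv.mkDerivation rec {
  pname = "dpcpp";
  version = "2024-WW14";                    # placeholder release tag
  src = fetchFromGitHub {
    owner = "intel";
    repo = "llvm";
    rev = version;
    hash = lib.fakeHash;                    # fill in after the first fetch
  };

  nativeBuildInputs = [ cmake ninja python3 ];
  buildInputs = [ zlib libxml2 ];

  # Configure only the llvm subdirectory of the monorepo, enabling clang and
  # the SYCL runtime.
  cmakeDir = "../llvm";
  cmakeFlags = [
    "-DLLVM_ENABLE_PROJECTS=clang;sycl"
    "-DLLVM_TARGETS_TO_BUILD=X86"
    "-DCMAKE_BUILD_TYPE=Release"
  ];

  meta = with lib; {
    description = "Open-source Intel DPC++/SYCL compiler (sketch, untested)";
    homepage = "https://github.com/intel/llvm";
    license = licenses.asl20;               # Apache-2.0 with LLVM exception
    platforms = platforms.linux;
  };
}
```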

@diptorupd

The open-source version of the compiler can support Nvidia and AMD devices out of the box along with Intel devices.

I think https://github.com/NixOS/nixpkgs/blob/nixos-23.11/pkgs/development/compilers/llvm/17/llvm/default.nix can be the starting point for building a dpcpp Nix package, but instead of using upstream LLVM, the https://github.com/intel/llvm repository can be used.
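
Concretely, one way to try that (a rough, untested sketch; rev and hash are placeholders, and extra SYCL-specific cmake flags and patch adjustments would still be needed) is to override the source of the existing LLVM 17 package:

```nix
# Hedged sketch: reuse the nixpkgs llvmPackages_17 expression but point it at
# intel/llvm. The stock patches and flags will almost certainly need adjusting.
let
  pkgs = import <nixpkgs> { };
in
pkgs.llvmPackages_17.llvm.overrideAttrs (old: {
  pname = "intel-llvm";
  src = pkgs.fetchFromGitHub {
    owner = "intel";
    repo = "llvm";
    rev = "2024-WW14";            # placeholder tag
    hash = pkgs.lib.fakeHash;     # fill in after the first build attempt
  };
})
```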

@MordragT

One detour that I can suggest is to build a standalone Nix package for the dpcpp compiler. It is even possible to use the open-source version of the Intel dpcpp compiler (https://github.com/intel/llvm/releases) instead of the proprietary version in oneAPI. The open-source version of the compiler can support Nvidia and AMD devices out of the box along with Intel devices.

Thank you for the insights. I actually tried to package that some time ago: https://github.com/MordragT/nixos/tree/master/pkgs/xpu-packages (I know, weird name). But to support Intel Arc GPUs you would need the proprietary version, right?

@diptorupd

diptorupd commented Apr 12, 2024

But to support intel arc gpus you would need the proprietary version right ?

@MordragT I do not think so. There are two aspects for device support:

  1. The compiler should support the device as a code-generation target. The ARC devices are listed as supported targets here: https://intel.github.io/llvm-docs/UsersManual.html.
     • intel_gpu_acm_g12, intel_gpu_dg2_g12 - Alchemist G12 Intel graphics architecture
     • intel_gpu_acm_g11, intel_gpu_dg2_g11 - Alchemist G11 Intel graphics architecture
     • intel_gpu_acm_g10, intel_gpu_dg2_g10 - Alchemist G10 Intel graphics architecture
  2. At the level of dpcpp, "supported device" means that the compiler is able to generate a kernel in the device-specific intermediate representation (IR) format. In our case, it is the SPIR-V format. The compilation from the IR to the actual binary executable happens in the device driver that is called by the compiler runtime. For the ARC devices, the compilation happens using the Intel Graphics Compiler (igc) JIT compiler that is invoked by the Level Zero driver. Any recent Intel compute runtime release will have support for ARC (previously known as DG2); you can refer to the device compatibility matrix here: https://github.com/intel/compute-runtime/releases

Based on this, specifically for ARC GPUs the open-source compiler should work, provided both the compiler and compute runtime packages have support for them. I normally do not work on these GPUs myself, but will confirm once I get into work and access one of our lab systems.
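
To make the first point concrete, ahead-of-time compilation for an Alchemist target would look roughly like the sketch below. `dpcpp` is a placeholder for a package exposing the intel/llvm clang++, and the -fsycl-targets value is one of the names listed in the users manual above:

```nix
# Hedged sketch: AOT-compile a SYCL source for an ARC (Alchemist G10) GPU.
# `dpcpp` is a placeholder for a package providing the intel/llvm clang++.
{ stdenv, dpcpp }:

stdenv.mkDerivation {
  pname = "sycl-arc-aot-example";
  version = "0.1";
  src = ./.;                        # expects a main.cpp using SYCL

  nativeBuildInputs = [ dpcpp ];

  buildPhase = ''
    # -fsycl-targets selects the device(s) to generate code for ahead of time;
    # intel_gpu_acm_g10 is the Alchemist G10 target from the users manual.
    clang++ -fsycl -fsycl-targets=intel_gpu_acm_g10 main.cpp -o sycl-example
  '';

  installPhase = ''
    install -Dm755 sycl-example $out/bin/sycl-example
  '';
}
```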

But you are right, in general there can be devices that are not supported in the open-source compiler stack.

I also looked at your recipes (?) (sorry, coming from the conda world); they will also need an extra step for libsycl or dpcpp-cpp-rt if the toolchain has to support the SYCL language. If it is of any use, here is how we package the proprietary compiler and its runtime for the conda package manager on the conda-forge channel: https://github.com/conda-forge/intel-compiler-repack-feedstock.
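
For the runtime piece, a possible Nix-side analogue of that conda repack (purely a sketch; the URL, version, hash, library layout, and dependencies are all placeholders or guesses) would be to unpack the prebuilt dpcpp-cpp-rt libraries and let autoPatchelfHook fix their interpreters and rpaths:

```nix
# Hedged sketch: repackage a prebuilt Intel compiler runtime (libsycl, libintlc, ...)
# so other packages can link against it. URL, version, hash, and layout are placeholders.
{ lib, stdenv, fetchurl, autoPatchelfHook, zlib }:

stdenv.mkDerivation rec {
  pname = "dpcpp-cpp-rt";
  version = "0.0.0";                               # placeholder
  src = fetchurl {
    url = "https://example.invalid/dpcpp-cpp-rt-${version}.tar.gz";  # placeholder
    hash = lib.fakeHash;
  };

  nativeBuildInputs = [ autoPatchelfHook ];
  buildInputs = [ stdenv.cc.cc.lib zlib ];         # likely runtime deps, a guess

  installPhase = ''
    mkdir -p $out/lib
    # Copy the shared libraries from wherever the archive puts them (layout is a guess).
    cp -r lib/* $out/lib/
  '';
}
```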

@jiriks74

jiriks74 commented May 8, 2024

Hello,
could I somehow help to push this further? I have a laptop with an Intel dGPU and I'd like to run Blender every now and then. It would really help me if I could use my dGPU as it cuts rendering times quite a lot (by half in the Classroom demo).
