
New packages: ROCm core and OpenCL #21153

Closed · wants to merge 9 commits

@ahesford (Member) commented Apr 19, 2020

This PR includes several packages designed to bring the OpenCL portion of the AMD ROCm ecosystem to Void. It addresses Issue #19507. There are many other packages that AMD provides for GPGPU computing, but these can be added piecemeal as users demand.

The packages are currently only for x86_64*. While at least some of the packages will compile on other 64-bit architectures, I have no hardware to test them. They are certainly not suitable for 32-bit architectures; some internal data structures rely on uint64_t values that are cast to pointers. At a minimum, a thorough audit would be necessary to ensure that these casts are safe (e.g., that the values stored were only ever upcast from 32-bit pointers). More extensive work may be necessary to support 32-bit architectures, probably without significant benefit.

The packages successfully identify a Radeon RX 580 on an x86_64 installation using both clinfo and rocminfo as provided. Furthermore, a version of pyopencl linked against these ROCm packages successfully runs a simple program that validates arithmetic on GPU-bound arrays.

There are caveats with this set of packages, almost all of which revolve around the incompatibilities between the Void-provided ocl-icd and the Khronos OpenCL ICD loader required by (and built into) rocm-opencl-runtime.

  1. All packages install into /opt/rocm. This keeps the environment closer to the one officially sanctioned by AMD and helps avoid conflicts between other packages and those provided here (for example, clinfo and the ICD loader itself). Eventually, we may be able to move the files into /usr.
  2. The Khronos OpenCL ICD loader built into rocm-opencl-runtime is an outdated, pre-release commit. If AMD updates its sources to use the release version of that loader (which has a backward-incompatible API change), we may be able to make ROCm compatible with ocl-icd or replace ocl-icd with the official Khronos loader.
  3. In the meantime, to avoid shlibs conflicts, the OpenCL ICD loader is installed as /opt/rocm/lib/libOpenCL-ROCm.so (plus an appropriately versioned shared library), which means that programs wishing to use the ROCm OpenCL stack must explicitly link against this library instead of the generic libOpenCL.so (see the sketch after this list).
  4. I do not recommend linking any Void packages against this specific library, because doing so would make those packages ROCm-only. For the time being, ROCm is intended for end users to link against explicitly. Linking Void packages against ROCm would require that the ICD loader used by ROCm be compatible with ocl-icd, or that ocl-icd be replaced by the Khronos loader. However, because the Khronos loader changed its API for the release version, such a change is not yet appropriate.
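
To make caveat 3 concrete, here is a minimal sketch of how an end user might link against the relocated loader. The program cltest.c is hypothetical, and the include path is my assumption; only /opt/rocm/lib and the library name come from the caveats above:

    # Hypothetical example: build a small OpenCL program against the ROCm
    # ICD loader instead of the generic libOpenCL.so provided by ocl-icd.
    cc -o cltest cltest.c \
        -I/opt/rocm/include \
        -L/opt/rocm/lib -lOpenCL-ROCm \
        -Wl,-rpath,/opt/rocm/lib   # let the runtime linker find the ROCm loader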

Hopefully, if AMD updates its dependence on the Khronos ICD loader, we can resolve some of these caveats in the future and make ROCm a more natural Void component. For now, these packages are useful for those who need an AMD OpenCL solution and would rather custom-link software against the AMD ICD loader than hack the amdgpu-pro driver into an ICD compatible with ocl-icd.

Some patches were made to relocate some files in /opt/rocm and make everything build on x86_64-musl. Where appropriate, these patches will be pushed upstream to clean up the distribution and packaging.

Update: I pushed some new commits to update license information that triggered an xlint failure. I've also disabled CI builds because they will time out on rocm-llvm.

@ahesford (Member Author):

Some updates:

  • All packages install into the normal /usr hierarchy now.
  • The rocm-llvm package (which is really only used to build rocm-comgr and isn't intended for end-users) only targets AMDGPU and installs under /usr/lib/rocm-llvm.
  • All files which might conflict with other packages (usr/bin/clinfo, usr/lib/libOpenCL.so* and usr/include/CL/*) have custom names: clinfo is rocm-clinfo, libOpenCL.so* is libOpenCL-ROCm.so* and usr/include/CL becomes usr/include/rocm/CL.

I believe this is ready for roll-out, although individual OpenCL-aware programs will need special handling to support ROCm. The incompatibilities between libOpenCL in ocl-icd and libOpenCL in rocm-opencl-runtime make this unavoidable.

The library rename in rocm-opencl-runtime is more than just a linking issue. While some programs explicitly link against libOpenCL and could be altered to link against libOpenCL-ROCm, many OpenCL programs load the OpenCL ICD with dlopen. Those packages will need source patches that change libOpenCL.so* references in the code to libOpenCL-ROCm.so* (see the illustration below). Packages will also need to use the header files in usr/include/rocm/CL, which is probably as simple as not depending on opencl2-headers and adding usr/include/rocm to the compiler include paths.
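
To illustrate the kind of source patch involved (the file name and the dlopen string below are hypothetical, and a real package would carry a proper patch file rather than a template-time sed):

    # Illustrative only: redirect a hard-coded dlopen() target from the
    # generic ICD loader to the renamed ROCm loader during post_patch().
    post_patch() {
        vsed -i src/ocl_loader.c \
            -e 's|dlopen("libOpenCL\.so|dlopen("libOpenCL-ROCm.so|'
    }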

In most cases, I think OpenCL-aware packages can be custom-built to support ROCm with a build option that toggles a few simple things. This PR includes a modified hashcat template to do just that. Of course, because ROCm will not be universally useful to Void, the rocm build option should be disabled by default.
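
As a sketch of what such a toggle might look like in a template (the option name follows this PR, but the exact depends and flags are illustrative, not copied from the hashcat template):

    # Hypothetical xbps-src template fragment; the option stays out of
    # build_options_default, so standard builds are unaffected.
    build_options="rocm"
    desc_option_rocm="Use the ROCm OpenCL ICD loader instead of ocl-icd"

    if [ "$build_option_rocm" ]; then
        makedepends+=" rocm-opencl-runtime"
        CFLAGS+=" -I/usr/include/rocm"   # pick up ROCm's CL/ headers
        LDFLAGS+=" -lOpenCL-ROCm"        # link the renamed ICD loader
    fi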

@ahesford force-pushed the rocm branch 6 times, most recently from 3fbceea to 2c48c94 (April 30, 2020)
@ahesford (Member Author) commented Apr 30, 2020

Still more updates:

  • I didn't catch this with the last push, but the ROCm libamdocl64.so ICD installed by rocm-opencl-runtime does work with ocl-icd, at least with clinfo from ocl-icd, hashcat, darktable (subject to the next point below) and a simple test with pyopencl installed as a wheel from PyPI (which installs its own copy of the ocl-icd loader). As a result, the rocm-opencl-runtime package now installs an ICD descriptor in /etc/OpenCL/vendors to allow discovery and use through ocl-icd (see the sketch after this list).
  • Because there may be lingering incompatibilities between the libOpenCL implementations, I still install the dedicated ROCm ICD loader at /usr/lib/libOpenCL-ROCm.so* and an /etc/OpenCL/rocm-vendors directory with an ICD descriptor pointing to libamdocl64.so. This allows the ROCm environment to be isolated if desired.
  • The darktable package requires OpenCL image support. AMD offers image support only through a closed-source extension library and claims a fully open-source version is "on [their] list of goals" (see ROCm-OpenCL-Runtime Issue #59). I've wrapped the closed-source library in an optional package, rocm-hsa-ext, based on the Debian package of the library. The only license I can find relating to this software is in the header files that ship in the package, which asserts an NCSA license that does not seem to prohibit redistribution in binary form. Accordingly, I do not think restricted is required here. Because the source is not (yet) available, I've marked this extension package for the nonfree repo.
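
For reference, an ICD descriptor is just a one-line text file naming a vendor library; the loader scans the vendors directory and dlopen()s whatever each *.icd file names. The descriptor filename below is an assumption, not necessarily what the package installs:

    # A minimal ICD descriptor: one line naming the vendor library.
    $ cat /etc/OpenCL/vendors/amdocl64.icd
    libamdocl64.so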

I hope that, with these new changes, ROCm will "just work" with (the majority of) the OpenCL-aware packages in the Void repos. I've left the hashcat modifications intact to allow explicit linking against the ROCm ICD loader as a non-default build option, but the standard build (with a patch folded into these modifications) should work too.

@lemmi (Member) commented May 1, 2020

Hey,

awesome work so far. For my purposes I built and installed rocm-opencl-runtime and rocm-hsa-ext. I used LD_LIBRARY_PATH=/opt/rocm/lib <cmd> to run clinfo, blender, hashcat and darktable with a Vega 64.

  • blender cycles "works"
    • Too slow to be usable
    • AMDGPU-PRO drivers are several times faster
  • darktable works
    • I can't make out a difference from the AMDGPU-PRO drivers
    • Depending on your card, you may need to set opencl_avoid_atomics=true in ~/.config/darktable/darktablerc
  • hashcat --benchmark fails with clCreateCommandQueue(): CL_OUT_OF_HOST_MEMORY
    • might be unrelated; I can't get the benchmark to run with AMDGPU-PRO either
    • maybe some algorithms work, but I didn't test; the same is true for AMDGPU-PRO
  • clinfo output:
Number of platforms                               1
  Platform Name                                   AMD Accelerated Parallel Processing
  Platform Vendor                                 Advanced Micro Devices, Inc.
  Platform Version                                OpenCL 2.0 AMD-APP.internal (3098.0)
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_icd cl_amd_object_metadata cl_amd_event_callback 
  Platform Max metadata object keys (AMD)         8
  Platform Extensions function suffix             AMD

  Platform Name                                   AMD Accelerated Parallel Processing
Number of devices                                 1
  Device Name                                     gfx900
  Device Vendor                                   Advanced Micro Devices, Inc.
  Device Vendor ID                                0x1002
  Device Version                                  OpenCL 2.0 
  Driver Version                                  3098.0 (HSA1.1,LC)
  Device OpenCL C Version                         OpenCL C 2.0 
  Device Type                                     GPU
  Device Board Name (AMD)                         Vega 10 XL/XT [Radeon RX Vega 56/64]
  Device Topology (AMD)                           PCI-E, 03:00.0
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               64
  SIMD per compute unit (AMD)                     4
  SIMD width (AMD)                                16
  SIMD instruction width (AMD)                    1
  Max clock frequency                             1630MHz
  Graphics IP (AMD)                               9.0
  Device Partition                                (core)
    Max number of sub-devices                     64
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x1024
  Max work group size                             256
  Preferred work group size (AMD)                 256
  Max work group size (AMD)                       1024
  Preferred work group size multiple              64
  Wavefront width (AMD)                           64
  Preferred / native vector sizes                 
    char                                                 4 / 4       
    short                                                2 / 2       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 1 / 1        (cl_khr_fp16)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (cl_khr_fp16)
    Denormals                                     No
    Infinity and NANs                             No
    Round to nearest                              No
    Round to zero                                 No
    Round to infinity                             No
    IEEE754-2008 fused multiply-add               No
    Support is emulated in software               No
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              8573157376 (7.984GiB)
  Global free memory (AMD)                        8372224 (7.984GiB)
  Global memory channels (AMD)                    64
  Global memory banks per channel (AMD)           4
  Global memory bank width (AMD)                  256 bytes
  Error Correction support                        No
  Max memory allocation                           7287183769 (6.787GiB)
  Unified memory for Host and Device              No
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 Yes
    Fine-grained buffer sharing                   Yes
    Fine-grained system sharing                   No
    Atomics                                       No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       1024 bits (128 bytes)
  Preferred alignment for atomics                 
    SVM                                           0 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Max size for global variable                    7287183769 (6.787GiB)
  Preferred total size of global vars             8573157376 (7.984GiB)
  Global Memory cache type                        Read/Write
  Global Memory cache size                        16384 (16KiB)
  Global Memory cache line size                   64 bytes
  Image support                                   Yes
    Max number of samplers per kernel             26751
    Max size for 1D images from buffer            65536 pixels
    Max 1D or 2D image array size                 2048 images
    Base address alignment for 2D image buffers   256 bytes
    Pitch alignment for 2D image buffers          256 pixels
    Max 2D image size                             16384x16384 pixels
    Max 3D image size                             2048x2048x2048 pixels
    Max number of read image args                 128
    Max number of write image args                8
    Max number of read/write image args           64
  Max number of pipe args                         16
  Max active pipe reservations                    16
  Max pipe packet size                            2992216473 (2.787GiB)
  Local memory type                               Local
  Local memory size                               65536 (64KiB)
  Local memory syze per CU (AMD)                  65536 (64KiB)
  Local memory banks (AMD)                        32
  Max number of constant args                     8
  Max constant buffer size                        7287183769 (6.787GiB)
  Preferred constant buffer size (AMD)            16384 (16KiB)
  Max size of kernel argument                     1024
  Queue properties (on host)                      
    Out-of-order execution                        No
    Profiling                                     Yes
  Queue properties (on device)                    
    Out-of-order execution                        Yes
    Profiling                                     Yes
    Preferred size                                262144 (256KiB)
    Max size                                      8388608 (8MiB)
  Max queues on device                            1
  Max events on device                            1024
  Prefer user sync for interop                    Yes
  Number of P2P devices (AMD)                     0
  P2P devices (AMD)                               <printDeviceInfo:147: get number of CL_DEVICE_P2P_DEVICES_AMD : error -30>
  Profiling timer resolution                      1ns
  Profiling timer offset since Epoch (AMD)        0ns (Thu Jan  1 01:00:00 1970)
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Thread trace supported (AMD)                  No
    Number of async queues (AMD)                  8
    Max real-time compute queues (AMD)            8
    Max real-time compute units (AMD)             64
  printf() buffer size                            4194304 (4MiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_depth_images cl_amd_copy_buffer_p2p cl_amd_assembly_program 

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  AMD Accelerated Parallel Processing
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   Success [AMD]
  clCreateContext(NULL, ...) [default]            Success [AMD]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx900
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx900
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  Success (1)
    Platform Name                                 AMD Accelerated Parallel Processing
    Device Name                                   gfx900

ICD loader properties
  ICD loader Name                                 OpenCL ICD Loader
  ICD loader Vendor                               OCL Icd free software
  ICD loader Version                              2.2.12
  ICD loader Profile                              OpenCL 2.2

@ahesford (Member Author) commented May 1, 2020

Thanks for recording this. LD_LIBRARY_PATH is not necessary anymore because the reworked packages install libraries in /usr.

I suspect you ran hashcat from an official package. That version has a problem with ROCm because it sets the TMP environment variable as a "workaround" for bad argument parsing by some OCL compilers. ROCm uses the TMP variable (or TEMP, if found) as the location where OCL compilation objects are stored. hashcat sets this to /usr/share/hashcat, so the ROCm OCL compiler can't write the objects it wants to and fails. There are two ways around this:

  1. Build the version of hashcat in this PR, which includes a patch that undoes the TMP override (see the sketch after this list), or
  2. Run hashcat as root to allow it to write the build products it wants (they will be deleted when the program exits).
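
To illustrate the two options (the file name and exact statement in the first are my assumptions, not the actual patch in this PR):

    # Option 1, illustrated: remove the TMP override so the ROCm OCL
    # compiler writes its build objects to a real temporary directory.
    vsed -i src/shared.c -e '/setenv *("TMP"/d'

    # Option 2 needs no patch: a root shell can write where hashcat points TMP.
    sudo hashcat --benchmark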

I've filed an issue upstream to address this problematic behavior.

@ahesford force-pushed the rocm branch 4 times, most recently from afd1d81 to b5d10ba (May 2, 2020)
sgn pushed 8 commits to sgn/void-packages that referenced this pull request (May 3, 2020)
@FiCacador commented May 15, 2020

@ahesford thank you for the explanation. I'm not sure there is a violation (by the repo) if no binary is distributed and the package is built on the user's end, but either way, Void packaging has rules and I understand that. I'll stop expecting an opencl-amd package on Void and will keep an eye on this instead. Again, thank you.

@fosslinux there is also nothing pushing me to replace Manjaro on my main system, which works well for me, so I'll keep using Void only on my 10-year-old laptop. Which is OK; that's the beauty of Linux, after all.

@fosslinux (Contributor):

@FiCacador Nope, there isn't, and I don't see how that is relevant to my point. I don't have any problem with you using Manjaro!

@fosslinux (Contributor):

Ping?

@ahesford (Member Author) commented Aug 2, 2020

The reliance on a custom LLVM fork has raised some eyebrows, so I put this on the back burner for a while. It also seems that the official dpkg builds rely on the deprecated clang-ocl, while the official docs recommend the setup I put in place here. To make matters worse, the 3.5 release reshuffled the components a bit more, and I haven't been terribly interested in updating everything for the new version.

@fosslinux (Contributor):

Hm. That is interesting; why does it need a custom LLVM fork?

@ahesford (Member Author) commented Aug 4, 2020

I'm not sure what changes are in the AMD fork, but they want the code object manager and device libs to be compiled with their own version.

@ahesford reopened this Aug 4, 2020
@aurieh (Contributor) commented Aug 18, 2020

Wouldn't it be better to use the official package names as provided in the docs to maintain consistency with Ubuntu, SLES, CentOS, Arch and Gentoo? I'm currently working on packaging the rest of the ROCm stack (namely the HIP* libs), and would like to know which naming scheme to use for consistency.

@ahesford (Member Author):

Void naming policy is to use upstream repo names, not to invent our own, even when invented names would be consistent with packages in other distributions.

@fosslinux (Contributor):

@ahesford I have a diff for ROCm 3.10.0 and will update it for 4.0.0 shortly. Would you like me to open a new PR, or would you like to pull the changes into this one?

Patch attached:

rocm-3.10.0.patch.txt

@ahesford (Member Author):

@fosslinux Open a new one; this discussion is stale.

@ahesford (Member Author):

This may be revisited some day, but today is not the day.

@ahesford closed this Feb 15, 2021
github-actions bot locked this conversation as resolved and limited it to collaborators (May 16, 2021)
Labels: new-package (this PR adds a new package)

7 participants