
[flang-rt] Add experimental support for GPU build #131826

Merged 1 commit into llvm:main on Mar 24, 2025

Conversation

jhuber6
Contributor

@jhuber6 jhuber6 commented Mar 18, 2025

Summary:
This patch adds initial support for compiling `flang-rt` directly for
the GPU. The method used here matches what's already done for `libc` and
`libc++` for the GPU and builds off of those projects.

Mainly, this requires setting up some flags and selecting the sources that
currently compile. This will deposit the resulting library in the
appropriate directory. These files are then intended to be linked via
`-Xoffload-linker` support in the offloading driver.

```
lib/clang/21/lib/nvptx64-nvidia-cuda/libflang_rt.runtime.a
lib/clang/21/lib/amdgcn-amd-amdhsa/libflang_rt.runtime.a
```
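
For example, linking this GPU runtime into an offloaded Fortran program might look roughly like the sketch below. This is only an illustration: the driver invocation, `--offload-arch` value, and file names are placeholders, not something this patch adds.

```
# Hypothetical link step: forward the GPU flang-rt archive to the offload
# device linker. The architecture and file names are illustrative.
flang -fopenmp --offload-arch=gfx90a main.f90 \
  -Xoffload-linker -lflang_rt.runtime -o main
```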

This is obviously missing a lot of functions, mainly the I/O support.
Most of what we cannot support is due to relying on POSIX facilities that
just don't make sense on the GPU, such as `pthreads` or `sema`.

Getting unit tests to run on this will also be a challenge. We could run
tests the same way we do with `libc`, but the problem there is that the
`libc` test suite is freestanding, while `gtest` currently doesn't
compile on the GPU because it relies on a lot of hosted-environment
features. If the unit tests were simply `int main` programs, they would work.

I don't understand the actual runtime code very well, so I'd appreciate
some guidance on how to actually support Fortran I/O from this interface.
As I understand it, Fortran I/O requires stack-like operations, which
conflicts with the SIMT model GPUs use. In the worst case, we could
burn some LDS to keep a stack, or serialize the operations somehow, since
we can always iterate over all the active lanes.

Building this currently looks like the following; it depends on the arguments
added in #131695.

```
    -DRUNTIMES_nvptx64-nvidia-cuda_LLVM_ENABLE_RUNTIMES="compiler-rt;libc;libcxx;libcxxabi;flang-rt" \
    -DRUNTIMES_amdgcn-amd-amdhsa_LLVM_ENABLE_RUNTIMES="compiler-rt;libc;libcxx;libcxxabi;flang-rt" \
    -DRUNTIMES_nvptx64-nvidia-cuda_FLANG_RT_LIBC_PROVIDER=llvm \
    -DRUNTIMES_nvptx64-nvidia-cuda_FLANG_RT_LIBCXX_PROVIDER=llvm \
    -DRUNTIMES_amdgcn-amd-amdhsa_FLANG_RT_LIBC_PROVIDER=llvm \
    -DRUNTIMES_amdgcn-amd-amdhsa_FLANG_RT_LIBCXX_PROVIDER=llvm
```
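
For context, a complete configure command using these flags might look roughly like the sketch below, assuming a standard LLVM runtimes build. Everything other than the `RUNTIMES_*` flags shown above (the generator, project list, and `LLVM_RUNTIME_TARGETS` value) is illustrative rather than part of this patch.

```
# Hypothetical top-level configure for a runtimes build that cross-compiles
# flang-rt for the GPU targets; only the RUNTIMES_* flags come from this patch.
cmake -G Ninja -S llvm -B build \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_PROJECTS="clang;flang;lld" \
    -DLLVM_ENABLE_RUNTIMES="compiler-rt;libc;libcxx;libcxxabi;flang-rt" \
    -DLLVM_RUNTIME_TARGETS="default;nvptx64-nvidia-cuda;amdgcn-amd-amdhsa" \
    -DRUNTIMES_nvptx64-nvidia-cuda_LLVM_ENABLE_RUNTIMES="compiler-rt;libc;libcxx;libcxxabi;flang-rt" \
    -DRUNTIMES_amdgcn-amd-amdhsa_LLVM_ENABLE_RUNTIMES="compiler-rt;libc;libcxx;libcxxabi;flang-rt" \
    -DRUNTIMES_nvptx64-nvidia-cuda_FLANG_RT_LIBC_PROVIDER=llvm \
    -DRUNTIMES_nvptx64-nvidia-cuda_FLANG_RT_LIBCXX_PROVIDER=llvm \
    -DRUNTIMES_amdgcn-amd-amdhsa_FLANG_RT_LIBC_PROVIDER=llvm \
    -DRUNTIMES_amdgcn-amd-amdhsa_FLANG_RT_LIBCXX_PROVIDER=llvm
```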

@llvmbot llvmbot added the flang Flang issues not falling into any other category label Mar 18, 2025
@klausler klausler removed their request for review March 18, 2025 17:07
Comment on lines +90 to +91
# Sources that can be compiled directly for the GPU.
set(gpu_sources
Member

Can we avoid duplicating the list of sources? What makes these GPU-compatible?

Contributor Author

Mostly this is just the list of files that didn't error. There are a few differences from the existing offloading builds, mostly because we don't borrow the system's headers here. I was thinking of a more creative way to split these up, but figured this was easiest.

Most of the remaining files fail due to missing headers, mostly POSIX-related things; as much as we try, `pthreads` and `sema` are not going to work on a GPU.

Member

@Meinersbur Meinersbur left a comment

LGTM.

The gpu_sources list will probably evolve over time, fixing more and more files. It might actually be a good idea to keep it as a separate list until it stabilizes.

@jhuber6
Contributor Author

jhuber6 commented Mar 21, 2025

> LGTM.
>
> The gpu_sources list will probably evolve over time, fixing more and more files. It might actually be a good idea to keep it as a separate list until it stabilizes.

That's what I figured. Could you look at #131695 as well? I think in the future we'll need to think about how to run tests for this and investigate setting up the missing files.

@jhuber6 jhuber6 merged commit 85974a0 into llvm:main Mar 24, 2025
11 checks passed