Consider adopting a metaprogramming library #1296
FYI we actually tried
That's good to know! I have not encountered GPU issues in LLAMA yet, but my GPU usage is also limited.
Your last comment is extremely relevant. HPC is often bleeding edge, and certain hardware is 2-3 years behind in supporting standards, but these years make all the difference in real-world applications. Thus, dependencies have to be chosen carefully. Without first-hand experience this choice is difficult. One would be amazed which simple assumptions can be wrong for HPC applications.
Could you please link the stand-alone mp11 version? A quick search kept showing only the Boost version.
Thankfully, mp11 is written in C++11 and thus uses a 10-year-old standard.
https://github.com/boostorg/mp11
Thanks for the link: https://github.com/boostorg/mp11
Very good point! Maybe you can test this implicitly if we test LLAMA on these compilers.
Yes, nvcc is covered by LLAMA.
You would only get a really meaningful result if you tested the stand-alone version of mp11.
Just to complete the list, adding:
All but
Thanks, @ax3l! Those are good points!
LLAMA has icpc (which is ICC IIRC) and icpx (DPCPP) in the CI and builds mp11 fine.
I want to test that one soonish! I guess, since it's using a Clang frontend, it will work.
I guess this is also a Clang frontend?
Those are definitely my nemeses, and we need to think about how to deal with them, e.g. where we can get systems to test on. I think we could schedule a discussion in an alpaka VC at some point.
Yes, Fujitsu comes with a "traditional" and a new Clang frontend (same story with IBM, Cray, and Intel).
Although
If I am interpreting the NVHPC docs correctly, this does not mean that nvc++ supports CUDA. nvc++ is pgc++ and supports NVIDIA GPUs via OpenACC (and OpenMP target). There is no mention of CUDA in the description of nvc++. The NVHPC SDK also ships nvcc for CUDA. Does anyone have different information on this?
Yes, this changed as of GTC 2021 and will be released with the next HPC Toolkit (not the CTK).
…On April 26, 2021 3:43:19 PM MDT, jkelling ***@***.***> wrote:
> * new and hot: nvc++ (Nvidia GPU)
The linked documentation reads: "Last updated April 08, 2021"
Since we switched to C++17 last year: which useful parts of (for example) Mp11 are we missing that can't easily be implemented through fold expressions etc.?
I am sure it's almost 90% of the list on the right here: https://www.boost.org/doc/libs/master/libs/mp11/doc/html/mp11.html. It would be crazy to implement that ourselves.
Btw, LLAMA with
Btw: PIConGPU adopted Boost.Mp11 in the meantime. And here is a new contender: https://github.com/boost-ext/mp. I just saw it in this lightning talk: https://www.youtube.com/watch?v=-4MSlna4gKE. I am amazed by how much it can do with just a few hundred LOC.
During the accessor development #1249 I needed to implement a few metafunctions on the side. Since alpaka is TMP-heavy, we are going to need such metaprogramming facilities regularly, so I think we should consider picking an appropriate library providing this functionality.
LLAMA uses boost::mp11 quite successfully, and it provides a good feature set. Mind that boost::mp11 is also available as a standalone library outside the usual Boost distribution.