Questions re migration from CUDA to C++20, SYCL, oneAPI DPC++ #307

Closed
dentarthur opened this issue Jun 11, 2020 · 1 comment

dentarthur commented Jun 11, 2020

Taking up invitation from @mondus to ask questions here.

I'm still catching up on a high-level overview of CUDA and C++, but I have now convinced myself that I should focus on oneAPI/SYCL and use their vocabulary, rather than CUDA terms, to express my understanding of how FGPU works internally. My primary focus is on designing higher-level CSXMS models that use FGPU rather than on modifying it.

My assumption is that it will soon be feasible, and eventually necessary, for FGPU to migrate to oneAPI/SYCL while still retaining all the benefits of having been initially developed in the CUDA ecosystem.

This assumption is based on my (uninformed) reading of the following links:

  1. https://www.codeplay.com/portal/02-03-20-codeplay-contribution-to-dpcpp-brings-sycl-support-for-nvidia-gpus

  2. https://github.com/intel/llvm/blob/sycl/sycl/doc/GetStartedGuide.md#cuda-back-end-limitations

  3. https://software.intel.com/content/www/us/en/develop/documentation/oneapi-programming-guide/top/software-development-process/migrating-code-to-dpc/migrating-from-cuda-to-dpc.html

The links above convince me that migration is not (or was not) yet appropriate, but soon will be.
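
To make that concrete for myself, here is roughly what I understand the migration to involve at the kernel level. This is my own sketch based on the Codeplay and Intel material above, not FGPU code; the function names and the USM-style memory management are my own illustrative assumptions.

    // CUDA: explicit kernel plus a launch configuration chosen by the programmer.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }
    // launched as: saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);

    // DPC++/SYCL (as I understand it): the same work expressed as a
    // parallel_for submitted to a queue; the runtime maps it to the device.
    #include <CL/sycl.hpp>
    void saxpy_sycl(sycl::queue &q, int n, float a, const float *x, float *y) {
        // x and y assumed allocated with sycl::malloc_device or malloc_shared
        q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
            y[i] = a * x[i] + y[i];
        }).wait();
    }

If that mental model is wrong for how FGPU actually uses CUDA, that is exactly the kind of thing I am hoping to learn from Q1 below.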

The materials below are about the technical level I am willing to cope with for an overview, rather than actually tackling the details of CUDA:

I remain hopeful that this would be more than enough for a reasonable appreciation of how FGPU works.

  1. https://software.intel.com/content/dam/develop/public/us/en/documents/oneapiprogrammingguide-9.pdf

  2. https://jamesreinders.com/dpcpp/

  3. https://github.com/codeplaysoftware/syclacademy

@book{book:2391161,
  title     = {A Tour of C++},
  author    = {Bjarne Stroustrup},
  publisher = {Addison-Wesley Professional},
  isbn      = {0134997832, 9780134997834},
  year      = {2018},
  edition   = {2},
  url       = {http://gen.lib.rus.ec/book/index.php?md5=1c011356da01da91c6982642a5592fd4}
}

(Based on C++17, with lots of parallelization support, this covers the significant changes that I expect will make C++20's convergence with DPC++ and SYCL a lot more palatable, i.e. "pythonic".)
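
As a (possibly naive) example of what I mean by "pythonic": the C++17 parallel algorithms already read much closer to a SYCL parallel_for than to a raw CUDA launch. This is my own sketch, not taken from the book:

    #include <algorithm>
    #include <execution>
    #include <vector>

    int main() {
        std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
        const float a = 0.5f;
        // The same saxpy-style loop as a standard algorithm: the execution
        // policy replaces the explicit grid/block mapping.
        std::transform(std::execution::par_unseq, x.begin(), x.end(),
                       y.begin(), y.begin(),
                       [a](float xi, float yi) { return a * xi + yi; });
    }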

The documentation above suggests there is an adequate ecosystem.

I find the necessary concepts much easier to grasp in those materials than from more "hands on" CUDA docs.

Q1. Please point me to anything I would need to learn about FGPU's CUDA code, rather than just focusing on the above plus the FLAME/FGPU-specific papers and docs.

Q2. Any thoughts on likely timetable for future migration?

@dentarthur changed the title from "Questions re migration from CUDA to C++20, SYCL, oneAPI DCP+" to "Questions re migration from CUDA to C++20, SYCL, oneAPI DPC++" on Jun 13, 2020

mondus commented Jul 10, 2020

SYCL is a great choice for general porting of C++ code to the GPU. FLAME GPU 2 is, however, less about porting a specific piece of code than about providing a framework in which multi-agent models can be defined and mapped to a GPU architecture. The idea is to abstract the device away from modellers. CUDA is currently how we do this, as it provides a huge amount of flexibility and opportunity for low-level optimisation. Perhaps SYCL or C++20 can be considered in the future, but it is not currently on the roadmap.
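
To illustrate the abstraction: modellers describe agents and write per-agent functions along the lines of the heavily simplified, from-memory sketch below, and the framework takes care of how these are compiled and launched on the device. Exact macro and type names may differ between versions.

    // Heavily simplified sketch, not copied from the repository; exact macro
    // and message type names may differ between FLAME GPU 2 versions.
    // The modeller expresses per-agent behaviour only; the framework decides
    // how it is mapped onto CUDA kernels and device memory.
    FLAMEGPU_AGENT_FUNCTION(move, MsgNone, MsgNone) {
        float x = FLAMEGPU->getVariable<float>("x");
        float vx = FLAMEGPU->getVariable<float>("vx");
        FLAMEGPU->setVariable<float>("x", x + vx);
        return ALIVE;  // keep this agent in the simulation
    }

The CUDA layer sits underneath that abstraction, which is where the low-level optimisation mentioned above happens.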

@mondus mondus closed this as completed Jul 10, 2020