The Python programming language is increasingly popular. It is a versatile language for general-purpose programming that is accessible to novice programmers, but it is also widely used for scientific computing, and it can be used to develop code that runs on GPGPUs. In addition, a number of libraries commonly used in scientific computing, data science, and machine learning can use GPGPUs to improve performance.
When you complete this training, you will:
- have an understanding of the architecture and features of GPGPUs,
- be able to transfer data between the host and the GPGPU device,
- be able to do linear algebra computations on GPGPUs using scikit-cuda,
- be able to generate random numbers on a GPGPU using curand,
- be able to define your own kernels to run on GPGPUs,
- be able to use numba to generate kernels to run on GPGPUs (a first taste is sketched below),
- be able to run machine learning algorithms on GPGPUs,
- be able to speed up data science tasks using Rapids.
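To give a first impression of the kind of code covered in the training, here is a minimal sketch using numba's CUDA support. The function name `double` and the array size are purely illustrative, and running it requires numba installed with CUDA support and a CUDA-capable GPU.

```python
import numpy as np
from numba import cuda


@cuda.jit
def double(x):
    """Kernel that doubles every element of a 1D array in place."""
    i = cuda.grid(1)              # global thread index
    if i < x.shape[0]:            # guard against out-of-range threads
        x[i] *= 2.0


data = np.arange(1_000_000, dtype=np.float64)
d_data = cuda.to_device(data)                     # host-to-device transfer

threads_per_block = 256
blocks = (data.size + threads_per_block - 1) // threads_per_block
double[blocks, threads_per_block](d_data)         # kernel launch

result = d_data.copy_to_host()                    # device-to-host transfer
```

Each step of this pattern, i.e., transferring data between host and device, defining a kernel, and launching it, is covered in detail during the training.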
Total duration: approximately 4 hours.
| Subject | Duration |
|---|---|
| introduction and motivation | 5 min. |
| GPGPU architecture and features | 45 min. |
| moving data between host and device | 15 min. |
| linear algebra on GPGPUs | 25 min. |
| coffee break | 10 min. |
| writing your own kernels | 60 min. |
| data science with Rapids | 30 min. |
| machine learning on GPUs | 20 min. |
| wrap up | 10 min. |
Slides are available in the GitHub repository, as well as example code and hands-on material.
This training is for you if you want to speed up your Python code by using GPUs.
You will need experience programming in Python. This is not a training that starts from scratch. Some familiarity with numpy is required as well.
If you plan to do Python GPU programming in a Linux or HPC environment (and you should), then familiarity with these environments is required as well.
- Geert Jan Bex (geertjan.bex@uhasselt.be)