73 changes: 73 additions & 0 deletions _data/openprojectlist.yml
@@ -1,3 +1,76 @@
- name: "Enable CUDA compilation on Cppyy-Numba generated IR"
description: |
Cppyy is an automatic, run-time Python-C++ bindings generator for calling
C++ from Python and Python from C++. Initial support has been added that
allows Cppyy to hook into Numba, the high-performance Python compiler, which
compiles looped code containing C++ objects/methods/functions defined via
Cppyy into fast machine code. Because Numba compiles the loops themselves
into machine code, the language barrier is crossed just once, avoiding the
large slowdowns that accumulate from repeated calls between the two
languages. Numba uses its own lightweight version of the LLVM compiler
toolkit (llvmlite) to generate an intermediate representation (LLVM IR),
a representation also used by the Clang compiler, which is capable of
compiling CUDA C++ code.
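
For context, the CPU-side integration already works today: a C++ function
defined through Cppyy can be called inside a Numba-compiled loop. The sketch
below assumes a cppyy build that ships the numba_ext extension; the function
name cpp_square is only illustrative.

```python
import numba
import numpy as np

import cppyy
import cppyy.numba_ext  # registers Cppyy-bound entities with Numba

# A plain C++ function defined at run time (illustrative name).
cppyy.cppdef('''
double cpp_square(double x) { return x * x; }
''')

@numba.njit
def sum_of_squares(values):
    total = 0.0
    for v in values:
        # The bound C++ function is resolved during Numba compilation,
        # so the loop does not cross the language barrier on every iteration.
        total += cppyy.gbl.cpp_square(v)
    return total

print(sum_of_squares(np.arange(10.0)))  # 285.0
```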

The project aims to demonstrate Cppyy's capability to provide CUDA paradigms to
Python users without any compromise in performance. Upon successful completion,
a possible proof of concept could look like the code snippet below:

```python
import numba
import cppyy
import cppyy.numba_ext

cppyy.cppdef('''
__global__ void MatrixMul(float* A, float* B, float* out) {
    // kernel logic for matrix multiplication
}
''')

@numba.njit
def run_cuda_mul(A, B, out):
    # Allocate memory for the input and output arrays on the GPU (d_A, d_B, d_out)
    # Define the grid and block dimensions (griddim, blockdim)
    # Launch the kernel
    MatrixMul[griddim, blockdim](d_A, d_B, d_out)
```
tasks: |
* Add support for declaring and parsing Cppyy-defined CUDA code in
the Numba extension.
* Design and develop a CUDA compilation and execution mechanism.
* Prepare proper tests and documentation.

- name: "Cppyy STL/Eigen - Automatic conversion and plugins for Python based ML-backends"
description: |
Cppyy is an automatic, run-time Python-C++ bindings generator for calling
C++ from Python and Python from C++. Cppyy provides pythonized wrappers of
useful classes from libraries like the STL and Eigen, allowing users to work
with them on the Python side. Current support covers STL container types
such as std::vector, std::map, and std::tuple, as well as the matrix-based
classes in Eigen/Dense. These cppyy objects can be plugged into idiomatic
expressions that expect Python built-in types. This behaviour is achieved by
growing Pythonic methods like `__len__` while also retaining the C++ methods
like `size`.
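
As a brief illustration of this pythonization (assuming a recent cppyy
release), a bound std::vector answers to both the Python protocol and its
native C++ API:

```python
import cppyy

# std::vector<int> instantiated and filled from Python
v = cppyy.gbl.std.vector['int']([1, 2, 3, 4])

# Python protocol and C++ API coexist on the same object
print(len(v), v.size())        # 4 4
print(list(v))                 # [1, 2, 3, 4]
print(sum(x * x for x in v))   # idiomatic Python expression over a C++ container
```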

Efficient and automatic conversion between C++ and Python is essential for
high-performance cross-language support. Such conversion eliminates the
overheads arising from iterative initialization, such as Eigen's
comma-initialization. This opens up new avenues for using Cppyy's bindings
in tools that perform numerical operations for transformations or
optimization.
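
For a sense of the current conversion path (a sketch, not the project's final
API), a Cppyy STL vector can be copied into a NumPy array by iterating in
Python, which is exactly the per-element overhead an automatic conversion
layer would remove:

```python
import numpy as np
import cppyy

v = cppyy.gbl.std.vector['double']([float(i) for i in range(1000)])

# Element-by-element copy through the Python layer; an automatic,
# buffer-level conversion would avoid this loop entirely.
a = np.array(list(v))
assert a.shape == (1000,) and a[10] == 10.0
```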

The on-demand C++ infrastructure wrapped in idiomatic Python enables new
techniques in ML tools like JAX and CUTLASS. This project allows that C++
infrastructure to be plugged in to serve users who seek high-performance
library primitives that are unavailable in Python.

tasks: |
* Extend STL support for std::vectors of arbitrary dimensions
* Improve the initialization approach for Eigen classes
* Develop a streamlined interconversion mechanism between Python
builtin-types, numpy.ndarray, and STL/Eigen data structures
* Implement experimental plugins that perform basic computational
operations in frameworks like JAX
* Work on integrating these plugins with toolkits like CUTLASS that
utilise the bindings to provide a Python API

- name: "Enable cross-talk between Python and C++ kernels in xeus-clang-REPL by using Cppyy"
description: |
xeus-clang-REPL is a C++ kernel for Jupyter notebooks using clang-REPL as