This project contains two primary modules: a suite of tools for parallel matrix operations using Python's multiprocessing library, and a CUDA-based solver for differential equations, available in both a C++/CUDA and a Python/PyCUDA implementation.
This part of the project provides Python scripts that use the multiprocessing module to perform matrix multiplication and inversion in parallel. This is particularly effective for large matrices where computational tasks can be distributed across multiple CPU cores to improve performance.
- `parallel_matrix_multiplication.py`: Contains functions for performing matrix multiplication using both standard serial processing and parallel processing.
- `parallel_matrix_inversion.py`: Provides functions for inverting matrices using serial and parallel methods based on Gauss-Jordan elimination.
- `test_matrix_multiplication.py` and `test_matrix_inverse.py`: Unit tests that verify the correctness of the parallel and serial matrix operations.
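The parallel implementations follow a common pattern: split one operand into row blocks and let each worker process handle one block. The sketch below illustrates that pattern with `multiprocessing.Pool` for multiplication; the function names and the four-worker split are illustrative assumptions, not the project's exact code.

```python
# Minimal sketch of row-parallel matrix multiplication (illustrative only;
# names do not necessarily match parallel_matrix_multiplication.py).
import numpy as np
from multiprocessing import Pool

def _multiply_rows(args):
    """Multiply one block of rows of A by B in a worker process."""
    rows, B = args
    return rows @ B

def parallel_matmul(A, B, workers=4):
    """Split A into row blocks and multiply each block by B in parallel."""
    blocks = np.array_split(A, workers, axis=0)
    with Pool(workers) as pool:
        results = pool.map(_multiply_rows, [(block, B) for block in blocks])
    return np.vstack(results)

if __name__ == "__main__":
    A = np.random.rand(512, 512)
    B = np.random.rand(512, 512)
    assert np.allclose(parallel_matmul(A, B), A @ B)
```

On large matrices, the achievable speedup is bounded by the number of physical cores and by the cost of pickling the matrix blocks between processes.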
To ensure the implementations are correct, you can run the provided unit tests from the root directory of the project.
```bash
python -m unittest src/tests/test_matrix_multiplication.py
python -m unittest src/tests/test_matrix_inverse.py
```

This module is designed to solve second-order linear ordinary differential equations (ODEs) using the finite difference method, accelerated with NVIDIA CUDA. It demonstrates how to leverage GPU parallelism to speed up matrix inversion and matrix-vector multiplication, which are the core components of the solver.
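As background for how the finite difference method turns an ODE into the matrix problems mentioned above, here is a small CPU-only NumPy sketch: it discretizes an assumed model problem u''(x) = f(x) with zero Dirichlet boundary conditions on [0, 1] into a tridiagonal linear system and solves it directly. The GPU implementations replace that final solve with CUDA kernels for matrix inversion and matrix-vector multiplication. The model problem, grid size, and names are illustrative, not the project's exact setup.

```python
# Illustrative CPU reference for the finite difference approach (assumed model
# problem u''(x) = f(x), u(0) = u(1) = 0; not the project's exact setup).
import numpy as np

def solve_ode_fd(f, n=128):
    """Discretize u'' = f on [0, 1] with n interior points and solve A u = b."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Second-order central difference: (u_{i-1} - 2*u_i + u_{i+1}) / h^2 = f(x_i)
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    b = f(x)
    # The CUDA versions invert A (Gauss-Jordan) and compute A^{-1} b on the GPU.
    return x, np.linalg.solve(A, b)

if __name__ == "__main__":
    # f(x) = -pi^2 * sin(pi*x) has the exact solution u(x) = sin(pi*x).
    x, u = solve_ode_fd(lambda x: -np.pi**2 * np.sin(np.pi * x))
    assert np.allclose(u, np.sin(np.pi * x), atol=1e-3)
```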
- Python (PyCUDA) Implementation:
  - `solver.py`: A Python script that uses PyCUDA to dynamically compile and run CUDA kernels. It handles data initialization, orchestrates the GPU computations, and verifies the results.
- C++/CUDA Implementation:
  - `matrix_inversion.cu`: Contains the core CUDA C++ kernels for performing parallel matrix inversion using Gauss-Jordan elimination.
  - `solver.cu`: The main CUDA C++ file that sets up the problem, manages GPU memory, and calls the kernels to solve the ODE.
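For readers new to PyCUDA, the sketch below shows the dynamic-compilation pattern that `solver.py` is described as using: CUDA C source is compiled at runtime with `SourceModule`, and the resulting kernel is launched from Python. The kernel here is a trivial scaling example chosen only to demonstrate the mechanism; it is not the project's solver kernel.

```python
# Minimal PyCUDA example of compiling and launching a kernel at runtime
# (illustrative only; the kernel below is not the project's ODE solver).
import numpy as np
import pycuda.autoinit          # creates a CUDA context on the default device
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

# CUDA C source compiled on the fly via SourceModule (uses nvcc under the hood).
mod = SourceModule("""
__global__ void scale(float *v, float alpha, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= alpha;
}
""")
scale = mod.get_function("scale")

n = 1024
v = np.random.rand(n).astype(np.float32)
expected = 2.0 * v

# cuda.InOut copies v to the GPU, runs the kernel, and copies the result back.
scale(cuda.InOut(v), np.float32(2.0), np.int32(n),
      block=(256, 1, 1), grid=((n + 255) // 256, 1))

assert np.allclose(v, expected)
```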
- NVIDIA GPU with CUDA support.
- NVIDIA CUDA Toolkit (for the `nvcc` compiler).
- For the Python version: a Python environment with `numpy` and `pycuda`.
- Python (PyCUDA) Version: Run the Python script from the command line. The script will compile the CUDA code within it, execute the solver, and print the results.

  ```bash
  python solver.py
  ```

- C++/CUDA Version: First, compile the `.cu` files using the `nvcc` compiler, then run the resulting executable.

  ```bash
  # Navigate to the CUDA solver directory
  cd cuda/differential_solver
  # Compile the solver
  nvcc solver.cu -o differential_solver
  # Run the executable
  ./differential_solver
  ```