A benchmarking suite to compare the performance of different Python mathematical libraries (NumPy, JAX, PyTorch) across various operations.
This project provides a comprehensive benchmarking framework to evaluate the performance of different mathematical libraries in Python. It measures execution times for various operations (element-wise operations, matrix multiplication, gradient computation, FFT, and sorting) across different libraries and hardware configurations (CPU/GPU).
- Benchmark multiple mathematical libraries (NumPy, JAX, PyTorch)
- Compare CPU and GPU performance
- Test various operations: element-wise operations, matrix multiplication, gradient computation, FFT, and sorting
- Generate high-quality SVG plots of the results
- Customizable output directory for saving results
Run the benchmark suite with default settings:

```bash
uv run main.py
```

Specify a custom output directory:

```bash
uv run main.py --output-dir my_benchmark_results
```

The benchmark suite tests the following operations:
- Element-wise Operations: Addition, multiplication, and sine functions
- Matrix Multiplication: Small and large matrix multiplications
- Gradient Computation: Automatic differentiation (supported by JAX and PyTorch)
- FFT: Fast Fourier Transform
- Sorting: Array sorting
Each operation is tested with both small and large tensor sizes to evaluate performance across different scales.
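A timing harness of this kind can be sketched as follows. This is a minimal illustration using only NumPy and the standard library; the `benchmark` helper, the sizes, and the chosen operations are illustrative, not the suite's actual code:

```python
import timeit
import numpy as np

def benchmark(fn, *args, repeats=5, number=10):
    """Return the best per-call time (seconds) over several repeats."""
    timer = timeit.Timer(lambda: fn(*args))
    return min(t / number for t in timer.repeat(repeat=repeats, number=number))

# Two scales, mirroring the suite's small and large tensor sizes.
for n in (256, 1024):
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    t_add = benchmark(np.add, a, b)     # element-wise addition
    t_mm = benchmark(np.matmul, a, b)   # matrix multiplication
    print(f"n={n}: add {t_add:.2e}s, matmul {t_mm:.2e}s")
```

Taking the minimum over repeats reduces noise from background load, which matters when comparing libraries whose timings differ by small margins.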
The benchmark generates SVG plots for each operation, showing execution times for each library and hardware configuration. The plots are saved in the specified output directory.
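Such plots could be produced with matplotlib's SVG output; the sketch below is a hedged illustration (the `save_plot` helper and the sample timings are invented for this example, not taken from the project):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt
from pathlib import Path

def save_plot(op_name, results, output_dir="benchmark_results"):
    """Save a bar chart of timings as SVG.

    results maps a 'library (device)' label to a time in seconds.
    """
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    labels, times = zip(*sorted(results.items()))
    fig, ax = plt.subplots()
    ax.bar(labels, times)
    ax.set_ylabel("time (s)")
    ax.set_title(op_name)
    path = out / f"{op_name}.svg"
    fig.savefig(path, format="svg")
    plt.close(fig)
    return path

# Illustrative data only.
save_plot("matmul", {"NumPy (CPU)": 0.12, "PyTorch (GPU)": 0.01})
```

SVG keeps the plots sharp at any zoom level and diff-friendly in version control, which suits result files that change between runs.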
This project is licensed under the MIT License.
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.