A compiler-based framework for big data in Python
High Performance Analytics Toolkit (HPAT) automatically scales analytics/ML code in Python to bare-metal cluster/cloud performance. It compiles a subset of Python (Pandas/Numpy) into efficient parallel binaries with MPI, requiring only minimal code changes. HPAT is orders of magnitude faster than alternatives such as Apache Spark.
HPAT's documentation can be found here.
HPAT can be installed easily in an Anaconda environment (Linux/Mac/Windows):
conda create -n HPAT -c ehsantn -c numba -c anaconda -c conda-forge hpat
Windows installation requires Intel MPI to be installed.
An HPAT docker image is also available. For example:
docker run -it ehsantn/hpat bash
Here is a Pi calculation example in HPAT:
import hpat
import numpy as np
import time

@hpat.jit
def calc_pi(n):
    t1 = time.time()
    # Monte Carlo estimate: the fraction of random points in the unit
    # square that fall inside the unit circle approximates pi/4
    x = 2 * np.random.ranf(n) - 1
    y = 2 * np.random.ranf(n) - 1
    pi = 4 * np.sum(x**2 + y**2 < 1) / n
    print("Execution time:", time.time() - t1, "\nresult:", pi)
    return pi

calc_pi(2 * 10**8)
Save this in a file named pi.py and run (on 8 cores):
mpiexec -n 8 python pi.py
This should demonstrate about a 100x speedup over the regular Python version, i.e. the same script with the @hpat.jit decorator removed, run as python pi.py without mpiexec.
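HPAT also parallelizes the supported subset of Pandas, as mentioned above. The following is a minimal sketch rather than an official example: it assumes Parquet input is available via pd.read_parquet, and the file name cycling_dataset.pq and its power column are hypothetical placeholders:

import hpat
import pandas as pd

@hpat.jit
def mean_power():
    # hypothetical input file and column; HPAT distributes the
    # read and the reduction across MPI ranks automatically
    df = pd.read_parquet('cycling_dataset.pq')
    return df.power.mean()

print(mean_power())

As with the Pi example, launching the script with mpiexec -n 8 python runs it on 8 cores with the data partitioned across ranks.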
These academic papers describe the underlying methods in HPAT: