Heat is a distributed tensor framework for high performance data analytics.
- Quick Start for new users and contributors (Jan 14, 2023)
Heat is a flexible and seamless open-source software for high-performance data analytics and machine learning. It provides highly optimized algorithms and data structures for tensor computations on CPUs, GPUs, and distributed cluster systems on top of MPI. Heat aims to bridge the gap between data analytics and machine learning libraries, which typically focus on single-node performance, and traditional high-performance computing (HPC). Its generic, Python-first programming interface integrates seamlessly with the existing data science ecosystem and makes writing scalable scientific and data science applications as effortless as using NumPy.
Heat lets you tackle Big Data challenges that exceed the computational and memory capacity of your laptop or desktop (a small usage sketch follows the feature list below):
- High-performance n-dimensional tensors
- CPU, GPU and distributed computation using MPI
- Powerful data analytics and machine learning methods
- Abstracted communication via split tensors
- Python API
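To illustrate the NumPy-like API and the split tensors mentioned above, here is a minimal sketch (the array sizes, variable names, and file name demo.py are arbitrary; launch with, e.g., mpirun -n 4 python demo.py to distribute the work across four MPI processes):

# demo.py -- a minimal sketch of Heat's NumPy-like, MPI-distributed API
import heat as ht

# a 1-D tensor distributed across all MPI processes along dimension 0
x = ht.arange(10, split=0)

# element-wise operations work as in NumPy; communication is handled internally
y = ht.ones(10, split=0)
z = x + y

# reductions transparently combine the partial results of all processes
print(z.sum())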
TL;DR: Quick Start
Check out our Jupyter notebook tutorial right here on GitHub or in the /scripts directory.
The complete documentation of the latest version is always deployed on Read the Docs.
We use Stack Overflow as a forum for questions about Heat. If you do not find an answer to your question, please ask a new question there and be sure to tag it with "pyheat".
You can also reach us on GitHub Discussions.
Heat requires Python 3.7 or newer. Heat is based on PyTorch; specifically, we exploit PyTorch's support for GPUs and MPI parallelism. For MPI support we use mpi4py. Both packages can be installed via pip or are installed automatically by Heat's setup.py.
Tagged releases are made available on the Python Package Index (PyPI). You can install the latest release with
$ pip install heat[hdf5,netcdf]
where the part in brackets is a list of optional dependencies. You can omit it if you do not need HDF5 or NetCDF support.
It is recommended to use the most recent supported version of PyTorch.
It is also very important to ensure that the installed PyTorch version is compatible with the local CUDA installation; see the PyTorch installation instructions for details.
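As a quick sanity check (a hedged sketch, not part of Heat's documented setup), you can verify from Python that PyTorch sees your CUDA installation and that Heat can place a tensor on the GPU:

import torch
import heat as ht

# report the installed PyTorch version and whether a CUDA device is visible
print(torch.__version__, torch.cuda.is_available())

# if CUDA is available, Heat tensors can be allocated on the GPU via device="gpu"
device = "gpu" if torch.cuda.is_available() else "cpu"
print(ht.ones((3, 3), device=device).device)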
If you want to work with the development version, you can check out the sources using
$ git clone https://github.com/helmholtz-analytics/heat.git
The installation can then be done from the checked-out sources with
$ pip install .[hdf5,netcdf,dev]
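To confirm that the MPI-parallel pieces work after installation, a minimal smoke test such as the following can help (a sketch under stated assumptions: the file name smoke_test.py and the process count are arbitrary):

# smoke_test.py -- run with: mpirun -n 2 python smoke_test.py
import heat as ht

# a tensor split across the MPI processes along dimension 0
x = ht.arange(8, split=0)

# each process prints the shape of its local chunk plus the global reduction result
print(x.lshape, x.sum())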
We welcome contributions from the community; please check out our Contribution Guidelines before getting started!
Heat is distributed under the MIT license, see our LICENSE file.
If you find Heat helpful for your research, please mention it in your publications. You can cite:
- Götz, M., Debus, C., Coquelin, D., Krajsek, K., Comito, C., Knechtges, P., Hagemeier, B., Tarnawa, M., Hanselmann, S., Siggel, M., Basermann, A. & Streit, A. (2020). HeAT - a Distributed and GPU-accelerated Tensor Framework for Data Analytics. In 2020 IEEE International Conference on Big Data (Big Data) (pp. 276-287). IEEE, DOI: 10.1109/BigData50022.2020.9378050.
@inproceedings{heat2020,
    title={{HeAT -- a Distributed and GPU-accelerated Tensor Framework for Data Analytics}},
    author={
        Markus Götz and
        Charlotte Debus and
        Daniel Coquelin and
        Kai Krajsek and
        Claudia Comito and
        Philipp Knechtges and
        Björn Hagemeier and
        Michael Tarnawa and
        Simon Hanselmann and
        Martin Siggel and
        Achim Basermann and
        Achim Streit
    },
    booktitle={2020 IEEE International Conference on Big Data (Big Data)},
    year={2020},
    pages={276--287},
    month={December},
    publisher={IEEE},
    doi={10.1109/BigData50022.2020.9378050}
}
This work is supported by the Helmholtz Association Initiative and Networking Fund under project number ZT-I-0003 and the Helmholtz AI platform grant.