
CODARcode/MGARD


MGARD (MultiGrid Adaptive Reduction of Data) is a technique for multilevel lossy compression and refactoring of scientific data based on the theory of multigrid methods. We encourage you to make a GitHub issue if you run into any problems using MGARD, have any questions or suggestions, etc.

The MGARD framework consists of the following modules. Please see each module's detailed instructions to build and install MGARD.

MGARD-CPU: MGARD implementation for CPUs

MGARD-CPU is designed for running compression on CPUs. See the detailed user guide here.

MGARD-CUDA: CUDA accelerated compression

MGARD-CUDA is designed for accelerating compression on NVIDIA GPUs. See the detailed user guide here.

MGARD-X: Accelerated and portable compression

MGARD-X is designed for portable compression on NVIDIA GPUs, AMD GPUs, and CPUs. See the detailed user guide here.

MGARD-DR/MGARD-XDR: Fine-grained progressive data reconstruction

MGARD-DR and MGARD-XDR are designed for enabling fine-grained data refactoring and progressive data reconstruction. See the detailed user guide here.

MGARD-ROI: Preserving Region-of-Interest

MGARD-ROI is designed for preserving regions of interest during data compression. See the detailed user guide here.

MGARD-QOI: Preserving Linear Quantity-of-Interest

MGARD-QOI is designed for preserving linear quantities of interest during data compression. See the detailed user guide here.

MGARD-Lambda: Preserving Non-Linear Quantity-of-Interest

MGARD-Lambda is designed for preserving non-linear quantities of interest during data compression. This is an experimental part of MGARD; it currently supports only certain QoIs derived from XGC 5D data. See the theory here and an example here.

Self-describing format for compressed and refactored data

Data produced by MGARD, MGARD-X, and MDR-X is designed to follow a unified self-describing format. See the format details here.

Publications

Fundamental Theory

Preserving Quantities of Interest (QoIs)

Progressive Retrieval

Parallelization and GPU Acceleration

System Optimizations