
Commit

Merge pull request #9 from unifyai/addCompressionTools
Add compression tools
hello-fri-end committed Nov 10, 2023
2 parents d86a8e2 + cd308b6 commit 50eb857
Showing 6 changed files with 110 additions and 1 deletion.
19 changes: 19 additions & 0 deletions compression/awq.yaml
@@ -0,0 +1,19 @@
llm-awq:

name: "llm-awq"

image_url: https://github.com/mit-han-lab/llm-awq/blob/main/figures/overview.png

tags:
- quantization
- pytorch
- llms
- open-source

url: https://github.com/mit-han-lab/llm-awq

description: "Efficient and accurate low-bit weight quantization (INT3/4) for LLMs, supporting instruction-tuned models and multi-modal LMs."

features:
- "INT 3/4 Weight Quantization for LLMs"
- "Calibration Data-set free quantization"
19 changes: 19 additions & 0 deletions compression/bitsandbytes.yaml
@@ -0,0 +1,19 @@
bitsandbytes:

name: "bitsandbytes"

image_url: https://huggingface.co/blog/assets/96_hf_bitsandbytes_integration/Thumbnail_blue.png

tags:
- quantization

url: https://github.com/TimDettmers/bitsandbytes

description: "The bitsandbytes is a lightweight wrapper around CUDA custom functions,
in particular 8-bit optimizers, matrix multiplication (LLM.int8()),
and quantization functions.""
features:
- "8 Bit Weight Only Quantization"
- "Support for QLora Finetuning through NF4 dtype"
- "Calibration free Zero Shot Quantization"
20 changes: 20 additions & 0 deletions compression/gptq.yaml
@@ -0,0 +1,20 @@
gptq:

name: "gptq"

image_url: https://www.marktechpost.com/wp-content/uploads/2023/08/Screenshot-2023-08-26-at-6.10.51-PM.png

tags:
- quantization

url: https://github.com/IST-DASLab/gptq

description: "GPTQ is a technique for post-training quantization, which is used to quantize large language models (LLMs) such as GPT.
This method minimizes the storage requirements for GPT models by decreasing the bit count necessary to represent each
weight within the model, reducing it from 32 bits to a mere 3-4 bits or even 2 bits."

features:
- "2, 3, 4 bit Weight Quantization for LLMs"
- "Requires Calibration Dataset"
- "Calibration free Zero Shot Quantization"
- "Supports AMD GPU out of the box."
28 changes: 28 additions & 0 deletions compression/huggingfaceoptimum.yaml
@@ -0,0 +1,28 @@
optimum:

name: "hugginface/optimum"

image_url: https://huggingface.co/front/thumbnails/docs/optimum.png

tags:
- quantization
- pruning
- knowledge distillation
- pytorch
- open-source

url: https://github.com/huggingface/optimum

description: "🤗 Optimum is an extension of 🤗 Transformers and Diffusers, providing a set of optimization
tools enabling maximum efficiency to train and run models on targeted hardware, while
keeping things easy to use."

features:
- "Post training dynamic quantization"
- "Post training static quantization"
- "Quantization Aware Training"
- "Mixed Precision Training"
- "Pruning"
- "Distillation"
- "Joint Pruning, Quantization and Distillation"
- "Graph Optimization using Habana and ONNX"
21 changes: 21 additions & 0 deletions compression/tensorly-torch.yaml
@@ -0,0 +1,21 @@

tensorly-torch:

name: "TensorLy-Torch"

image_url: https://tensorly.org/torch/dev/_images/tensorly-torch-pyramid.png

tags:
- tensorization
- pytorch

url: https://github.com/tensorly/torch

description: "TensorLy-Torch is a Python library for deep tensor networks that builds on top of TensorLy and PyTorch.
It allows to easily leverage tensor methods in a deep learning setting and comes with all batteries included."

features:
- "Tensorized/Factorized Layers"
- "Tensor Regression and Contraction"
- "Tensor Dropout"
- "Tensor Hooks"
4 changes: 3 additions & 1 deletion compression/tensorly.yaml
@@ -14,13 +14,15 @@ tensorly:
- jax
- open-source

url: https://github.com/intel/neural-compressor
url: https://github.com/tensorly/tensorly

description: "TensorLy is a Python library that aims at making tensor learning simple and accessible. It allows simple performing of tensor decomposition,
tensor learning and tensor algebra. Its backend system allows to seamlessly perform computation with NumPy,
PyTorch, JAX, MXNet, TensorFlow or CuPy, and run methods at scale on CPU or GPU."

features:
- "Tensor Algebra"
- "Tucker Decomposition"
- "Canonical Polyadic Decomposition"
- "Tensor Train Decomposition"
- "Parafac Decomposition"
