Merge pull request #9 from unifyai/addCompressionTools
Add compression tools
Showing 6 changed files with 110 additions and 1 deletion.
```yaml
llm-awq:
  name: "llm-awq"
  image_url: https://github.com/mit-han-lab/llm-awq/blob/main/figures/overview.png
  tags:
    - quantization
    - pytorch
    - llms
    - open-source
  url: https://github.com/mit-han-lab/llm-awq
  description: "Efficient and accurate low-bit weight quantization (INT3/4) for LLMs, supporting instruction-tuned models and multi-modal LMs."
  features:
    - "INT3/4 weight quantization for LLMs"
    - "Calibration-dataset-free quantization"
```
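The core of low-bit weight quantization as described above can be sketched in a few lines. This is a minimal, hypothetical illustration of plain group-wise absmax INT4 quantization, not llm-awq's actual algorithm (AWQ additionally rescales salient channels using activation statistics, which this sketch omits):

```python
import numpy as np

def quantize_int4_groupwise(w: np.ndarray, group_size: int = 128):
    """Round weights to 4-bit integers with one scale per group.

    Plain group-wise absmax quantization; AWQ itself additionally
    protects salient channels based on activation statistics.
    """
    groups = w.reshape(-1, group_size)
    # INT4 covers -8..7; scale each group so its absmax maps to +/-7
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Recover approximate float weights from int4 codes and scales."""
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, s = quantize_int4_groupwise(w)
print(float(np.abs(w - dequantize(q, s)).max()))  # small reconstruction error
```

Each 32-bit weight is replaced by a 4-bit code plus a shared per-group scale, which is where the roughly 8x storage reduction comes from.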
```yaml
bitsandbytes:
  name: "bitsandbytes"
  image_url: https://huggingface.co/blog/assets/96_hf_bitsandbytes_integration/Thumbnail_blue.png
  tags:
    - quantization
  url: https://github.com/TimDettmers/bitsandbytes
  description: "bitsandbytes is a lightweight wrapper around CUDA custom functions,
    in particular 8-bit optimizers, matrix multiplication (LLM.int8()),
    and quantization functions."
  features:
    - "8-bit weight-only quantization"
    - "Support for QLoRA finetuning through the NF4 dtype"
    - "Calibration-free zero-shot quantization"
```
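The idea behind the LLM.int8() matrix multiplication mentioned in the description can be sketched with pure NumPy. This is a hedged, simplified illustration only: the real method also carries outlier feature dimensions in fp16 via a separate matmul, which this sketch quantizes along with everything else:

```python
import numpy as np

def int8_matmul_absmax(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Approximate x @ w using int8 operands with absmax scaling.

    Simplified LLM.int8()-style kernel: per-row scales for activations,
    per-column scales for weights, int32 accumulation, then dequantize.
    """
    sx = np.abs(x).max(axis=1, keepdims=True) / 127.0  # per-row scale
    sw = np.abs(w).max(axis=0, keepdims=True) / 127.0  # per-column scale
    xq = np.round(x / sx).astype(np.int8)
    wq = np.round(w / sw).astype(np.int8)
    acc = xq.astype(np.int32) @ wq.astype(np.int32)    # int32 accumulate
    return acc * (sx * sw)                              # dequantize

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 64)).astype(np.float32)
w = rng.normal(size=(64, 16)).astype(np.float32)
ref = x @ w
approx = int8_matmul_absmax(x, w)
print(float(np.abs(ref - approx).max()))  # close to the fp32 result
```

Because quantization happens on the fly from the tensors' own absmax, no calibration pass is needed, matching the "calibration-free" feature above.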
```yaml
gptq:
  name: "gptq"
  image_url: https://www.marktechpost.com/wp-content/uploads/2023/08/Screenshot-2023-08-26-at-6.10.51-PM.png
  tags:
    - quantization
  url: https://github.com/IST-DASLab/gptq
  description: "GPTQ is a technique for post-training quantization, which is used to quantize large language models (LLMs) such as GPT.
    This method minimizes the storage requirements for GPT models by decreasing the bit count necessary to represent each
    weight within the model, reducing it from 32 bits to a mere 3-4 bits or even 2 bits."
  features:
    - "2-, 3-, and 4-bit weight quantization for LLMs"
    - "Requires a calibration dataset"
    - "Supports AMD GPUs out of the box"
```
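The storage claim in the description is easy to sanity-check with arithmetic. A small sketch, assuming a hypothetical 7B-parameter model and counting only the weights themselves (real quantized checkpoints add a little overhead for scales and zero-points):

```python
def weight_storage_gb(n_params: float, bits_per_weight: float) -> float:
    """GB needed for the weights alone, ignoring scale/zero-point overhead."""
    return n_params * bits_per_weight / 8 / 1e9

n = 7e9  # hypothetical 7B-parameter LLM
for bits in (32, 16, 4, 3, 2):
    print(f"{bits:>2}-bit: {weight_storage_gb(n, bits):.1f} GB")
# 32-bit: 28.0 GB ... 4-bit: 3.5 GB, i.e. an 8x reduction
```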
```yaml
optimum:
  name: "huggingface/optimum"
  image_url: https://huggingface.co/front/thumbnails/docs/optimum.png
  tags:
    - quantization
    - pruning
    - knowledge distillation
    - pytorch
    - open-source
  url: https://github.com/huggingface/optimum
  description: "🤗 Optimum is an extension of 🤗 Transformers and Diffusers, providing a set of optimization
    tools enabling maximum efficiency to train and run models on targeted hardware, while
    keeping things easy to use."
  features:
    - "Post-training dynamic quantization"
    - "Post-training static quantization"
    - "Quantization-aware training"
    - "Mixed-precision training"
    - "Pruning"
    - "Distillation"
    - "Joint pruning, quantization, and distillation"
    - "Graph optimization using Habana and ONNX"
```
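Of the features listed, magnitude pruning is the simplest to illustrate. The following is a minimal NumPy sketch of the concept only; it is not Optimum's API, which delegates pruning to its hardware-partner backends:

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

rng = np.random.default_rng(2)
w = rng.normal(size=(64, 64)).astype(np.float32)
pruned = magnitude_prune(w, 0.9)
print(float((pruned == 0).mean()))  # roughly 0.9 of weights removed
```

In practice pruning is usually applied gradually during finetuning so the model can recover accuracy, which is why frameworks pair it with training loops rather than a one-shot call.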
```yaml
tensorly-torch:
  name: "TensorLy-Torch"
  image_url: https://tensorly.org/torch/dev/_images/tensorly-torch-pyramid.png
  tags:
    - tensorization
    - pytorch
  url: https://github.com/tensorly/torch
  description: "TensorLy-Torch is a Python library for deep tensor networks that builds on top of TensorLy and PyTorch.
    It lets you easily leverage tensor methods in a deep learning setting and comes with all batteries included."
  features:
    - "Tensorized/Factorized Layers"
    - "Tensor Regression and Contraction"
    - "Tensor Dropout"
    - "Tensor Hooks"
```
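The factorized-layers feature rests on one idea: replace a dense weight matrix with a product of smaller factors. A hedged NumPy sketch of the rank-truncation version is below; TensorLy-Torch itself provides trainable PyTorch layers with richer tensor decompositions (CP, Tucker, tensor-train) rather than this plain SVD:

```python
import numpy as np

def factorize_linear(w: np.ndarray, rank: int):
    """Approximate a dense weight W (m x n) as A @ B via truncated SVD."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]  # shape (m, rank), singular values folded in
    b = vt[:rank]               # shape (rank, n)
    return a, b

rng = np.random.default_rng(3)
w = rng.normal(size=(256, 256)).astype(np.float32)
a, b = factorize_linear(w, rank=32)

dense_params = w.size             # 256 * 256 = 65536
factored_params = a.size + b.size # 2 * 256 * 32 = 16384, 4x fewer
print(dense_params, factored_params)
```

A forward pass then computes `x @ a @ b` instead of `x @ w`, trading a controlled approximation error for fewer parameters and cheaper matmuls.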