# smoothquant

Here are 2 public repositories matching this topic...

This is the official implementation of "LLM-QBench: A Benchmark Towards the Best Practice for Post-training Quantization of Large Language Models", and it is also an efficient LLM compression tool with various advanced compression methods, supporting multiple inference backends.

  • Updated Jun 7, 2024
  • Python
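
For readers new to the topic, here is a minimal, illustrative sketch of the activation-smoothing idea behind SmoothQuant: per-channel scales migrate quantization difficulty from activations to weights while leaving the layer's output mathematically unchanged. This is an assumption-laden toy example (per-channel absolute maxima as calibration statistics, a plain NumPy matmul), not the API of the repository listed above.

```python
import numpy as np

def smooth_scales(act_absmax, weight_absmax, alpha=0.5):
    """Per-channel scales s_j = max|X_j|**alpha / max|W_j|**(1 - alpha).

    alpha is the migration strength; 0.5 balances activation and weight ranges.
    """
    return act_absmax**alpha / weight_absmax**(1 - alpha)

# Toy linear layer Y = X @ W with one activation outlier channel.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8)) * np.array([1, 50, 1, 1, 1, 1, 1, 1])
W = rng.standard_normal((8, 3))

s = smooth_scales(np.abs(X).max(axis=0), np.abs(W).max(axis=1))

# Smooth: X_hat = X / s, W_hat = diag(s) @ W, so X_hat @ W_hat == X @ W.
X_hat = X / s
W_hat = W * s[:, None]

print(np.allclose(X @ W, X_hat @ W_hat))     # True: output is preserved
print(np.abs(X).max(), np.abs(X_hat).max())  # activation outliers are flattened
```

After smoothing, both `X_hat` and `W_hat` have more uniform per-channel ranges, which is what makes simple 8-bit quantization of both tensors viable.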
