Popular repositories
folk_AutoAWQ (Python) Public, forked from casper-hansen/AutoAWQ
AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference.
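The AWQ and GPTQ repositories above both target 4-bit weight quantization. As a rough illustration of the core idea (not AutoAWQ's actual implementation, which adds activation-aware scaling and packed kernels), a minimal sketch of symmetric per-group 4-bit quantization might look like this; the `group_size` of 4 and the helper names are assumptions for the example only:

```python
import numpy as np

def quantize_4bit(w, group_size=4):
    """Symmetric per-group 4-bit quantization: map each group of weights
    to integers in [-7, 7], storing one float scale per group.
    Illustrative sketch only, not AutoAWQ's real algorithm."""
    w = np.asarray(w, dtype=np.float32).reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0  # avoid division by zero for all-zero groups
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from 4-bit integers and scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

# Round-trip a small weight vector: each element is recovered to within
# half a quantization step of its group.
w = np.array([0.12, -0.5, 0.33, 0.07, 1.2, -0.9, 0.4, 0.05], dtype=np.float32)
q, s = quantize_4bit(w)
w_hat = dequantize_4bit(q, s)
```

Real libraries additionally pack two 4-bit values per byte and choose scales to minimize the error on actual activations, which is where the speed and accuracy gains come from.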
folk_AutoGPTQ (Python) Public, forked from AutoGPTQ/AutoGPTQ
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
folk_unsloth (Python) Public, forked from unslothai/unsloth
Fine-tune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory.
folk_transformers (Python) Public, forked from huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.