From dae7041a401c48984863db622919d84603469848 Mon Sep 17 00:00:00 2001
From: Alejandro Rodríguez Salamanca
Date: Mon, 15 May 2023 22:33:16 +0200
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 727a86cb5..5cf1e6ee0 100644
--- a/README.md
+++ b/README.md
@@ -102,7 +102,7 @@ For straight Int8 matrix multiplication with mixed precision decomposition you c
 bnb.matmul(..., threshold=6.0)
 ```
 
-For instructions how to use LLM.int8() inference layers in your own code, see the TL;DR above or for extended instruction see [this blog post](https://github.com/huggingface/transformers).
+For instructions how to use LLM.int8() inference layers in your own code, see the TL;DR above or for extended instruction see [this blog post](https://huggingface.co/blog/hf-bitsandbytes-integration).
 
 ### Using the 8-bit Optimizers