From ae6c1ec4a6e8b05ae28ecee6c8f9c3c8fd1e36cc Mon Sep 17 00:00:00 2001
From: mobicham <37179323+mobicham@users.noreply.github.com>
Date: Tue, 21 Nov 2023 15:57:28 +0100
Subject: [PATCH] typo

---
 index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/index.html b/index.html
index 3d73b5a..f595c18 100644
--- a/index.html
+++ b/index.html
@@ -1425,7 +1425,7 @@

LLama2 Benchmark

ViT Benchmark

-We evaluate the effectiveness of our quantization method on vision models as well. More specifically, we quantize various OpenCLIP models from the Visual Transformers (ViT) family trained on the LAION dataset. Since Auto-GPTQ and Auto-AWQ calibration only work with text inputs, we can only evaluate against bitsandbytes by replacing all the linear layers inside the transformer blocks with their quantized versions.
+We evaluate the effectiveness of our quantization method on vision models as well. More specifically, we quantize various OpenCLIP models from the Visual Transformers (ViT) family trained on the LAION dataset. Since Auto-GPTQ and Auto-AWQ calibration only works with text inputs, we can only evaluate against bitsandbytes by replacing all the linear layers inside the transformer blocks with their quantized versions.
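Not part of the patch above: as a hedged illustration of the evaluation setup the changed paragraph describes, the sketch below swaps the nn.Linear layers inside an OpenCLIP ViT's transformer blocks for bitsandbytes 8-bit layers. The model name, the pretrained tag, and the `visual.transformer.resblocks` attribute path are assumptions based on the public open_clip and bitsandbytes APIs, not details taken from this commit.

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb
import open_clip


def swap_linear_for_8bit(module: nn.Module) -> None:
    """Recursively replace nn.Linear children with bitsandbytes Linear8bitLt."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            qlinear = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # keep int8 weights after the swap
            )
            # Int8Params quantizes the fp16 weight when the module is moved to CUDA.
            qlinear.weight = bnb.nn.Int8Params(child.weight.data, requires_grad=False)
            if child.bias is not None:
                qlinear.bias = nn.Parameter(child.bias.data, requires_grad=False)
            setattr(module, name, qlinear)
        else:
            swap_linear_for_8bit(child)


# Assumed OpenCLIP ViT checkpoint trained on LAION; any ViT/pretrained pair works here.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model = model.half()  # bitsandbytes int8 kernels expect fp16 activations

# Only the linear layers inside the visual transformer blocks are replaced,
# mirroring the comparison described in the paragraph.
swap_linear_for_8bit(model.visual.transformer.resblocks)

model = model.cuda()  # moving to CUDA triggers the actual int8 quantization
```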