LLM.int8() is a quantization method that doesn't significantly degrade performance, which makes large model inference more accessible. The key is to extract the outliers from the inputs and weights and multiply them in 16-bit. All other values are quantized to int8, multiplied in 8-bit, and then dequantized back to 16-bit. The outputs from the 16-bit and 8-bit multiplications are combined to produce the final output.
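The decomposition can be sketched with NumPy (an illustrative sketch only; the actual bitsandbytes kernels run on the GPU and use vector-wise quantization with a configurable outlier threshold):

```python
import numpy as np

def int8_matmul_decomposed(X, W, threshold=6.0):
    """Sketch of the LLM.int8() mixed-precision decomposition.

    Columns of X whose absolute maximum exceeds `threshold` are treated
    as outlier features and multiplied with the matching rows of W in
    16-bit. All other values are quantized to int8, multiplied in 8-bit,
    and dequantized back to 16-bit.
    """
    # Identify outlier feature dimensions (columns of X).
    outlier_cols = np.abs(X).max(axis=0) > threshold
    regular_cols = ~outlier_cols

    # 16-bit path: outlier columns of X times the matching rows of W.
    out_fp16 = X[:, outlier_cols].astype(np.float16) @ W[outlier_cols].astype(np.float16)

    if regular_cols.any():
        # 8-bit path: absmax-quantize the remaining values to int8 ...
        Xr, Wr = X[:, regular_cols], W[regular_cols]
        sx = np.abs(Xr).max(axis=1, keepdims=True) / 127.0  # per-row scale of X
        sw = np.abs(Wr).max(axis=0, keepdims=True) / 127.0  # per-column scale of W
        Xq = np.round(Xr / sx).astype(np.int8)
        Wq = np.round(Wr / sw).astype(np.int8)

        # ... multiply in 8-bit (accumulating in int32), then dequantize.
        out_int8 = (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)
    else:
        out_int8 = 0.0

    # Combine the 16-bit and 8-bit outputs to produce the final output.
    return (out_fp16 + out_int8).astype(np.float16)
```

Because only the few outlier dimensions take the 16-bit path, the bulk of the matrix multiplication runs in int8, which is where the memory savings come from.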
[[autodoc]] bitsandbytes.nn.Linear8bitLt
    - __init__

[[autodoc]] bitsandbytes.nn.Int8Params
    - __init__