From 2294884fe12c2892e21db1a8faff801c19423803 Mon Sep 17 00:00:00 2001
From: sayakpaul
Date: Fri, 17 Apr 2026 17:28:56 +0530
Subject: [PATCH] add a mention of torchao and other backends in speed memory
 docs.

---
 docs/source/en/optimization/speed-memory-optims.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/source/en/optimization/speed-memory-optims.md b/docs/source/en/optimization/speed-memory-optims.md
index 80c6c79a3c83..08cf933494a5 100644
--- a/docs/source/en/optimization/speed-memory-optims.md
+++ b/docs/source/en/optimization/speed-memory-optims.md
@@ -33,6 +33,8 @@ The table below provides a comparison of optimization strategy combinations and
 
 This guide will show you how to compile and offload a quantized model with [bitsandbytes](../quantization/bitsandbytes#torchcompile). Make sure you are using [PyTorch nightly](https://pytorch.org/get-started/locally/) and the latest version of bitsandbytes.
 
+While we use bitsandbytes in this example, other quantization backends such as [TorchAO](../quantization/torchao.md) also support these features.
+
 ```bash
 pip install -U bitsandbytes
 ```