From d97e9151637be3c80e3f8ac32ade1ea8191deefe Mon Sep 17 00:00:00 2001
From: William Zhang <108840645+WilliamZhang20@users.noreply.github.com>
Date: Wed, 10 Dec 2025 19:37:32 -0500
Subject: [PATCH] update hyperlink

---
 docs/source/tutorials_source/pt2e_quant_qat.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/tutorials_source/pt2e_quant_qat.rst b/docs/source/tutorials_source/pt2e_quant_qat.rst
index d8eb013d70..87422b3e28 100644
--- a/docs/source/tutorials_source/pt2e_quant_qat.rst
+++ b/docs/source/tutorials_source/pt2e_quant_qat.rst
@@ -5,7 +5,7 @@ PyTorch 2 Export Quantization-Aware Training (QAT)
 
 This tutorial shows how to perform quantization-aware training (QAT) in
 graph mode based on `torch.export.export `_.
 For more details about PyTorch 2 Export Quantization in general, refer
-to the `post training quantization tutorial `_
+to the `post training quantization tutorial `_
 The PyTorch 2 Export QAT flow looks like the following—it is similar to
 the post training quantization (PTQ) flow for the most part: