From 56038ad400c1941633cde6604260bbfa278c9086 Mon Sep 17 00:00:00 2001
From: "Huang, Tai"
Date: Wed, 16 Oct 2024 21:27:20 +0800
Subject: [PATCH] fix broken link to FP8 example

Signed-off-by: Huang, Tai
---
 docs/source/3x/PT_FP8Quant.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/3x/PT_FP8Quant.md b/docs/source/3x/PT_FP8Quant.md
index 06fd37b367f..51b681787ef 100644
--- a/docs/source/3x/PT_FP8Quant.md
+++ b/docs/source/3x/PT_FP8Quant.md
@@ -107,7 +107,7 @@ model = convert(model)
 | Task | Example |
 |----------------------|---------|
-| Computer Vision (CV) | [Link](../../examples/3.x_api/pytorch/cv/fp8_quant/) |
+| Computer Vision (CV) | [Link](../../../examples/3.x_api/pytorch/cv/fp8_quant/) |
 | Large Language Model (LLM) | [Link](https://github.com/huggingface/optimum-habana/tree/main/examples/text-generation#running-with-fp8) |
 
 > Note: For LLM, Optimum-habana provides higher performance based on modified modeling files, so here the Link of LLM goes to Optimum-habana, which utilize Intel Neural Compressor for FP8 quantization internally.