diff --git a/docs/source/3x/PT_FP8Quant.md b/docs/source/3x/PT_FP8Quant.md
index 06fd37b367f..51b681787ef 100644
--- a/docs/source/3x/PT_FP8Quant.md
+++ b/docs/source/3x/PT_FP8Quant.md
@@ -107,7 +107,7 @@ model = convert(model)
 
 | Task | Example |
 |----------------------|---------|
-| Computer Vision (CV) | [Link](../../examples/3.x_api/pytorch/cv/fp8_quant/) |
+| Computer Vision (CV) | [Link](../../../examples/3.x_api/pytorch/cv/fp8_quant/) |
 | Large Language Model (LLM) | [Link](https://github.com/huggingface/optimum-habana/tree/main/examples/text-generation#running-with-fp8) |
 
 > Note: For LLM, Optimum-habana provides higher performance based on modified modeling files, so here the Link of LLM goes to Optimum-habana, which utilize Intel Neural Compressor for FP8 quantization internally.