diff --git a/docs/source/conf.py b/docs/source/conf.py
index ef9fe1445a..6452c5d6d3 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -86,4 +86,6 @@
 .. _ncnn: https://github.com/tencent/ncnn
 .. _LibriSpeech: https://www.openslr.org/12
 .. _musan: http://www.openslr.org/17/
+.. _ONNX: https://github.com/onnx/onnx
+.. _onnxruntime: https://github.com/microsoft/onnxruntime
 """
diff --git a/docs/source/model-export/export-onnx.rst b/docs/source/model-export/export-onnx.rst
index 83c8440b5a..ddcbc965fe 100644
--- a/docs/source/model-export/export-onnx.rst
+++ b/docs/source/model-export/export-onnx.rst
@@ -1,20 +1,21 @@
 Export to ONNX
 ==============
 
-In this section, we describe how to export the following models to ONNX.
+In this section, we describe how to export models to `ONNX`_.
 
 In each recipe, there is a file called ``export-onnx.py``, which is used
-to export trained models to ONNX.
+to export trained models to `ONNX`_.
 
 There is also a file named ``onnx_pretrained.py``, which you can use
-the exported ONNX model in Python to decode sound files.
+to run the exported `ONNX`_ model with `onnxruntime`_ in Python and decode sound files.
 
 Example
 =======
 
 In the following, we demonstrate how to export a streaming Zipformer pre-trained
-model from ``_
-to ONNX.
+model from
+``_
+to `ONNX`_.
 
 Download the pre-trained model
 ------------------------------
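
For context on the ``onnx_pretrained.py`` workflow this diff documents, the sketch below shows the general pattern of loading an exported `ONNX`_ model with `onnxruntime`_ and running one forward pass. It is an illustration only, not the recipe's actual script: the file name ``model.onnx``, the dummy input shape, and the single-model layout are assumptions (the exported streaming Zipformer is split into several ONNX files with recipe-specific inputs and outputs).

.. code-block:: python

    import numpy as np
    import onnxruntime as ort

    # Load the exported model. "model.onnx" is a placeholder name;
    # export-onnx.py in a real recipe writes recipe-specific file names.
    session = ort.InferenceSession(
        "model.onnx", providers=["CPUExecutionProvider"]
    )

    # Inspect the graph to discover the actual input names, shapes, and types.
    for inp in session.get_inputs():
        print(inp.name, inp.shape, inp.type)

    # Run one forward pass on dummy 80-dim fbank features of shape
    # (batch, num_frames, feature_dim). Real decoding would feed features
    # computed from a sound file and then decode the outputs into tokens.
    features = np.zeros((1, 100, 80), dtype=np.float32)
    outputs = session.run(None, {session.get_inputs()[0].name: features})
    print([out.shape for out in outputs])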