Triton
======

Nvidia Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs.

The following pages describe how to deploy ASR models trained by icefall using Triton; a minimal client sketch is shown after the contents below.

.. toctree::

   ./installation/index
   ./server/index
   ./client/index
   ./perf/index
   ./trt/index
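
As a preview of the workflow covered in the client pages, here is a minimal sketch of sending a single utterance to a running Triton server with the ``tritonclient`` Python package. The model name (``transducer``) and the tensor names (``WAV``, ``WAV_LENS``, ``TRANSCRIPTS``) are illustrative assumptions; check the ``config.pbtxt`` files of the model repository you actually deploy.

.. code-block:: python

   import numpy as np
   import soundfile as sf
   import tritonclient.grpc as grpcclient

   # Connect to a running Triton server (8001 is Triton's default gRPC port).
   client = grpcclient.InferenceServerClient(url="localhost:8001")

   # Load a mono wav file; icefall models typically expect 16 kHz audio.
   samples, sample_rate = sf.read("test.wav", dtype="float32")
   samples = samples.reshape(1, -1)
   lengths = np.array([[samples.shape[1]]], dtype=np.int32)

   # NOTE: the model and tensor names below are assumptions for illustration.
   inputs = [
       grpcclient.InferInput("WAV", list(samples.shape), "FP32"),
       grpcclient.InferInput("WAV_LENS", list(lengths.shape), "INT32"),
   ]
   inputs[0].set_data_from_numpy(samples)
   inputs[1].set_data_from_numpy(lengths)

   outputs = [grpcclient.InferRequestedOutput("TRANSCRIPTS")]

   response = client.infer(
       model_name="transducer", inputs=inputs, outputs=outputs
   )
   print(response.as_numpy("TRANSCRIPTS"))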