NVIDIA Triton Inference Server Organization

NVIDIA Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs.

This top-level GitHub organization hosts repositories for officially supported backends, including TensorRT, TensorFlow, PyTorch, Python, ONNX Runtime, and OpenVINO. The organization also hosts several popular Triton tools, including:

  • Model Analyzer: a tool that analyzes the runtime performance of a model and provides an optimized model configuration for Triton Inference Server.

  • Model Navigator: a tool that automates moving a model from its source format to an optimal format and configuration for deployment on Triton Inference Server.

Getting Started

To learn about NVIDIA Triton Inference Server, refer to the Triton developer page and read our Quickstart Guide. Official Triton Docker containers are available from NVIDIA NGC.
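To make the first interaction concrete, here is a minimal sketch of checking a running server and sending one inference request with the tritonclient Python package. It assumes a Triton server is already up on localhost:8000 and serving a model named "simple" with an FP32 input "INPUT0" and output "OUTPUT0"; those model and tensor names are illustrative assumptions, not part of the Quickstart Guide.

```python
# Minimal sketch using the tritonclient HTTP API (pip install tritonclient[http]).
# Assumes a Triton server on localhost:8000 serving a hypothetical model
# "simple" with FP32 input "INPUT0" and output "OUTPUT0".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Confirm the server and the model are ready before sending requests.
assert client.is_server_ready()
assert client.is_model_ready("simple")

# Build a request with one [1, 16] FP32 input tensor.
data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(
    "simple",
    inputs=[inp],
    outputs=[httpclient.InferRequestedOutput("OUTPUT0")],
)
print(result.as_numpy("OUTPUT0"))
```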

Product Documentation

User documentation on Triton features, APIs, and architecture is located in the server documents on GitHub. A table of contents for the user documentation is located in the server README file.

Release notes, the support matrix, and license information are available in the NVIDIA Triton Inference Server Documentation.

Examples

Specific end-to-end examples for popular models, such as ResNet, BERT, and DLRM, are located on the NVIDIA Deep Learning Examples page on GitHub. Additional generic examples can be found in the server documents.

Feedback

Share feedback or ask questions about NVIDIA Triton Inference Server by filing a GitHub issue.

Pinned

  1. server: The Triton Inference Server provides an optimized cloud and edge inferencing solution.

  2. core: The core library and APIs implementing the Triton Inference Server.

  3. backend: Common source, scripts, and utilities for creating Triton backends.

  4. client: Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.

  5. model_analyzer: Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

  6. model_navigator: Triton Model Navigator is an inference toolkit designed for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs.

Repositories

A selection of the organization's 34 repositories:

  • client: Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.

  • triton_cli: Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by the Triton Inference Server.

  • server: The Triton Inference Server provides an optimized cloud and edge inferencing solution.

  • tutorials: Tutorials and examples for Triton Inference Server.

  • core: The core library and APIs implementing the Triton Inference Server.

  • tensorrt_backend: The Triton backend for TensorRT.

  • python_backend: Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python (see the sketch after this list).

  • onnxruntime_backend: The Triton backend for the ONNX Runtime.

  • model_analyzer: Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.
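Because the python_backend repository is the usual entry point for writing custom pre- and post-processing logic, here is a minimal sketch of the model.py file such a backend loads. The TritonPythonModel interface and the triton_python_backend_utils module come from that repository; the tensor names INPUT0/OUTPUT0 and the pass-through logic are illustrative assumptions that would have to match the model's config.pbtxt.

```python
# Minimal sketch of a python_backend model.py. The TritonPythonModel class
# name and triton_python_backend_utils module are what the backend loads;
# tensor names INPUT0/OUTPUT0 are hypothetical and must match config.pbtxt.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # args is a dict with entries such as "model_name" and "model_config".
        self.model_name = args["model_name"]

    def execute(self, requests):
        # Triton batches incoming requests; return one response per request.
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            # Illustrative "processing": pass the input through unchanged.
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy())
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses

    def finalize(self):
        # Optional cleanup when the model is unloaded.
        pass
```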
