Triton version 2.33.0 corresponding to NGC container 23.04 is now Released! #5700
tanmayv25 announced in Announcements
New Release Available
https://github.com/triton-inference-server/server/releases/tag/v2.33.0
What's New in 2.33.0
Triton can now load models concurrently, reducing server start-up time.
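The degree of load concurrency is set at launch time. A minimal sketch, assuming the `--model-load-thread-count` option available in recent Triton releases (confirm with `tritonserver --help` for your build):

```shell
# Launch Triton and load models from the repository in parallel.
# --model-load-thread-count controls how many models load concurrently.
tritonserver \
  --model-repository=/models \
  --model-load-thread-count=8
```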
The sequence batcher with the direct scheduling strategy now includes experimental support for a schedule policy.
Triton’s ragged batching support has been extended to the PyTorch backend.
Triton can now forward HTTP/GRPC headers as inference request parameters to the backend.
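Header forwarding is opt-in and configured per protocol with a regular expression. A hedged sketch, assuming the `--http-header-forward-pattern` and `--grpc-header-forward-pattern` options described in the 23.04 documentation (the `my-app-.*` pattern is illustrative):

```shell
# Forward any HTTP/GRPC header whose name matches the regex to the
# backend as an inference request parameter.
tritonserver \
  --model-repository=/models \
  --http-header-forward-pattern="my-app-.*" \
  --grpc-header-forward-pattern="my-app-.*"
```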
Triton Python backend’s business logic scripting (BLS) now allows developers to select a specific device to receive output tensors from a BLS call.
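Per the 23.04 BLS documentation, device selection is exposed through a `preferred_memory` argument on `pb_utils.InferenceRequest`. This fragment only runs inside a Triton Python-backend `model.py` (the model and tensor names are hypothetical), so treat it as an illustrative sketch rather than standalone code:

```python
# Fragment of a Triton Python-backend model.py; requires the Triton runtime.
import triton_python_backend_utils as pb_utils

def bls_call(input_tensor):
    request = pb_utils.InferenceRequest(
        model_name="my_model",                    # hypothetical model name
        inputs=[input_tensor],
        requested_output_names=["OUTPUT0"],       # hypothetical output name
        # New in 23.04: ask for BLS output tensors on a specific device.
        preferred_memory=pb_utils.PreferredMemory(
            pb_utils.TRITONSERVER_MEMORY_GPU, 0),  # GPU device id 0
    )
    response = request.exec()
    return pb_utils.get_output_tensor_by_name(response, "OUTPUT0")
```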
Triton latency metrics can now be obtained as configurable quantiles over a sliding time window using experimental metrics summary support.
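The summary metrics are enabled and tuned via `--metrics-config`. A sketch assuming the `summary_latencies` and `summary_quantiles` keys from the 23.04 metrics documentation (each pair is `quantile:allowed-error`; verify the syntax with `tritonserver --help`):

```shell
# Enable experimental summary metrics and choose the reported quantiles.
tritonserver \
  --model-repository=/models \
  --metrics-config summary_latencies=true \
  --metrics-config summary_quantiles="0.5:0.05,0.9:0.01,0.99:0.001"
```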
Users can now restrict access to the protocols exposed on a given Triton endpoint.
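Restriction is expressed as a list of protocols or APIs gated behind a required header. A hedged sketch, assuming the `--grpc-restricted-protocol` and `--http-restricted-api` options from the 23.04 documentation (the protocol names, header key, and value below are illustrative):

```shell
# Only callers that send the header admin-key: admin-value may use the
# listed gRPC protocols / HTTP APIs.
tritonserver \
  --model-repository=/models \
  --grpc-restricted-protocol=shared-memory,model-repository:admin-key=admin-value \
  --http-restricted-api=model-repository:admin-key=admin-value
```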
Triton now provides limited support for tracing inference requests using the OpenTelemetry Trace APIs.
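OpenTelemetry tracing is selected through `--trace-config`. A sketch assuming the `mode` and `opentelemetry,url` keys described in the Triton trace documentation; the collector URL is illustrative:

```shell
# Emit OpenTelemetry traces instead of Triton's default trace format.
tritonserver \
  --model-repository=/models \
  --trace-config mode=opentelemetry \
  --trace-config opentelemetry,url=http://localhost:4318/v1/traces
```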
Model Analyzer now supports BLS Models.
Refer to the 23.04 column of the Frameworks Support Matrix for container image versions on which the 23.04 inference server container is based.
Known Issues
The TensorFlow backend no longer supports TensorFlow version 1.
Triton Inferentia guide is out of date. Some users have reported issues with running Triton on AWS Inferentia instances.
Some systems which implement malloc() may not release memory back to the operating system right away, causing an apparent memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.
Auto-complete may cause an increase in server start time. To avoid a start-time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.
Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: Adding model metadata in TorchScript model file pytorch/pytorch#38273.
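The tcmalloc mitigation mentioned above can be sketched as follows; the library path is typical for the Ubuntu-based Triton container, so verify it on your system:

```shell
# Preload tcmalloc (shipped in the Triton container) so freed memory is
# returned to the OS more promptly than with the default malloc.
LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libtcmalloc.so.4:${LD_PRELOAD} \
  tritonserver --model-repository=/models
```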
Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.
The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.
Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to "JIT vs. eager mismatches for jit.traced int8 to int32 casting" pytorch/pytorch#66930 for more information.
Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).
Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.