Releases: pytorch/serve

TorchServe v0.11.0 Release Notes

17 May 03:59
34bc370

This is the release of TorchServe v0.11.0.

Highlights Include

  • GenAI inference optimizations, showcasing
    • torch.compile with the OpenVINO backend for Stable Diffusion
    • Intel IPEX for Llama
  • Experimental support for Apple MPS and linux-aarch64
  • Security bug fixes

GenAI

  • Upgraded Llama2 examples to Llama3
    • Supported Llama3 in HuggingFace Accelerate Example #3108 @mreso
    • Supported Llama3 in chat bot #3131 @mreso
    • Supported Llama3 on inf2 Neuronx transformer using continuous batching or micro batching #3133 #3035 @lxning
  • Examples for LoRA and Mistral #3077 @lxning
  • IPEX LLM serving example with Intel AMX #3068 @bbhattar
  • Integrated Intel OpenVINO with TorchServe using torch.compile, with an example showcasing the openvino torch.compile backend with Stable Diffusion (a minimal sketch follows this list) #3116 @suryasidd
  • Enabled stateful inference over HTTP with guaranteed sequential ordering of input sequences at low latency, extending this previously gRPC-only feature #3142 @lxning
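
For reference, a minimal sketch of what the openvino torch.compile backend looks like in plain PyTorch, assuming the openvino package is installed (importing openvino.torch registers the backend); the toy model and input shapes are illustrative and not taken from the Stable Diffusion example:

    import torch
    import openvino.torch  # assumption: registers the "openvino" torch.compile backend

    model = torch.nn.Sequential(
        torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4)
    ).eval()

    compiled = torch.compile(model, backend="openvino")
    with torch.no_grad():
        out = compiled(torch.randn(2, 8))  # first call compiles, later calls are fast
    print(out.shape)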

Linux aarch64 Support:

TorchServe adds support for linux-aarch64 and shows an example working on AWS Graviton. This provides users with a new platform alternative for serving models on CPU.

Apple Silicon Support:

XGBoost Support:

With the XGBoost Classifier example, we show how to deploy any pickled model with TorchServe.
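
For illustration, a minimal sketch of a custom handler in that spirit, assuming a scikit-learn-style pickled classifier stored as model.pkl inside the model archive; the class, file and field names are hypothetical and not the repository's actual handler code:

    # pickled_model_handler.py -- illustrative sketch only
    import json
    import os
    import pickle

    from ts.torch_handler.base_handler import BaseHandler


    class PickledModelHandler(BaseHandler):
        """Serve any pickled, scikit-learn-style model (e.g. an XGBoost classifier)."""

        def initialize(self, context):
            model_dir = context.system_properties.get("model_dir")
            with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
                self.model = pickle.load(f)
            self.initialized = True

        def preprocess(self, data):
            # Expect a JSON list of feature rows in the request body.
            body = data[0].get("body") or data[0].get("data")
            if isinstance(body, (bytes, bytearray)):
                body = body.decode("utf-8")
            return json.loads(body) if isinstance(body, str) else body

        def inference(self, rows):
            return self.model.predict(rows).tolist()

        def postprocess(self, preds):
            return [preds]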

Security

The ability to bypass allowed_urls using relative paths has been fixed by adding a preemptive check for relative paths before the model archive is copied to the model store directory. Also, the default gRPC inference and management addresses are now set to localhost (127.0.0.1) to reduce the scope of default access to the gRPC endpoints.

C++ Backend

Documentation

Improvements and Bug Fixing

Platform Support

Ubuntu 20.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support Matrix

TorchServe version | PyTorch version | Python | Stable CUDA | Experimental CUDA
0.11.0 | 2.3.0 | >=3.8, <=3.11 | CUDA 11.8, CUDNN 8.7.0.84 | CUDA 12.1, CUDNN 8.9.2.26
0.10.0 | 2.2.1 | >=3.8, <=3.11 | CUDA 11.8, CUDNN 8.7.0.84 | CUDA 12.1, CUDNN 8.9.2.26
0.9.0 | 2.1 | >=3.8, <=3.11 | CUDA 11.8, CUDNN 8.7.0.84 | CUDA 12.1, CUDNN 8.9.2.26
0.8.0 | 2.0 | >=3.8, <=3.11 | CUDA 11.7, CUDNN 8.5.0.96 | CUDA 11.8, CUDNN 8.7.0.84
0.7.0 | 1.13 | >=3.7, <=3.10 | CUDA 11.6, CUDNN 8.3.2.44 | CUDA 11.7, CUDNN 8.5.0.96

Inferentia2 Support Matrix

TorchServe version | PyTorch version | Python | Neuron SDK
0.11.0 | 2.1 | >=3.8, <=3.11 | 2.18.2+
0.10.0 | 1.13 | >=3.8, <=3.11 | 2.16+
0.9.0 | 1.13 | >=3.8, <=3.11 | 2.13.2+

TorchServe v0.10.0 Release Notes

15 Mar 00:03

This is the release of TorchServe v0.10.0.

Highlights include

  • Extended support for PyTorch 2.x inference
  • C++ backend
  • GenAI fast series torch.compile showcase examples
  • Token authentication support for enhanced security.

C++ Backend

TorchServe presented the experimental C++ backend at the PyTorch Conference 2022. Similar to the Python backend, the C++ backend also runs as a process and uses the BaseHandler to define the APIs for customizing the handler. By providing a backend and handler written in pure C++ for TorchServe, it is now possible to deploy PyTorch models without any Python overhead. This release officially promoted the experimental branch to the master branch and added additional examples and Docker images for development.

torch.compile

With the launch of PT2 Inference at the PyTorch Conference 2023, we have added several key examples showcasing out-of-the-box speedups for torch.compile and AOT compile. Since there is no new development being done in TorchScript, starting with this release TorchServe is preparing the migration path for customers to switch from TorchScript to torch.compile.
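
As a quick illustration of how a model can opt into torch.compile through its model config, a minimal sketch of model_config.yaml; the exact layout of the pt2 key is an assumption based on the examples/pt2 README and has changed between releases, so treat it as a guide rather than the authoritative format:

    # model_config.yaml -- illustrative; see examples/pt2/README.md for the current schema
    minWorkers: 1
    maxWorkers: 1
    pt2:
      backend: inductor       # any torch.compile backend, e.g. inductor or openvino
      mode: reduce-overhead   # suited to the small batch sizes common in inference

The file is passed to torch-model-archiver via --config-file, as described in the v0.8.0 notes further down.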

GenAI torch.compile series

The fast series of GenAI models (GPTFast, SegmentAnythingFast, DiffusionFast) deliver 3-10x speedups using torch.compile and native PyTorch optimizations.

Cold start problem solution

To address cold start problems, an example is included that shows how torch._export.aot_load (an experimental API) can be used to load a pre-compiled model. TorchServe has also started benchmarking models with torch.compile and tracking their performance compared to TorchScript.
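
A minimal sketch of that ahead-of-time flow, assuming a recent PyTorch 2.x build where torch._export.aot_compile and torch._export.aot_load are available as experimental APIs; the toy model and output path are illustrative:

    import torch

    class TinyModel(torch.nn.Module):
        def forward(self, x):
            return torch.nn.functional.relu(x) * 2.0

    model = TinyModel().eval()
    example_inputs = (torch.randn(4, 8),)

    # Offline step: compile once with AOTInductor and save a shared library.
    so_path = torch._export.aot_compile(
        model, example_inputs, options={"aot_inductor.output_path": "/tmp/tiny_model.so"}
    )

    # Serving step: load the pre-compiled artifact, avoiding compile-on-first-request.
    compiled = torch._export.aot_load(so_path, device="cpu")
    print(compiled(torch.randn(4, 8)))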

The new TorchServe C++ backend also includes torch.compile and AOTInductor related examples for ResNet50, BERT and Llama2.

  1. torch.compile
    a. Example torch.compile with image classifier model densenet161 #2915 @agunapal
    b. Example torch._export.aot_compile with image classification model ResNet-18 #2832 #2906 #2932 #2948 @agunapal
    c. Example torch inductor fx graph caching with image classification model densenet161 #2925 @agunapal

  2. C++ AOTInductor
    a. Example AOT Inductor with Llama2 #2913 @mreso
    b. Example AOT Inductor with ResNet-50 #2944 @lxning
    c. Example AOT Inductor with BERTSequenceClassification #2931 @lxning

Gen AI

  • Supported sequence batching for stateful inference in gRPC bi-directional streaming #2513 @lxning
  • The fast series Gen AI models using torch.compile and native PyTorch optimizations.
  • Example Mistral 7B with vLLM #2781 @agunapal
  • Example PyTorch native tensor parallel with Llama2 with continuous batching #2709 @mreso @HamidShojanazeri
  • Supported inf2 Neuronx transformer continuous batching for both no-code and advanced customers, with a Llama2-70B example #2803 #3016 @lxning
  • Example DeepSpeed MII FastGen with Llama2-13B #2779 @lxning

Security

TorchServe has implemented token authentication for the management and inference APIs. This is an optional config and can be enabled using the torchserve-endpoint-plugin, which can be downloaded from Maven. This further strengthens TorchServe's capability as a secure model serving solution. The security features of TorchServe are documented here.

Apple Silicon Support

TorchServe is now supported on Apple Silicon Macs. The current support is CPU only. We have also posted an RFC for the deprecation of x86 Mac support.

KServe Updates

While serving large models, model loading can take some time even though the pod is running. Although TorchServe is up, the worker is not ready until the model is loaded. To address this, TorchServe now sets the model ready status in KServe only after the model has been loaded on the workers. TorchServe also includes native open inference protocol support in gRPC. This is an experimental feature.

  • Supported native KServe open inference protocol in gRPC #2609 @andyi2it
  • Refactored TorchServe configuration in KServe #2995 @sgaist
  • Improved KServe protocol version handling #2957 @sgaist
  • Updated KServe test script to return model version #2973 @agunapal
  • Set model status using TorchServe API in KServe #1878 @byeongjokim
  • Supported no-archive model archiver in KServe #2839 @agunapal
  • How to deploy MNIST using KServe with minikube #2718 @agunapal
  • Changes to support no-model archive mode with KServe #2839 @agunapal

Metrics Updates

To extend backwards compatibility support for metrics, auto-detection of backend metrics now provides the flexibility to publish custom model metrics without having to explicitly specify them in the metrics configuration file. Furthermore, a customized script to collect system metrics is also now supported.
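
For illustration, a sketch of publishing a custom model metric from a handler using the add_metric API described in the v0.8.2 notes below; the handler name, metric name and dimensions are made up, and with backend metric auto-detection the metric no longer has to be pre-declared in the metrics configuration file:

    # Illustrative handler snippet -- names are hypothetical
    from ts.metrics.metric_type_enum import MetricTypes
    from ts.torch_handler.base_handler import BaseHandler


    class MyHandler(BaseHandler):
        def postprocess(self, data):
            # Auto-detected custom metric: emitted even if absent from the metrics config file
            self.context.metrics.add_metric(
                name="PostprocessedItems",
                value=len(data),
                unit="count",
                dimensions=[],
                metric_type=MetricTypes.GAUGE,
            )
            return data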

Improvements and Bug Fixing

Documentation

Platform Support

Ubuntu 20.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support Matrix

TorchServe version | PyTorch version | Python | Stable CUDA | Experimental CUDA
0.10.0 | 2.2.1 | >=3.8, <=3.11 | CUDA 11.8, CUDNN 8.7.0.84 | CUDA 12.1, CUDNN 8.9.2.26
0.9.0 | 2.1 | >=3.8, <=3.11 | CUDA 11.8, CUDNN 8.7.0.84 | CUDA 12.1, CUDNN 8.9.2.26
0.8.0 | 2.0 | >=3.8, <=3.11 | CUDA 11.7, CUDNN 8.5.0.96 | CUDA 11.8, CUDNN 8.7.0.84
0.7.0 | 1.13 | >=3.7, <=3.10 | CUDA 11.6, CUDNN 8.3.2.44 | CUDA 11.7, CUDNN 8.5.0.96

TorchServe v0.9.0 Release Notes

13 Oct 00:21
db47936

This is the release of TorchServe v0.9.0.

Security

Our security process is documented here

We rely heavily on automation to improve the security of TorchServe, namely by:

  1. Updating our Gradle and pip dependencies on a monthly basis
  2. Docker scanning via Snyk
  3. Code analysis via CodeQL

A key point to remember is that TorchServe will allow you to configure things in an insecure way, so make sure to read our security docs and the relevant security warnings to ensure your product is secure in production. In general, we do not encourage you to download untrusted .mar files from the internet; running a .mar file is effectively running arbitrary Python code, so make sure to unzip .mar files and validate that they are not doing anything suspicious.

Code scanning fixes

  1. Used Sha-256 in ziputils #2629 @msaroufim
  2. Verified default hostname in Test #2631 @msaroufim
  3. Fixed zip slip error #2634 @msaroufim
  4. Used string array as Process arguments input #2632 #2635 @msaroufim
  5. Enabled Netty HTTP header validation as default #2630 @msaroufim
  6. Verified 3rd party package installation path #2687 @lxning
  7. Allowed URL validation #2685 @lxning, including:
    • Disabled loading TS_ALLOWED_URLS from env by default.
    • Moved the model URL validation to the last step.
    • Sanity-checked the model archive name to guard against uncontrolled data used in a path expression.

Address configuration updates

  1. Updated default address from 0.0.0.0 to 127.0.0.1 #2624 #2704 @namannandan @agunapal
  2. Bind container ports to localhost ports #2646 @namannandan

Documentation improvements

  1. Updated security readme #2643 #2690 @msaroufim @agunapal
  2. Updated security guidance in docker readme #2669 @agunapal

Dependency improvements

  1. Created dependabot.yml #2642 #2675 @msaroufim
  2. Bumped packaging from 23.1 to 23.2
  3. Bumped pygit2 from 1.21.1 to 1.13.1
  4. Bumped com.github.spotbugs from 4.0.2 to 5.1.3
  5. Bumped ONNX from 1.14.0 to 1.14.1
  6. Bumped Pillow from 9.3.0 to 10.0.1
  7. Bumped com.amazonaws:DynamoDBLocal from 1.13.2 to 2.0.0
  8. Upgraded node to version 18 #2663 @agunapal

Blogs

New Features

New Examples

  1. Deploy Llama2 on Inferentia2 #2458 @namannandan
  2. Using TorchServe on SageMaker Inf2.24xlarge with Llama2-13B @lxning
  3. PyTorch tensor parallel on Llama2 example #2623 #2689 @HamidShojanazeri
  4. Enabled Better Transformer (i.e. Flash Attention 2) on Llama2 #2700 @HamidShojanazeri @lxning
  5. Llama2 Chatbot on Mac #2618 @agunapal
  6. ASR speech recognition example #2047 @husenzhang

Improvements

Documentation

Platform Support

Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support

Torch 2.1.0 + Cuda 11.8, 12.1
Torch 2.0.1 + Cuda 11.7
Torch 2.0.0 + Cuda 11.7
Torch 1.13 + Cuda 11.7
Torch 1.11 + Cuda 10.2, 11.3, 11.6
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2

TorchServe v0.8.2 Release Notes

28 Aug 23:20
04e0b37

This is the release of TorchServe v0.8.2.

Security

Custom metrics backwards compatibility

  • add_metric is now backwards compatible with versions [< v0.6.1] but the default metric type is inferred to be COUNTER. If the metric is of a different type, it will need to be specified in the call to add_metric as follows:
    metrics.add_metric(name='GenericMetric', value=10, unit='count', dimensions=[...], metric_type=MetricTypes.GAUGE)
  • When upgrading from versions [v0.6.1 - v0.8.1] to v0.8.2, replace the call to add_metric with add_metric_to_cache.
  • All custom metrics updated in the custom handler will need to be included in the metrics configuration file for them to be emitted by TorchServe. This is shown here.
  • A detailed upgrade guide is included in the metrics documentation.

New Features

New Examples

  1. Example Llama v2 70B chat using HuggingFace Accelerate #2494 @lxning @HamidShojanazeri @agunapal

  2. Large model example OPT-6.7B on Inferentia2 #2399 @namannandan

    • This example demonstrates how NeuronX compiles the model, detects Neuron core availability, and runs the inference.
  3. DeepSpeed deferred init with OPT-30B #2419 @agunapal

    • This PR added deferred model initialization to the OPT-30B example by leveraging a new DeepSpeed version, significantly reducing model loading latency.
  4. Torch TensorRT example #2483 @agunapal

    • This PR uses ResNet-50 as an example to demonstrate Torch TensorRT.
  5. K8S mnist example using minikube #2323 @agunapal

    • This example shows how to use a pre-trained custom MNIST model to perform real-time digit recognition on Kubernetes.
  6. Example for custom metrics #2516 @namannandan

  7. Example for object detection with ultralytics YOLO v8 model #2508 @agunapal

Improvements

Documentation

Platform Support

Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support

Torch 2.0.1 + Cuda 11.7, 11.8
Torch 2.0.0 + Cuda 11.7, 11.8
Torch 1.13 + Cuda 11.7, 11.8
Torch 1.11 + Cuda 10.2, 11.3, 11.6
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2

TorchServe v0.8.1 Release Notes

14 Jun 23:54
c2cdcfb

This is the release of TorchServe v0.8.1.

New Features

  1. Supported micro-batching in the handler to process a batch request from the frontend in parallel. #2210 @mreso

Because pre- and post-processing are often carried out on the CPU, the GPU sits idle until the two CPU-bound steps have executed and the worker receives a new batch. Micro-batching in the handler makes it possible to run inference, pre-processing and post-processing for a batch request from the frontend in parallel.

  2. Supported job ticket #2350 @lxning

This feature helps with use cases where inference latency can be high, such as generative models and autoregressive decoder models like ChatGPT. Based on business requirements, applications can take effective actions, for example routing the rejected request to a different server or scaling up model server capacity.

  3. Supported job queue size configuration per model #2350 @lxning (see the configuration sketch after this list)
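
A sketch of what these per-model settings might look like in model_config.yaml; the key names useJobTicket and jobQueueSize follow the model configuration documentation, but treat the exact spelling and values as assumptions:

    # model_config.yaml -- illustrative values only
    minWorkers: 2
    maxWorkers: 2
    batchSize: 4
    useJobTicket: true   # reject requests up front when no worker capacity is available
    jobQueueSize: 200    # per-model job queue depth instead of the global default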

New Examples

This example demonstrates creating creative content assisted by generative AI, using TorchServe on SageMaker MME.

Improvements

  • Upgraded to PyTorch 2.0.1 #2374 @namannandan

  • Significant reduction in Docker Image Size

    • Reduce GPU docker image size by 3GB #2392 @agunapal
    • Reduced dependency installation time and decreased docker image size #2364 @mreso
        GPU
        pytorch/torchserve   0.8.1-gpu   04eef250c14e   4 hours ago     2.34GB
        pytorch/torchserve   0.8.0-gpu   516bb13a3649   4 weeks ago     5.86GB
        pytorch/torchserve   0.6.0-gpu   fb6d4b85847d   12 months ago   2.13GB
      
        CPU
        pytorch/torchserve   0.8.1-cpu   68a3fcae81af   4 hours ago     662MB
        pytorch/torchserve   0.8.0-cpu   958ef6dacea2   4 weeks ago     2.37GB
        pytorch/torchserve   0.6.0-cpu   af91330a97bd   12 months ago   496MB
      
  • Updated CPU information for IPEX #2372 @min-jean-cho

  • Fixed inf2 example handler #2378 @namannandan

  • Added inf2 nightly benchmark #2283 @namannandan

  • Fixed archiver tgz format model directory structure mismatch on SageMaker #2405 @lxning

  • Fixed model archiver to fail if extra files are missing #2212 @mreso

  • Fixed device type setting in model config yaml #2408 @lxning

  • Fixed batchsize in config.properties not honored #2382 @lxning

  • Upgraded torchrun argument names and fixed backend tcp port connection #2377 @lxning

  • Fixed error thrown while loading multiple models in KServe #2235 @jagadeeshi2i

  • Fixed KServe fastapi migration issues #2175 @jagadeeshi2i

  • Added type annotation in model_server.py #2384 @josephcalise

  • Speed up unit test by removing sleep in start/stop torchserve #2383 @mreso

  • Removed cu118 from regression tests #2380 @agunapal

  • Enabled ONNX CI test #2363 @msaroufim

  • Removed session_mocker usage to prevent test cross talking #2375 @mreso

  • Enabled regression test in CI #2370 @msaroufim

  • Fixed regression test failures #2371 @namannandan

  • Bumped transformers version from 4.28.1 to 4.30.0 #2410

Documentation

Platform Support

Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support

Torch 2.0.1 + Cuda 11.7, 11.8
Torch 2.0.0 + Cuda 11.7, 11.8
Torch 1.13 + Cuda 11.7, 11.8
Torch 1.11 + Cuda 10.2, 11.3, 11.6
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2

TorchServe v0.8.0 Release Notes

12 May 23:01
35fb574

This is the release of TorchServe v0.8.0.

New Features

  1. Supported large model inference in a distributed environment #2193 #2320 #2209 #2215 #2310 #2218 @lxning @HamidShojanazeri

TorchServe added deep integration to support large model inference. It provides a PyTorch-native large model inference solution by integrating PiPPy, and offers the flexibility and extensibility to support other popular libraries such as Microsoft DeepSpeed and HuggingFace Accelerate.

  2. Supported streaming response for GRPC #2186 and HTTP #2233 @lxning

To improve the UX of generative AI inference, TorchServe allows sending intermediate token responses to the client side by supporting gRPC server-side streaming and HTTP 1.1 chunked encoding.

  3. Supported PyTorch/XLA on GPU and TPU #2182 @morgandu

By leveraging torch.compile, it is now possible to run TorchServe using XLA, which is optimized for both GPU and TPU deployments.

  4. Implemented New Metrics platform #2199 #2190 #2165 @namannandan @lxning

TorchServe fully supports metrics in Prometheus mode or Log mode. Both frontend and backend metrics can be configured in a central metrics YAML file.

  5. Supported map-based model config YAML file #2193 @lxning

Added a config-file option for model config to the model archiver tool. Users are able to flexibly define customized parameters in this YAML file and easily access them in the backend handler via the variable context.model_yaml_config (see the sketch below). This new feature also makes it easier for TorchServe to support other new features and enhancements.
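
A minimal sketch of how a user-defined parameter in model_config.yaml can be read from the backend handler via context.model_yaml_config; the handler section and parameter names here are hypothetical:

    # Suppose model_config.yaml contains user-defined keys such as:
    #
    #   handler:
    #     temperature: 0.7
    #     max_new_tokens: 128
    #
    # The backend handler can then read them via context.model_yaml_config:
    from ts.torch_handler.base_handler import BaseHandler


    class MyHandler(BaseHandler):
        def initialize(self, context):
            super().initialize(context)
            handler_cfg = context.model_yaml_config.get("handler", {})
            self.temperature = handler_cfg.get("temperature", 1.0)
            self.max_new_tokens = handler_cfg.get("max_new_tokens", 64)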

  6. Refactored PT2.0 support #2222 @msaroufim

We've refactored our model optimization utilities and improved logging to help debug compilation issues. We've also deprecated compile.json in favor of the new YAML config format; follow our guide at https://github.com/pytorch/serve/blob/master/examples/pt2/README.md to learn more. The main difference is that while archiving a model, instead of passing in compile.json via --extra-files, we can pass in --config-file model_config.yaml (see the archiving sketch below).
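
For example, the archiving step before and after that change might look like the following; the model name, serialized file and handler are placeholders:

    # Old (deprecated): compile settings shipped as an extra file
    torch-model-archiver --model-name resnet50 --version 1.0 \
        --serialized-file resnet50.pt --handler image_classifier \
        --extra-files compile.json

    # New: compile settings live in the model config YAML
    torch-model-archiver --model-name resnet50 --version 1.0 \
        --serialized-file resnet50.pt --handler image_classifier \
        --config-file model_config.yaml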

  7. Supported user-specified gpu deviceIds for a model #2193 @lxning

By default, TorchServe uses a round-robin algorithm to assign GPUs to a worker on a host. Starting from v0.8.0, TorchServe allows users to define deviceIds in model_config.yaml to assign specific GPUs to a model.

  8. Supported cpu model on a GPU host #2193 @lxning

TorchServe supports hybrid mode on a GPU host. Users are able to define deviceType in the model config YAML file to deploy a model on the CPU of a GPU host.

  9. Supported client timeout #2267 @lxning

TorchServe allows users to define clientTimeoutInMills in a model config YAML file. If clientTimeoutInMills is set, TorchServe calculates the expiration timestamp of an incoming inference request and drops the request once it expires.

  10. Updated ping endpoint default behavior #2254 @lxning

Supported maxRetryTimeoutInSec, which defines the maximum time window for recovering a dead backend worker of a model, in the model config YAML file. The default value is 5 minutes, and users are able to adjust it in the model config YAML file. The ping endpoint returns 200 if all models have enough healthy workers (i.e., equal to or greater than minWorkers); otherwise it returns 500. A combined per-model configuration sketch follows below.
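
Pulling these per-model options together, a hedged model_config.yaml sketch that uses only the parameter names mentioned above; the values are arbitrary:

    # model_config.yaml -- illustrative values only
    minWorkers: 2
    maxWorkers: 4
    deviceType: gpu             # or cpu, to run a model on CPU on a GPU host
    deviceIds: [0, 1]           # explicit GPU assignment instead of round-robin
    clientTimeoutInMills: 3000  # drop requests whose client deadline has passed
    maxRetryTimeoutInSec: 300   # max window for recovering a dead backend worker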

New Examples

Improvements

TorchServe can be used with Intel® Extension for PyTorch* to give a performance boost on Intel hardware. Intel® Extension for PyTorch* is a Python package extending PyTorch with up-to-date features and optimizations that take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI), Intel® Advanced Matrix Extensions (Intel® AMX), and more.


Enabling core pinning in the TorchServe CPU nightly benchmark shows a significant performance speedup. This feature is implemented via a script under the PyTorch Xeon backend, initiated from Intel® Extension for PyTorch*. To try out core pinning on your workload, add cpu_launcher_enable=true in config.properties.

To try out more optimizations with Intel® Extension for PyTorch*, install it and add ipex_enable=true in config.properties (see the snippet below).
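
Concretely, a minimal config.properties snippet enabling both of the flags mentioned above; everything else is left at its defaults:

    # config.properties
    ipex_enable=true
    cpu_launcher_enable=true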

In case of OOM, return error code 507 instead of the generic code 503.

a). Added wildcard file search in model archiver --extra-file #2142 @gustavhartz
b). Added zip-store option to model archiver tool #2196 @mreso
c). Made model archiver tests runnable from any directory #2191 @mreso
d). Supported tgz format model decompression in TorchServe frontend #2214 @lxning

Automatically flag deviation of metrics from the average of the last 30 runs

Dependency Upgrades

Documentation

This study compares TPS between TorchServe with and without Nvidia MPS enabled on P3 and G4 instances. It can help you decide whether to enable MPS for your deployment.

Platform Support

Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support

Torch 2.0.0 + Cuda 11.7, 11.8
Torch 1.13 + Cuda 11.7, 11.8
Torch 1.11 + Cuda 10.2, 11.3, 11.6
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2

TorchServe v0.7.1 Release Notes

09 Feb 00:00
5140113

This is the release of TorchServe v0.7.1.

Security

Dependency Upgrades

Improvements

Documentation

Deprecation

Platform Support

Ubuntu 16.04, Ubuntu 18.04, Ubuntu 20.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support

Torch 1.13 + Cuda 11.7
Torch 1.11 + Cuda 10.2, 11.3, 11.6
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2

TorchServe v0.7.0 Release Notes

13 Dec 22:15
7845403

This is the release of TorchServe v0.7.0.

New Examples

Better Transformer (Flash Attention and xFormers memory-efficient attention) provides out-of-the-box performance with major speedups for PyTorch Transformer encoders. This has been integrated into the TorchServe HuggingFace Transformer example; please read more about this integration here.

The main speedups in Better Transformer come from exploiting sparsity on padded inputs and from kernel fusions. As a result, you would see the biggest gains when dealing with larger workloads, such as sequences with longer padding and larger batch sizes.

In our benchmarks on P3 instances with 4 V100 GPUs, using TorchServe benchmarking workloads, throughput has shown significant improvement with large batch sizes: a 45.5% increase with batch size 8, 50.8% with batch size 16, 45.2% with batch size 32, 47.2% with batch size 64, and 17.2% with batch size 4. These numbers can vary based on your workload (batch size, padding percentage) and your hardware. Please look up some other benchmarks in the blog post.

We've added experimental support for PyTorch 2.0, i.e. torch.compile() support, within TorchServe. To use it, you need to supply a file compile.json when archiving your model to specify which backend you want (a sketch follows below). We've also enabled mode=reduce-overhead by default, which is ideally suited for the smaller batch sizes that are more common in inference. We recommend for now leveraging GPUs with tensor cores available, like A10G or A100, since you're likely to see the greatest speedups there.
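
As a rough illustration, compile.json for the default inductor backend might look like the following; the exact schema is an assumption based on the examples/pt2 README of this release, so check that guide for the authoritative format:

    # compile.json -- assumed schema; passed via --extra-files when archiving the model
    {"pt2": "inductor"}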

On training we've seen speedups ranging from 30% to 2x (https://pytorch.org/get-started/pytorch-2.0/), but we haven't run any performance benchmarks yet for inference. Until then, we recommend you continue leveraging other runtimes like TensorRT or IPEX for accelerated inference, which we highlight in our performance_guide.md. There are a few important caveats to consider when using torch.compile: changes in batch sizes will cause recompilations, so make sure to leverage a small batch size; there will be additional overhead to start a model since you need to compile it first; and you'll likely still see the largest speedups with TensorRT.

However, we hope that adding this support will make it easier for you to benchmark and try out PT 2.0. Learn more here https://github.com/pytorch/serve/tree/master/examples/pt2

Dependency Upgrades

Improvements

Documentation

Platform Support

Ubuntu 16.04, Ubuntu 18.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support

Torch 1.13 + Cuda 11.7
Torch 1.11 + Cuda 10.2, 11.3, 11.6
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2

TorchServe v0.6.1 Release Notes

14 Nov 20:15

This is the release of TorchServe v0.6.1.

New Features

New Examples

Dependency Upgrades

Improvements

Build and CI

Documentation

Deprecations

Platform Support

Ubuntu 16.04, Ubuntu 18.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above, and JDK 17.

GPU Support

Torch 1.11+ Cuda 10.2, 11.3, 11.6
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2

TorchServe v0.6.0 Release Notes

16 May 20:02

This is the release of TorchServe v0.6.0.

New Features

  • Support PyTorch 1.11 and Cuda 11.3 - Added support for PyTorch 1.11 and Cuda 11.3.
  • Universal Auto Benchmark and Dashboard Tool - Added a command line tool for the model analyzer to get a benchmark report (sample) and dashboard on any device.
  • HuggingFace model parallelism integration - Added an example for HuggingFace model parallelism integration.

Build and CI

  • Added nightly benchmark dashboard.
  • Migrated CI, nightly binary and docker builds to GitHub workflows.
  • Fixed the gpu regression test buildspec.yaml.

Documentation

Deprecations

  • Deprecated old benchmark/automated directory in favor of new Github Action based workflow

Improvements

  • Fixed workflow threads cleanup - Cleaned up the workflow inference threadpool.
  • Fixed empty model url - Fixed handling of an empty model url in the model archiver.
  • Fixed load model failure - Added support for loading a model directory.
  • HuggingFace text generation example - Added a text generation example.
  • Updated metrics json and qlog format log - Added support for metrics in json and qlog format logs in log4j2.
  • Added cpu, gpu and memory usage - Added cpu, gpu and memory usage to the benchmark-ab.py report.
  • Added exception for torch < 1.8.1 - Added an exception to notify when torch < 1.8.1 is used.
  • Replaced hard code in install_dependencies.py - Used sys.executable in install_dependencies.py.
  • Added default envelope for workflow - Added a default envelope in the model manager for workflows.
  • Fixed multiple docker build errors - Fixed the /home/venv write permission, fixed a typo in docker, and added common requirements in docker.
  • Fixed snapshot test - Fixed the snapshot test.
  • Updated model_zoo.md - Added dog breed, mmf and BERT to the model zoo.
  • Added nvgpu in common requirements - Added nvgpu to common dependencies.
  • Fixed Inference API ping response - Fixed a typo in the Inference API ping response.

Platform Support

Ubuntu 16.04, Ubuntu 18.04, MacOS 10.14+, Windows 10 Pro, Windows Server 2019, Windows Subsystem for Linux (Windows Server 2019, WSLv1, Ubuntu 18.04). TorchServe now requires Python 3.8 and above.

GPU Support

Torch 1.11+ Cuda 10.2, 11.3
Torch 1.9.0 + Cuda 11.1
Torch 1.8.1 + Cuda 9.2