diff --git a/README.md b/README.md index 307a07cd5c..c0800485a5 100644 --- a/README.md +++ b/README.md @@ -16,7 +16,7 @@ limitations under the License. # ![icon for DeepSparse](https://raw.githubusercontent.com/neuralmagic/deepsparse/main/docs/source/icon-deepsparse.png) DeepSparse Engine -### CPU inference engine that delivers unprecedented performance for sparse models +### Neural network inference engine that delivers GPU-class performance for sparsified models on CPUs

GitHub @@ -46,20 +46,29 @@ limitations under the License. ## Overview -The DeepSparse Engine is a CPU runtime that delivers unprecedented performance by taking advantage of natural sparsity within neural networks to reduce compute required as well as accelerate memory bound workloads. It is focused on model deployment and scaling machine learning pipelines, fitting seamlessly into your existing deployments as an inference backend. +The DeepSparse Engine is a CPU runtime that delivers GPU-class performance by taking advantage of sparsity within neural networks to reduce compute required as well as accelerate memory-bound workloads. +It is focused on model deployment and scaling machine learning pipelines, fitting seamlessly into your existing deployments as an inference backend. -This repository includes package APIs along with examples to quickly get started learning about and actually running sparse models. +This repository includes package APIs along with examples to quickly get started benchmarking and running inference on sparse models. -### Related Products +## Sparsification -- [SparseZoo](https://github.com/neuralmagic/sparsezoo): - Neural network model repository for highly sparse models and optimization recipes -- [SparseML](https://github.com/neuralmagic/sparseml): - Libraries for state-of-the-art deep neural network optimization algorithms, - enabling simple pipelines integration with a few lines of code -- [Sparsify](https://github.com/neuralmagic/sparsify): - Easy-to-use autoML interface to optimize deep neural networks for - better inference performance and a smaller footprint +Sparsification is the process of taking a trained deep learning model and removing redundant information from the overprecise and over-parameterized network, resulting in a faster and smaller model. +Techniques for sparsification are all-encompassing, including everything from inducing sparsity using [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to enabling naturally occurring sparsity using [activation sparsity](http://proceedings.mlr.press/v119/kurtz20a.html) or [winograd/FFT](https://arxiv.org/abs/1509.09308). +When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics. +For example, pruning plus quantization can give over [7x improvements in performance](https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse) while recovering to nearly the same baseline accuracy. + +The Deep Sparse product suite builds on top of sparsification, enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches. +Recipes encode the directions for how to sparsify a model into a simple, easily editable format. +- Download a sparsification recipe and sparsified model from the [SparseZoo](https://github.com/neuralmagic/sparsezoo). +- Alternatively, create a recipe for your model using [Sparsify](https://github.com/neuralmagic/sparsify). +- Apply your recipe with only a few lines of code using [SparseML](https://github.com/neuralmagic/sparseml), as sketched after this list. +- Finally, for GPU-level performance on CPUs, deploy your sparse-quantized model with the [DeepSparse Engine](https://github.com/neuralmagic/deepsparse).
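The SparseML step of this flow, for instance, amounts to loading a recipe and wrapping your training optimizer with it. Below is a minimal sketch, assuming SparseML's PyTorch recipe interface (`ScheduledModifierManager` and `ScheduledOptimizer`); the toy model, data, and the local `recipe.yaml` path are illustrative stand-ins, not part of this repository:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from sparseml.pytorch.optim import ScheduledModifierManager, ScheduledOptimizer

# toy stand-ins for a real model, optimizer, and dataset
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_loader = DataLoader(
    TensorDataset(torch.randn(64, 8), torch.randint(0, 2, (64,))), batch_size=8
)

# load the sparsification recipe and wrap the optimizer so the recipe's
# pruning/quantization modifiers are stepped on schedule during training
manager = ScheduledModifierManager.from_yaml("recipe.yaml")  # hypothetical recipe path
optimizer = ScheduledOptimizer(
    optimizer, model, manager, steps_per_epoch=len(train_loader)
)
# train as usual: each optimizer.step() also advances the recipe schedule
```

From there, exporting the sparsified model to ONNX and compiling it with `compile_model` (shown in the Quick Tour below) completes the flow.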
+ + +**Full Deep Sparse product flow:** + + ## Compatibility @@ -67,21 +76,22 @@ The DeepSparse Engine ingests models in the [ONNX](https://onnx.ai/) format, all ## Quick Tour -To expedite inference and benchmarking on real models, we include the `sparsezoo` package. [SparseZoo](https://github.com/neuralmagic/sparsezoo) hosts inference optimized models, trained on repeatable optimization recipes using state-of-the-art techniques from [SparseML](https://github.com/neuralmagic/sparseml). +To expedite inference and benchmarking on real models, we include the `sparsezoo` package. [SparseZoo](https://github.com/neuralmagic/sparsezoo) hosts inference-optimized models, trained on repeatable sparsification recipes using state-of-the-art techniques from [SparseML](https://github.com/neuralmagic/sparseml). ### Quickstart with SparseZoo ONNX Models -**MobileNetV1 Dense** +**ResNet-50 Dense** -Here is how to quickly perform inference with DeepSparse Engine on a pre-trained dense MobileNetV1 from SparseZoo. +Here is how to quickly perform inference with the DeepSparse Engine on a pre-trained dense ResNet-50 from SparseZoo. ```python from deepsparse import compile_model from sparsezoo.models import classification + batch_size = 64 # Download model and compile as optimized executable for your machine -model = classification.mobilenet_v1() +model = classification.resnet_50() engine = compile_model(model, batch_size=batch_size) # Fetch sample input and predict output using engine @@ -89,44 +99,68 @@ inputs = model.data_inputs.sample_batch(batch_size=batch_size) outputs, inference_time = engine.timed_run(inputs) ``` -**MobileNetV1 Optimized** +**ResNet-50 Sparsified** When exploring available optimized models, you can use the `Zoo.search_optimized_models` utility to find models that share a base. -Let us try this on the dense MobileNetV1 to see what is available. +Try this on the dense ResNet-50 to see what is available: ```python from sparsezoo import Zoo from sparsezoo.models import classification -print(Zoo.search_optimized_models(classification.mobilenet_v1())) + +model = classification.resnet_50() +print(Zoo.search_optimized_models(model)) ``` Output: ```shell -[Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/base-none), - Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned-conservative), - Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned-moderate), - Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned_quant-moderate)] +[ + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-conservative), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-moderate), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet-augmented/pruned_quant-aggressive) +] ``` -Great. We can see there are two pruned versions targeting FP32, `conservative` at 100% and `moderate` at >= 99% of baseline accuracy. There is also a `pruned_quant` variant targetting INT8. +We can see there are two pruned versions targeting FP32 and two pruned, quantized versions targeting INT8. +The `conservative`, `moderate`, and `aggressive` tags recover to 100%, >=99%, and <99% of baseline accuracy, respectively.
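Each entry above maps to a `zoo:`-prefixed stub string that `compile_model` accepts directly, as the benchmarking example below does with hard-coded stubs. Here is a small sketch of building those stubs programmatically, assuming the `stub` attribute implied by the repr output above:

```python
from sparsezoo import Zoo
from sparsezoo.models import classification

# collect "zoo:" stubs for every optimized ResNet-50 variant found above
model = classification.resnet_50()
stubs = [f"zoo:{found.stub}" for found in Zoo.search_optimized_models(model)]
print(stubs)
```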
-Let's say you want to evaluate best performance on FP32 and are okay with a small drop in accuracy, so we can choose `pruned-moderate` over `pruned-conservative`. +For a version of ResNet-50 that recovers close to the baseline and is very performant, choose the `pruned_quant-moderate` model. +This model will run [nearly 7x faster](https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse) than the baseline model on a compatible CPU (with the VNNI instruction set enabled). +For hardware compatibility, see the Hardware Support section. ```python from deepsparse import compile_model -from sparsezoo.models import classification -batch_size = 64 - -model = classification.mobilenet_v1(optim_name="pruned", optim_category="moderate") -engine = compile_model(model, batch_size=batch_size) +import numpy -inputs = model.data_inputs.sample_batch(batch_size=batch_size) -outputs, inference_time = engine.timed_run(inputs) +batch_size = 64 +sample_inputs = [numpy.random.randn(batch_size, 3, 224, 224).astype(numpy.float32)] + +# run baseline benchmarking +engine_base = compile_model( + model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none", + batch_size=batch_size, +) +benchmarks_base = engine_base.benchmark(sample_inputs) +print(benchmarks_base) + +# run sparse benchmarking +engine_sparse = compile_model( + model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate", + batch_size=batch_size, +) +if not engine_sparse.cpu_vnni: + print("WARNING: VNNI instructions not detected, quantization speedup not well supported") +benchmarks_sparse = engine_sparse.benchmark(sample_inputs) +print(benchmarks_sparse) + +print(f"Speedup: {benchmarks_sparse.items_per_second / benchmarks_base.items_per_second:.2f}x") ``` -### Quickstart with custom ONNX models +### Quickstart with Custom ONNX Models We accept ONNX files for custom models, too. Simply plug in your model to compare performance with other solutions. diff --git a/docs/source/index.rst index 27f7253e5e..0a2344b29b 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -17,7 +17,7 @@ DeepSparse |version| ==================== -CPU inference engine that delivers unprecedented performance for sparse models. +Neural network inference engine that delivers GPU-class performance for sparsified models on CPUs .. raw:: html @@ -51,54 +51,59 @@ CPU inference engine that delivers unprecedented performance for sparse models. Overview ======== -The DeepSparse Engine is a CPU runtime that delivers unprecedented performance by taking advantage of -natural sparsity within neural networks to reduce compute required as well as accelerate memory bound workloads. -It is focused on model deployment and scaling machine learning pipelines, -fitting seamlessly into your existing deployments as an inference backend. +The DeepSparse Engine is a CPU runtime that delivers GPU-class performance by taking advantage of sparsity within neural networks to reduce compute required as well as accelerate memory-bound workloads. +It is focused on model deployment and scaling machine learning pipelines, fitting seamlessly into your existing deployments as an inference backend. -`This repository `_ includes package APIs along with examples to quickly get started learning about and -actually running sparse models. +`This repository <https://github.com/neuralmagic/deepsparse>`_ includes package APIs along with examples to quickly get started benchmarking and running inference on sparse models.
+ +Sparsification +============== + +Sparsification is the process of taking a trained deep learning model and removing redundant information from the overprecise and over-parameterized network, resulting in a faster and smaller model. +Techniques for sparsification are all-encompassing, including everything from inducing sparsity using `pruning <https://neuralmagic.com/blog/pruning-overview/>`_ and `quantization <https://arxiv.org/abs/1609.07061>`_ to enabling naturally occurring sparsity using `activation sparsity <http://proceedings.mlr.press/v119/kurtz20a.html>`_ or `winograd/FFT <https://arxiv.org/abs/1509.09308>`_. +When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics. +For example, pruning plus quantization can give over `7x improvements in performance <https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse>`_ while recovering to nearly the same baseline accuracy. + +The Deep Sparse product suite builds on top of sparsification, enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches. +Recipes encode the directions for how to sparsify a model into a simple, easily editable format. +- Download a sparsification recipe and sparsified model from the `SparseZoo <https://github.com/neuralmagic/sparsezoo>`_. +- Alternatively, create a recipe for your model using `Sparsify <https://github.com/neuralmagic/sparsify>`_. +- Apply your recipe with only a few lines of code using `SparseML <https://github.com/neuralmagic/sparseml>`_. +- Finally, for GPU-level performance on CPUs, deploy your sparse-quantized model with the `DeepSparse Engine <https://github.com/neuralmagic/deepsparse>`_. + + +**Full Deep Sparse product flow:** + + Compatibility ============= -The DeepSparse Engine ingests models in the `ONNX `_ format, -allowing for compatibility with `PyTorch `_, -`TensorFlow `_, `Keras `_, -and `many other frameworks `_ that support it. +The DeepSparse Engine ingests models in the `ONNX <https://onnx.ai/>`_ format, +allowing for compatibility with `PyTorch <https://pytorch.org/>`_, +`TensorFlow <https://www.tensorflow.org/>`_, `Keras <https://keras.io/>`_, +and `many other frameworks `_ that support it. This reduces the extra work of preparing your trained model for inference to just one step of exporting. -Related Products -================ - -- `SparseZoo `_: - Neural network model repository for highly sparse models and optimization recipes -- `SparseML `_: - Libraries for state-of-the-art deep neural network optimization algorithms, - enabling simple pipelines integration with a few lines of code -- `Sparsify `_: - Easy-to-use autoML interface to optimize deep neural networks for - better inference performance and a smaller footprint - Resources and Learning More =========================== -- `SparseZoo Documentation `_ -- `SparseML Documentation `_ -- `Sparsify Documentation `_ -- `Neural Magic Blog `_, - `Resources `_, - `Website `_ +- `SparseZoo Documentation <https://docs.neuralmagic.com/sparsezoo/>`_ +- `SparseML Documentation <https://docs.neuralmagic.com/sparseml/>`_ +- `Sparsify Documentation <https://docs.neuralmagic.com/sparsify/>`_ +- `Neural Magic Blog <https://www.neuralmagic.com/blog/>`_, + `Resources <https://www.neuralmagic.com/resources/>`_, + `Website <https://www.neuralmagic.com/>`_ Release History =============== Official builds are hosted on PyPi -- stable: `deepsparse `_ -- nightly (dev): `deepsparse-nightly `_ +- stable: `deepsparse <https://pypi.org/project/deepsparse/>`_ +- nightly (dev): `deepsparse-nightly <https://pypi.org/project/deepsparse-nightly/>`_ Additionally, more information can be found via -`GitHub Releases `_. +`GitHub Releases <https://github.com/neuralmagic/deepsparse/releases>`_. .. toctree:: :maxdepth: 3 diff --git a/docs/source/quicktour.md index 432c267f70..7bb3c94a1a 100644 --- a/docs/source/quicktour.md +++ b/docs/source/quicktour.md @@ -16,24 +16,22 @@ limitations under the License. ## Quick Tour -To expedite inference and benchmarking on real models, we include the `sparsezoo` package.
-[SparseZoo](https://github.com/neuralmagic/sparsezoo) hosts inference optimized models, -trained on repeatable optimization recipes using state-of-the-art techniques from -[SparseML](https://github.com/neuralmagic/sparseml). +To expedite inference and benchmarking on real models, we include the `sparsezoo` package. [SparseZoo](https://github.com/neuralmagic/sparsezoo) hosts inference-optimized models, trained on repeatable sparsification recipes using state-of-the-art techniques from [SparseML](https://github.com/neuralmagic/sparseml). ### Quickstart with SparseZoo ONNX Models -**MobileNetV1 Dense** +**ResNet-50 Dense** -Here is how to quickly perform inference with DeepSparse Engine on a pre-trained dense MobileNetV1 from SparseZoo. +Here is how to quickly perform inference with the DeepSparse Engine on a pre-trained dense ResNet-50 from SparseZoo. ```python from deepsparse import compile_model from sparsezoo.models import classification + batch_size = 64 # Download model and compile as optimized executable for your machine -model = classification.mobilenet_v1() +model = classification.resnet_50() engine = compile_model(model, batch_size=batch_size) # Fetch sample input and predict output using engine @@ -41,46 +39,68 @@ inputs = model.data_inputs.sample_batch(batch_size=batch_size) outputs, inference_time = engine.timed_run(inputs) ``` -**MobileNetV1 Optimized** +**ResNet-50 Sparsified** -When exploring available optimized models, you can use the `Zoo.search_optimized_models` -utility to find models that share a base. +When exploring available optimized models, you can use the `Zoo.search_optimized_models` utility to find models that share a base. -Let us try this on the dense MobileNetV1 to see what is available. +Try this on the dense ResNet-50 to see what is available: ```python from sparsezoo import Zoo from sparsezoo.models import classification -print(Zoo.search_optimized_models(classification.mobilenet_v1())) + +model = classification.resnet_50() +print(Zoo.search_optimized_models(model)) ``` + Output: -``` -[Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/base-none), - Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned-conservative), - Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned-moderate), - Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned_quant-moderate)] + +```shell +[ + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-conservative), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-moderate), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate), + Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet-augmented/pruned_quant-aggressive) +] ``` -Great. We can see there are two pruned versions targeting FP32, -`conservative` at 100% and `moderate` at >= 99% of baseline accuracy. -There is also a `pruned_quant` variant targeting INT8. +We can see there are two pruned versions targeting FP32 and two pruned, quantized versions targeting INT8. +The `conservative`, `moderate`, and `aggressive` tags recover to 100%, >=99%, and <99% of baseline accuracy, respectively. -Let's say you want to evaluate best performance on FP32 and are okay with a small drop in accuracy, -so we can choose `pruned-moderate` over `pruned-conservative`.
+For a version of ResNet-50 that recovers close to the baseline and is very performant, choose the `pruned_quant-moderate` model. +This model will run [nearly 7x faster](https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse) than the baseline model on a compatible CPU (with the VNNI instruction set enabled). +For hardware compatibility, see the Hardware Support section. ```python from deepsparse import compile_model -from sparsezoo.models import classification -batch_size = 64 - -model = classification.mobilenet_v1(optim_name="pruned", optim_category="moderate") -engine = compile_model(model, batch_size=batch_size) +import numpy -inputs = model.data_inputs.sample_batch(batch_size=batch_size) -outputs, inference_time = engine.timed_run(inputs) +batch_size = 64 +sample_inputs = [numpy.random.randn(batch_size, 3, 224, 224).astype(numpy.float32)] + +# run baseline benchmarking +engine_base = compile_model( + model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none", + batch_size=batch_size, +) +benchmarks_base = engine_base.benchmark(sample_inputs) +print(benchmarks_base) + +# run sparse benchmarking +engine_sparse = compile_model( + model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate", + batch_size=batch_size, +) +if not engine_sparse.cpu_vnni: + print("WARNING: VNNI instructions not detected, quantization speedup not well supported") +benchmarks_sparse = engine_sparse.benchmark(sample_inputs) +print(benchmarks_sparse) + +print(f"Speedup: {benchmarks_sparse.items_per_second / benchmarks_base.items_per_second:.2f}x") ``` -### Quickstart with custom ONNX models +### Quickstart with Custom ONNX Models We accept ONNX files for custom models, too. Simply plug in your model to compare performance with other solutions. diff --git a/notebooks/classification.ipynb index 7b090b4926..fce1d9324e 100644 --- a/notebooks/classification.ipynb +++ b/notebooks/classification.ipynb @@ -63,11 +63,11 @@ "source": [ "## Gathering the Model and Data\n", "\n", - "By default, you will download a MobileNetV1 model trained on the ImageNet dataset.\n", + "By default, you will download a sparsified ResNet-50 model trained on the ImageNet dataset.\n", "The model's pretrained weights and exported ONNX file are downloaded from the SparseZoo model repo.\n", "The sample batch of data is downloaded from SparseZoo as well.\n", "\n", - "If you want to try different architectures replace `mobilenet_v1()` with your choice, for example: `resnet50()` or `efficientnet_b0()`.\n", + "If you want to try different architectures, replace `resnet_50()` with your choice, for example: `mobilenet_v1()` or `efficientnet_b0()`.\n", "\n", "You may also want to try different batch sizes to evaluate accuracy and performance for your task."
] @@ -95,7 +95,7 @@ "# Define your model below\n", "# =====================================================\n", "print(\"Downloading model...\")\n", - "model = classification.mobilenet_v1()\n", + "model = classification.resnet_50(optim_name=\"pruned_quant\", optim_category=\"moderate\")\n", "\n", "# Gather sample batch of data for inference and visualization\n", "batch = model.sample_batch(batch_size=batch_size)\n", @@ -276,9 +276,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.9" + "version": "3.6.8" } }, "nbformat": 4, "nbformat_minor": 4 -} +} \ No newline at end of file diff --git a/setup.py b/setup.py index 349a45d214..a8a1c88849 100644 --- a/setup.py +++ b/setup.py @@ -114,7 +114,10 @@ def _setup_long_description() -> Tuple[str, str]: version=_VERSION, author="Neuralmagic, Inc.", author_email="support@neuralmagic.com", - description="CPU runtime that delivers unprecedented performance for sparse models", + description=( + "Neural network inference engine that delivers GPU-class performance " + "for sparsified models on CPUs" + ), long_description=_setup_long_description()[0], long_description_content_type=_setup_long_description()[1], keywords=( diff --git a/src/deepsparse/engine.py b/src/deepsparse/engine.py index dbe909fdf0..763526120f 100644 --- a/src/deepsparse/engine.py +++ b/src/deepsparse/engine.py @@ -21,9 +21,9 @@ from typing import Dict, Iterable, List, Optional, Tuple, Union import numpy +from tqdm.auto import tqdm from deepsparse.benchmark import BenchmarkResults -from tqdm.auto import tqdm try: