
Releases: neuralmagic/deepsparse

DeepSparse v1.7.1 Patch Release

19 Mar 20:04
639c9f7

This is a patch release for 1.7.0 that contains the following changes:

  • Detokenization has been fixed for streaming outputs with models that use sentencepiece-based tokenizers. (#1635)

DeepSparse v1.7.0

15 Mar 02:14
5fc5f73

New Features:

  • DeepSparse Pipelines v2 was introduced, enabling more complex pipelines to be represented. Text Generation (compatible with Hugging Face Transformers) and Image Classification pipelines have been refactored to the v2 format. (#1324, #1385, #1460, #1502, #1596, #1626)
  • OpenAI Server compatibility added on top of Pipelines v2. (#1445, #1477)
  • deepsparse.evaluate APIs and CLIs added with plugins for perplexity and lm-eval-harness for LLM evaluations. (#1596)
  • An example was added demonstrating how to use LLMPerf for benchmarking DeepSparse LLM servers. (#1502)
  • Continuous batching support has been added for text generation pipelines and inference server pathways, enabling inference over multiple text streams at once. (#1569, #1571)
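The idea behind continuous batching is that the engine advances every in-flight request by one decode step per iteration, admitting new requests and retiring finished ones on the fly instead of waiting for a whole batch to complete. The toy scheduler below illustrates that idea only; it is not DeepSparse's implementation, and the function and parameter names are hypothetical.

```python
# Toy illustration of continuous batching: every active stream advances one
# token per step; requests are admitted as slots free up and retired when done.
# NOT DeepSparse's implementation -- all names here are hypothetical.

def continuous_batching(requests, max_batch=2):
    """requests: list of (request_id, tokens_to_generate) pairs."""
    pending = list(requests)   # requests not yet admitted
    active = {}                # request_id -> tokens remaining
    finished = []              # completion order
    while pending or active:
        # Admit new requests as soon as a batch slot frees up.
        while pending and len(active) < max_batch:
            rid, n = pending.pop(0)
            active[rid] = n
        # One fused decode step advances every active stream by one token.
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                del active[rid]
                finished.append(rid)
    return finished

# Short requests finish early without blocking longer ones:
print(continuous_batching([("a", 3), ("b", 1), ("c", 2)]))  # ['b', 'a', 'c']
```

Note how request "b" completes after one step and its slot is immediately reused for "c", which is the throughput win over static batching.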

Changes:

  • Exposed sequence_length for greater control over text generation pipelines. (#1518)
  • deepsparse.analyze functionality has been updated to work properly with LLMs. (#1324)
  • The logging and timing infrastructure for Pipelines has been expanded to enable more thorough tracking and logging, and to further support integrations with Prometheus and other standard logging platforms. (#1614)
  • UX improved for text generation pipelines to more closely match Hugging Face Transformers pipelines. (#1583, #1584, #1590, #1592, #1598)

Resolved Issues:

  • The previously reported slow compile times for dense LLMs have been resolved.
  • Text generation pipeline bug fixes: corrected sampling logic errors and inappropriate in-place logits mutation resulting in incorrect answers for LLMs when using sampling. (#1406, #1414)
  • Fixed improper handling of the kv_cache input when using external KV cache management, which caused inaccurate model inference in ONNX Runtime comparison pathways. (#1337)
  • Benchmarking runs for LLMs with internal KV cache no longer crash or report inaccurate numbers. (#1512, #1514)
  • SciPy dependencies were removed to fix CV pipelines that crashed on import of scipy. (#1604, #1602)

Known Issues:

  • OPT models produce incorrect outputs and are no longer supported.
  • Streaming support is limited within the DeepSparse Pipeline v2 framework for tasks other than text generation.

DeepSparse v1.6.1 Patch Release

20 Dec 22:03
66e8e6f

This is a patch release for 1.6.0 that contains the following changes:

  • The Neural Magic DeepSparse Community License file has been renamed from LICENSE-NEURALMAGIC to LICENSE for higher visibility, both in the DeepSparse GitHub repository and in the C++ engine package tarball, deepsparse_api_demo.tar.gz. (#1485)

Known Issues:

  • The compile time for dense LLMs can be very slow. This will be addressed in a forthcoming release.
  • Docker images are not currently being pushed. A fix for functional Docker builds is forthcoming. [RESOLVED]

DeepSparse v1.6.0

11 Dec 21:22
e94dcac

New Features:

Changes:

  • DeepSparse upgraded for the SparseZoo V2 model file structure changes, which expands the number of supported files and reduces the number of bytes that need to be downloaded for model checkpoints, folders, and files. (#1233, #1234, #1303, #1318)

  • YOLOv5 deployment pipelines now install from the nm-yolov5 package on PyPI, removing the autoinstall from the nm-yolov5 GitHub repository that previously ran on invocation of the relevant pathways and enabling more predictable environments. (#1030, #1101, #1129, #1111, #1167)

  • Docker builds are updated to consistently rebuild for new releases and nightlies. (#1012, #1068, #1069, #1113, #1144)

  • Torchvision deployment pipelines have been upgraded to support 0.14.x. (#1034)

  • README and documentation updated to cover the Slack Community name change, the new Contact Us form, and Python version changes; to fix broken YOLOv5, torchvision, transformers, and SparseZoo links; and to correct installation commands. (#1041, #1042, #1043, #1039, #1048, #931, #960, #1279, #1282, #1280, #1313)

  • Python 3.7 is now deprecated. (#1060, #1148)

  • ONNX utilities are updated so that ONNX model arguments can be passed as either a model file path (past behavior) or an ONNX ModelProto Python object. (#1089)
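The dual-input behavior described above amounts to a type dispatch at the top of each utility. The sketch below is purely illustrative: `load_model` and `DummyModelProto` are hypothetical stand-ins, while the real utilities operate on `onnx.ModelProto` instances.

```python
# Illustrative sketch of accepting either a model file path or an in-memory
# model object, mirroring the updated ONNX utilities' behavior.
# `load_model` and `DummyModelProto` are hypothetical stand-ins for this sketch.

class DummyModelProto:
    """Stand-in for onnx.ModelProto, so this sketch runs without onnx."""
    def __init__(self, name):
        self.name = name

def load_model(model):
    if isinstance(model, str):
        # Past behavior: the argument is a file path to load from.
        return DummyModelProto(name=model)
    if isinstance(model, DummyModelProto):
        # New behavior: an already-loaded model object passes through as-is.
        return model
    raise TypeError(f"expected a path or model object, got {type(model)}")

print(load_model("model.onnx").name)          # resolved from a path
print(load_model(DummyModelProto("m")).name)  # passed through unchanged
```

Accepting either form lets callers chain in-memory graph transformations without round-tripping through the filesystem.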

  • Deployment directories containing a model.onnx file now load properly for all pipelines supported by DeepSparse Server. Previously, the exact path to the model.onnx file had to be supplied rather than a deployment directory. (#1131)

  • Flake8 updated to 6.1 to enable the latest standards for running make quality. (#1156)

  • Automatic link checking has been added to GitHub actions. (#1226)

  • DeepSparse Pipeline has been made printable: __str__ and __repr__ are now implemented and show useful information when a pipeline is printed. (#1298)

  • nm-transformers package has been fully removed and replaced with the native transformers package that works with DeepSparse. (#1302)

Performance and Compression Improvements:

  • The memory footprint used during model compilation for models with external weights has been greatly reduced.
  • The memory footprint has been reduced by sharing weights between compiled engines, for example, when using bucketing.
  • Matrix-Vector Multiplication (GEVM) with a sparse weight matrix is now supported for both performance and reduced memory footprint.
  • Matrix-Matrix Multiplication (GEMM) with a sparse weight matrix is further optimized for performance and reduced memory footprint.
  • AVX2-VNNI instructions are now used to improve the performance of DeepSparse.
  • Grouped Query Attention (GQA) in transformers is now optimized.
  • Improved performance of Gathers with constant data and dynamic indices, like the ones used for embeddings in transformers and recommendation models.
  • The InstanceNormalization operator is now supported for performance.
  • The Where operator has improved performance in some cases by fusing it onto other operators.
  • The Clip operator is now supported for performance with operands of any data type.
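The sparse GEVM/GEMM kernels mentioned above rest on a simple idea: store and multiply only the nonzero weights. A minimal CSR (compressed sparse row) matrix-vector sketch of that idea, purely illustrative and unrelated to the engine's actual optimized kernels:

```python
# Minimal CSR sketch of sparse matrix-vector multiplication (GEVM):
# only nonzero weights are stored and multiplied. Illustrative only --
# the engine's real kernels are heavily optimized native code.

def csr_matvec(values, col_indices, row_ptr, x):
    """Compute y = A @ x for a CSR matrix A with len(row_ptr) - 1 rows."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # row_ptr[i]:row_ptr[i+1] spans the nonzeros of row i.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_indices[k]]
    return y

# A = [[2, 0, 0],
#      [0, 0, 3],
#      [0, 1, 0]]
values, col_indices, row_ptr = [2.0, 3.0, 1.0], [0, 2, 1], [0, 1, 2, 3]
print(csr_matvec(values, col_indices, row_ptr, [1.0, 2.0, 3.0]))  # [2.0, 9.0, 2.0]
```

Skipping the zeros cuts both the arithmetic and the memory traffic, which is why sparsity helps latency and footprint at once.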

Resolved Issues:

  • Assertion failures for GEMM operations with broadcast-stacked dimensions have been resolved.
  • Unit and integration tests updated to clean up temporary test files, which were not being properly deleted, and to limit test file creation. (#1058)
  • deepsparse.benchmark was failing with AttributeError when the -shapes argument was supplied, causing no benchmarks to be measured. (#1071)
  • DeepSparse Server with a model.onnx file in the model directory no longer raises an exception for image classification pipelines. (#1070)
  • The generate_random_inputs function no longer creates random data with zero-sized shapes when ONNX files containing dynamic dimensions are given. (#1086)
  • Pydantic version pinned to <2.0, preventing NameErrors from being raised whenever pipelines are constructed. (#1104)
  • AWS Lambda serverless examples and implementations updated to avoid exceptions being thrown while running inference in AWS Lambda. (#1115)
  • DeepSparse Pipelines: if num_cores was not supplied as an explicit kwarg for a bucketing pipeline, a KeyError would be raised. The pipeline now works correctly without num_cores being explicitly supplied. (#1152)
  • eval_downstream for Transformers pathways no longer fails due to a missing PyTorch installation. The fix removes the PyTorch dependency, and the evaluation now runs through correctly. (#1187)
  • Reliability for unit test test_pipeline_call_is_async has been improved to produce consistent test results. (#1251, #1264, #1267)
  • Torchvision previously needed to be installed for any tests to pass, including transformers and other unrelated pipelines. If it was not installed, then the tests would fail with an import error. (#1251)

Known Issues:

  • The compile time for dense LLMs can be very slow. This will be addressed in a forthcoming release.
  • Docker images are not currently being pushed. A fix for functional Docker builds is forthcoming. [RESOLVED]

DeepSparse v1.5.3 Patch Release

23 Aug 18:38
373e041

This is a patch release for 1.5.0 that contains the following changes:

  • A rare segmentation fault on AVX2 systems has been fixed. This could have happened when an input to the network is quantized.

DeepSparse v1.5.2 Patch Release

06 Jul 03:28
42c857c

This is a patch release for 1.5.0 that contains the following changes:

  • Pinned dependency Pydantic, a data validation library for Python, to < v2.0, to prevent current workflows from breaking. Pydantic upgrade planned for future release. (#1107)

DeepSparse v1.5.1 Patch Release

21 Jun 20:22
9d4bc0b

This is a patch release for 1.5.0 that contains the following changes:

  • The latest datasets versions supported by the 1.5 transformers integration are incompatible with pandas 2.0, so pandas is now restricted to < 2.0. Future releases will support later datasets versions. (#1074)

DeepSparse v1.5.0

07 Jun 05:18
22208e5

New Features:

  • ONNX evaluation pipeline for OpenPifPaf (#915)
  • YOLOv8 segmentation pipelines and validation (#924)
  • deepsparse.benchmark_sweep CLI to enable sweeps of benchmarks across different settings such as cores and batch sizes (#860)
  • Engine.generate_random_inputs() API (#966)
  • Example data logging configurations for pipelines/server (#867)
  • Expanded built-in functions for NLP and CV pipeline logging to enable better monitoring (#865) (#862)
  • Product usage analytics tracking in DeepSparse Community edition (documentation)

Performance Improvements:

  • Inference latency for unstructured sparse-quantized CNNs has been improved by up to 2x.
  • Inference throughput and latency for dense CNNs has been improved by up to 20%.
  • Inference throughput and latency for dense transformers has been improved by up to 30%.
  • The following operators are now supported for performance:
    • Neg, Unsqueeze with non-constant inputs
    • MatMulInteger with two non-constant inputs
    • GEMM with constant weights and 4D or 5D inputs

Changes:

  • Transformers and YOLOv5 integrations migrated from auto install to install from PyPI packages. Going forward, pip install deepsparse[transformers] and pip install deepsparse[yolov5] will need to be used.
  • DeepSparse now uses hwloc to determine CPU topology. This fixes a bug where DeepSparse could not be used performantly inside of a Kubernetes cluster with a static CPU manager policy.
  • When users pass in a num_streams parameter that is smaller than the number of cores, multi-stream and elastic scheduler behaviors have been improved. Previously, DeepSparse would divide the system into num_streams chunks and fill each chunk until it ran out of threads. Now, each stream will use a number of threads equal to num_cores divided by num_streams, with the remainder distributed in a round-robin fashion.
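The new assignment rule above is easy to state in code: each stream gets the floor of num_cores / num_streams threads, and the remainder is handed out round-robin to the first streams. A sketch of that arithmetic (the function name is ours, not a DeepSparse API):

```python
# Sketch of the thread-assignment rule described above: base share per stream,
# remainder distributed round-robin. Illustrative only; not a DeepSparse API.

def threads_per_stream(num_cores, num_streams):
    base, remainder = divmod(num_cores, num_streams)
    # The first `remainder` streams each get one extra thread.
    return [base + (1 if i < remainder else 0) for i in range(num_streams)]

print(threads_per_stream(16, 3))  # [6, 5, 5]
print(threads_per_stream(10, 4))  # [3, 3, 2, 2]
```

Unlike the old chunk-and-fill scheme, every stream ends up within one thread of every other, so no stream is starved when num_streams does not divide num_cores evenly.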

Resolved Issues:

  • In networks with a Clip operator where min isn't equal to zero, performance bugs no longer occur.

  • Crashing eliminated:

    • Pipeline conll eval using ignore_labels. (#903)
    • YOLOv8 pipelines handling models with dynamic inputs. (#967)
    • QA pipelines with sequence lengths equal to or less than 128. (#889)
    • Image classification pipelines handling PNG images. (#870)
    • ONNX shape overriding when a list was not passed in; the argument is now automatically wrapped in a list. (#914)
  • Assertion errors/failures removed:

    • Networks with both Convolutions and GEMM operations.
    • YOLOv8 model compilation.
    • Slice and Unsqueeze operators with a negative axis.
    • OPT models involving a constant tensor that is broadcast in two different ways.

Known Issues:

  • None

DeepSparse v1.4.2 Patch Release

31 Mar 20:01
22208e5

This is a patch release for 1.4.0 that contains the following changes:

  • Fallback support for YOLOv5 models with dynamic input shapes provided (not recommended pathway). (#971)
  • Loading of the system logging configuration has been fixed. (#858)

DeepSparse v1.4.1 Patch Release

23 Mar 16:37
22208e5

This is a patch release for 1.4.0 that contains the following changes:

  • Bounding boxes for YOLOv5 pipelines now scale correctly with detection boxes. (#881)