From 1a07871d38a3f7a0c603f5361f0575a4901a0299 Mon Sep 17 00:00:00 2001
From: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>
Date: Thu, 25 Feb 2021 15:02:59 -0500
Subject: [PATCH 1/3] Update README.md

Temporarily removing references to 7x and blog. Will return after it's live.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index c0800485a5..bdb9ac2e12 100644
--- a/README.md
+++ b/README.md
@@ -56,7 +56,7 @@ This repository includes package APIs along with examples to quickly get started
 Sparsification is the process of taking a trained deep learning model and removing redundant information from the overprecise and over-parameterized network resulting in a faster and smaller model.
 Techniques for sparsification are all encompassing including everything from inducing sparsity using [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to enabling naturally occurring sparsity using [activation sparsity](http://proceedings.mlr.press/v119/kurtz20a.html) or [winograd/FFT](https://arxiv.org/abs/1509.09308).
 When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics.
-For example, pruning plus quantization can give over [7x improvements in performance](https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse) while recovering to nearly the same baseline accuracy.
+For example, pruning plus quantization can give over noticeable improvements in performance while recovering to nearly the same baseline accuracy.
 The Deep Sparse product suite builds on top of sparsification enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches.
 Recipes encode the directions for how to sparsify a model into a simple, easily editable format.
From 5b98acc44639113adc0cb1d10088c526e18da6b3 Mon Sep 17 00:00:00 2001
From: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>
Date: Thu, 25 Feb 2021 15:04:09 -0500
Subject: [PATCH 2/3] Update index.rst

Temporarily removing references to 7x and blog. Will return after it's live.
---
 docs/source/index.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/index.rst b/docs/source/index.rst
index 0a2344b29b..ef78b48a2a 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -62,7 +62,7 @@ Sparsification
 Sparsification is the process of taking a trained deep learning model and removing redundant information from the overprecise and over-parameterized network resulting in a faster and smaller model.
 Techniques for sparsification are all encompassing including everything from inducing sparsity using `pruning <https://neuralmagic.com/blog/pruning-overview/>`_ and `quantization <https://arxiv.org/abs/1609.07061>`_ to enabling naturally occurring sparsity using `activation sparsity <http://proceedings.mlr.press/v119/kurtz20a.html>`_ or `winograd/FFT <https://arxiv.org/abs/1509.09308>`_.
 When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics.
-For example, pruning plus quantization can give over `7x improvements in performance <https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse>`_ while recovering to nearly the same baseline accuracy.
+For example, pruning plus quantization can give noticeable improvements in performance while recovering to nearly the same baseline accuracy.
 The Deep Sparse product suite builds on top of sparsification enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches.
 Recipes encode the directions for how to sparsify a model into a simple, easily editable format.
@@ -131,4 +131,4 @@ Additionally, more information can be found via
 Bugs, Feature Requests
 Support, General Q&A
- Neural Magic Docs
\ No newline at end of file
+ Neural Magic Docs
From 61a21cc7cead503f6eda4a98ee95270ad021b27a Mon Sep 17 00:00:00 2001
From: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>
Date: Fri, 26 Feb 2021 09:31:30 -0500
Subject: [PATCH 3/3] Update README.md

remediated typo
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index bdb9ac2e12..a63c937031 100644
--- a/README.md
+++ b/README.md
@@ -56,7 +56,7 @@ This repository includes package APIs along with examples to quickly get started
 Sparsification is the process of taking a trained deep learning model and removing redundant information from the overprecise and over-parameterized network resulting in a faster and smaller model.
 Techniques for sparsification are all encompassing including everything from inducing sparsity using [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to enabling naturally occurring sparsity using [activation sparsity](http://proceedings.mlr.press/v119/kurtz20a.html) or [winograd/FFT](https://arxiv.org/abs/1509.09308).
 When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics.
-For example, pruning plus quantization can give over noticeable improvements in performance while recovering to nearly the same baseline accuracy.
+For example, pruning plus quantization can give noticeable improvements in performance while recovering to nearly the same baseline accuracy.
 The Deep Sparse product suite builds on top of sparsification enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches.
 Recipes encode the directions for how to sparsify a model into a simple, easily editable format.