From fe69c507b11192c87f23ab4f829e4906210f1a43 Mon Sep 17 00:00:00 2001
From: Mark Kurtz
Date: Wed, 2 Feb 2022 15:08:09 -0700
Subject: [PATCH 1/2] Update README.md

---
 integrations/huggingface-transformers/README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/integrations/huggingface-transformers/README.md b/integrations/huggingface-transformers/README.md
index 637d4a3a375..a78e96ff450 100644
--- a/integrations/huggingface-transformers/README.md
+++ b/integrations/huggingface-transformers/README.md
@@ -116,3 +116,5 @@ python transformers/examples/pytorch/question-answering/run_qa.py \
 
 The DeepSparse Engine [accepts ONNX formats](https://docs.neuralmagic.com/sparseml/source/onnx_export.html) and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
 Examples for loading, benchmarking, and deploying can be found in the [DeepSparse repository here](https://github.com/neuralmagic/deepsparse).
+
+Note, there is currently a known issue where conversion of the BERT models from PyTorch into ONNX is not preserving the accuracy of the model for some tasks and datasets. If you encounter this issue, try rolling back to the 0.9.0 release.
From 290c49f866a0756fac6ac82084d1228df74ad5fa Mon Sep 17 00:00:00 2001
From: Mark Kurtz
Date: Wed, 2 Feb 2022 15:28:21 -0700
Subject: [PATCH 2/2] Update integrations/huggingface-transformers/README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>
---
 integrations/huggingface-transformers/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/integrations/huggingface-transformers/README.md b/integrations/huggingface-transformers/README.md
index a78e96ff450..4485107b65a 100644
--- a/integrations/huggingface-transformers/README.md
+++ b/integrations/huggingface-transformers/README.md
@@ -117,4 +117,4 @@ python transformers/examples/pytorch/question-answering/run_qa.py \
 The DeepSparse Engine [accepts ONNX formats](https://docs.neuralmagic.com/sparseml/source/onnx_export.html) and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
 Examples for loading, benchmarking, and deploying can be found in the [DeepSparse repository here](https://github.com/neuralmagic/deepsparse).
 
-Note, there is currently a known issue where conversion of the BERT models from PyTorch into ONNX is not preserving the accuracy of the model for some tasks and datasets. If you encounter this issue, try rolling back to the 0.9.0 release.
+**Note: there is currently a known issue where conversion of the BERT models from PyTorch into ONNX is not preserving the accuracy of the model for some tasks and datasets. If you encounter this issue, try rolling back to the 0.9.0 release. As a resolution is being actively investigated, this note will be removed when the issue has been remediated.**