Fix invalid link to TensorFlow quantization doc (#5483)
Summary:
The TensorFlow quantization explanation has moved: it is now documented as a post-training quantization technique under TensorFlow Lite, so the old link is invalid.

Pull Request resolved: #5483

Reviewed By: jackm321

Differential Revision: D27598118

Pulled By: jfix71

fbshipit-source-id: 274bf9e4b67098d3fea3adfed568e3bce5abc796
Lewuathe authored and facebook-github-bot committed Apr 6, 2021
1 parent 353e97c commit d524a7f
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions docs/Quantization.md
@@ -11,8 +11,8 @@ arithmetic to integer arithmetic. Arithmetic using small integers is more
 efficient than the computation of full-width floating-point numbers, and
 additionally decreases memory usage.
 
-This is an external [link](https://www.tensorflow.org/performance/quantization)
-that explains how quantization is done in TensorFlow.
+This is an external [link](https://www.tensorflow.org/lite/performance/post_training_quantization)
+that explains how post-training quantization is done in TensorFlow Lite.
 
 Glow is able to convert floating-point-based networks into signed 8-bit integer
 networks. The canonical quantization representation is using signed integers,
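The doc text in this hunk says Glow converts floating-point networks into signed 8-bit integer networks. As a rough illustration of what such a conversion involves, here is a minimal sketch of affine (scale/zero-point) quantization; the function names are hypothetical and this is not Glow's actual implementation, only the basic arithmetic behind mapping floats onto a signed int8 range:

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine quantization of a float array to signed integers.

    Hypothetical helper for illustration; Glow's real pipeline uses
    profiling and per-node quantization parameters.
    """
    qmin = -(2 ** (num_bits - 1))       # -128 for int8
    qmax = 2 ** (num_bits - 1) - 1      # 127 for int8
    lo, hi = float(x.min()), float(x.max())
    # The representable range must include 0.0 so that real zero
    # maps exactly onto an integer value.
    lo, hi = min(lo, 0.0), max(hi, 0.0)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = int(round(qmin - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map quantized integers back to approximate float values."""
    return scale * (q.astype(np.float32) - zero_point)
```

A round trip through `quantize`/`dequantize` loses at most about one quantization step per element, which is why narrow integer arithmetic can stand in for full-width floats with limited accuracy loss.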
