Update on "[docs][ao] Add missing documentation for torch.quantized_batch_norm"

Summary:
The op is exposed to the end user as torch.quantized_batch_norm but had no existing documentation.

Test Plan:
CI
Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D30316431](https://our.internmc.facebook.com/intern/diff/D30316431)

[ghstack-poisoned]
supriyar committed Aug 14, 2021
1 parent 860dfb0 commit 3f292c5
8 changes: 4 additions & 4 deletions torch/_torch_docs.py
@@ -11237,10 +11237,10 @@ def merge_dicts(*dicts):
 Arguments:
     input (Tensor): quantized tensor
-    weight (Tensor): tensor that corresponds to the gamma, size C
-    bias (Tensor): tensor that corresponds to the beta, size C
-    mean (Tensor): mean value in batch normalization, size C
-    var (Tensor): variance value, size C
+    weight (Tensor): float tensor that corresponds to the gamma, size C
+    bias (Tensor): float tensor that corresponds to the beta, size C
+    mean (Tensor): float mean value in batch normalization, size C
+    var (Tensor): float tensor for variance, size C
     eps (float): a value added to the denominator for numerical stability.
     output_scale (float): output quantized tensor scale
     output_zero_point (int): output quantized tensor zero_point
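For context, a minimal usage sketch of the documented op (not part of this diff; the input shape, quantization parameters, and channel count C = 2 are illustrative):

    >>> import torch
    >>> # per-tensor affine quantized NCHW input with C = 2 channels
    >>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)
    >>> weight, bias = torch.ones(2), torch.zeros(2)  # gamma and beta, float, size C
    >>> mean, var = torch.zeros(2), torch.ones(2)     # running statistics, float, size C
    >>> # eps = 1e-5, output_scale = 0.2, output_zero_point = 2
    >>> out = torch.quantized_batch_norm(qx, weight, bias, mean, var, 1e-5, 0.2, 2)
    >>> out.q_scale(), out.q_zero_point()
    (0.2, 2)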
