
[not for land, ci only] fake_quant: add a more memory efficient version #50849

Closed
vkuzo wants to merge 1 commit from the ci-all/vkuzo/fake_quant_test_20210120 branch

Conversation

@vkuzo (Contributor) commented Jan 20, 2021


Summary:

Not for review yet, a bunch of TODOs need finalizing.

tl;dr: add an alternative implementation of `fake_quantize` which saves
a mask during the forward pass and uses it to calculate the backward.

There are two benefits:

1. The backward function no longer needs the input Tensor, so autograd can
free it earlier. On MobileNetV2, this reduces QAT overhead
by ~15% (TODO: link, and absolute numbers). We add an additional mask Tensor
to pass around, but it is 4x smaller than the input tensor. A
future optimization would be to pack the mask bitwise and unpack it in the
backward. A sketch of the idea is shown after this list.

2. The computation of `qval` can be done only once in the forward and
reused in the backward. No perf change observed; TODO: verify with better
metrics.
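
A minimal Python sketch of the idea as a custom `torch.autograd.Function`. The class and variable names are hypothetical and only illustrate the mechanism; the actual change presumably lives in the ATen fake_quant kernels rather than in Python:

```python
import torch

class _FakeQuantizeWithMask(torch.autograd.Function):
    """Illustrative sketch: save a bool mask instead of the input tensor."""

    @staticmethod
    def forward(ctx, x, scale, zero_point, quant_min, quant_max):
        q_unclamped = torch.round(x / scale) + zero_point
        # Elements inside [quant_min, quant_max] pass gradients through;
        # this mask is all the backward needs, so x itself is not saved.
        mask = (q_unclamped >= quant_min) & (q_unclamped <= quant_max)
        ctx.save_for_backward(mask)
        q = torch.clamp(q_unclamped, quant_min, quant_max)
        return (q - zero_point) * scale

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        # Straight-through estimator: zero the gradient where the input
        # was clamped; no gradients for scale/zero_point/qmin/qmax here.
        return grad_output * mask, None, None, None, None

# usage sketch
x = torch.randn(8, requires_grad=True)
y = _FakeQuantizeWithMask.apply(x, 0.1, 128, 0, 255)
y.sum().backward()
```

The key point is that `ctx.save_for_backward` holds only the bool mask, so the float input can be freed as soon as the forward finishes.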

TODO: describe in more detail

Test Plan:

OSS / torchvision / MobileNetV2
```
python references/classification/train_quantization.py
  --print-freq 1
  --data-path /data/local/packages/ai-group.imagenet-256-smallest-side/prod/
  --output-dir ~/nfs/pytorch_vision_tests/
  --backend qnnpack
  --epochs 5
TODO paste results here
```

TODO more

Reviewers:

Subscribers:

Tasks:

Tags:

ghstack-source-id: f932055ee57b6a4e419d3896fb605c58fc063668
Pull Request resolved: #50561
vkuzo added a commit that referenced this pull request Jan 21, 2021
vkuzo added a commit that referenced this pull request Jan 26, 2021
vkuzo added a commit that referenced this pull request Jan 26, 2021
vkuzo added a commit that referenced this pull request Jan 27, 2021
Summary:

tl;dr: add an alternative implementation of `fake_quantize` which saves
a mask of which input elements were clamped during the forward pass, and uses that mask to calculate the backward.  The math:

```
# before - forward (pseudocode)
def fq_forward(x, scale, zp, qmin, qmax):
    q_val = clamp(nearby_int(x / scale) + zp, qmin, qmax)
    fq_val = (q_val - zp) * scale
    return fq_val

# before - backward (pseudocode)
def fq_backward(dy, x, scale, zp, qmin, qmax):
    q_val_unclamped = nearby_int(x / scale) + zp
    mask = qmin <= q_val_unclamped and q_val_unclamped <= qmax
    return dy * mask

# after - forward (pseudocode)
def fq_forward(x, scale, zp, qmin, qmax):
    q_val_unclamped = nearby_int(x / scale) + zp
    mask = qmin <= q_val_unclamped and q_val_unclamped <= qmax
    q_val = clamp(q_val_unclamped, qmin, qmax)
    fq_val = (q_val - zp) * scale
    return fq_val, mask

# after - backward (pseudocode)
def fq_backward(dy, mask):
    return dy * mask
```
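
As a sanity check on the math above, the mask-based gradient can be compared against the gradient of the existing `torch.fake_quantize_per_tensor_affine` op; this snippet is only an illustration, not part of the PR's test plan:

```python
import torch

scale, zp, qmin, qmax = 0.01, 128, 0, 255

# Reference gradient from the existing fake quantize op.
x_ref = torch.randn(1000, requires_grad=True)
torch.fake_quantize_per_tensor_affine(x_ref, scale, zp, qmin, qmax).sum().backward()

# "after" pseudocode: dy * mask, with dy = 1 coming from the sum() above.
x = x_ref.detach()
q_unclamped = torch.round(x / scale) + zp
mask = (q_unclamped >= qmin) & (q_unclamped <= qmax)
grad_mask_based = torch.ones_like(x) * mask

print(torch.allclose(x_ref.grad, grad_mask_based))  # expected: True
```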

This way the backward function no longer needs the input Tensor, so it can be
freed earlier by autograd.  Instead of saving `x: FloatTensor`, we save a `mask: BoolTensor`
with the same number of elements.  `BoolTensor` uses 1 byte per element versus 4 bytes for `float32`,
so we expect an upper bound of a 75% memory overhead reduction.  We observe a 73% memory
overhead reduction on torchvision's MobileNetV2 in real-world tests.  Packing the bools
into a custom storage format that uses 1 bit per element is an optimization left for the future.
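
As a small illustration of that arithmetic (the shape below is arbitrary, not a specific MobileNetV2 activation):

```python
import torch

x = torch.randn(32, 32, 112, 112)              # a float32 activation
mask = torch.empty_like(x, dtype=torch.bool)   # what gets saved instead

ratio = mask.element_size() / x.element_size() # 1 byte vs 4 bytes per element
print(f"saved tensor is {1 - ratio:.0%} smaller")  # 75% upper bound
```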

The performance impact seems negligible: I observed a 1% to 5% regression on MobileNetV2, but it is unclear whether it is real.

This is added as a new function (as opposed to replacing the old implementation) to make testing easy, but
it might be worth deleting the old fake_quant backward in a future PR.  We could also adjust the signature
of this function to take `model.training` as an additional parameter and skip the mask computation for eval; a sketch of that follows.
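
A sketch of what such a signature could look like; the function name and `training` flag handling below are hypothetical, not the API added by this PR:

```python
import torch

def fake_quantize_with_optional_mask(x, scale, zp, qmin, qmax, training: bool):
    q_unclamped = torch.round(x / scale) + zp
    fq = (torch.clamp(q_unclamped, qmin, qmax) - zp) * scale
    if not training:
        # Eval: no backward pass will run, so skip building the mask.
        return fq, None
    mask = (q_unclamped >= qmin) & (q_unclamped <= qmax)
    return fq, mask
```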

Test Plan:

QAT on MobileNetV2 on FB infra, with `opt` build flags and batch_size = 32.  Results below are for the fbgemm settings; qnnpack results are similar.
```
# qat_fp32: model with fake_quants turned off (baseline)
# qat_1: step 2 of qat, with observers disabled and fake_quants enabled (all of the overhead is the fake_quants)

# before: fbgemm - qat_fp32 -> qat_1
max memory usage (MiB): 3299 -> 4170 (overhead: 26.4%)
latency (ms): 147 -> 181

# after: fbgemm - qat_fp32 -> qat_1
max memory usage (MiB): 3302 -> 3528 (overhead: 7.1%)
latency (ms): 147 -> 183
```
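
The exact measurement harness for these numbers is not included here; a minimal sketch of one way to record peak host memory for a run (Linux reports `ru_maxrss` in KiB) would be:

```python
import resource

def max_memory_mib() -> float:
    # Peak resident set size of the current process so far.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

# ... run the training loop for one configuration (qat_fp32 or qat_1) ...
print(f"max memory usage (MiB): {max_memory_mib():.0f}")
```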

Note: similar metrics are observed in an OSS / torchvision / MobileNetV2 setup, with this command:
```
python references/classification/train_quantization.py
  --print-freq 1
  --data-path /data/local/packages/ai-group.imagenet-256-smallest-side/prod/
  --output-dir ~/nfs/pytorch_vision_tests/
  --backend qnnpack
  --epochs 5
```

All CI tests here: #50849

PyTorch microbenchmarks (CUDA performance is about the same):
```
cd benchmarks/operator_benchmark
python -m pt.quantization_test
```
Results: https://gist.github.com/vkuzo/11a7bed73fe60e340862d37e7975e9cd

Unit tests:

```
python test/test_quantization.py TestFakeQuantize
```

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D25918519](https://our.internmc.facebook.com/intern/diff/D25918519)

[ghstack-poisoned]
@vkuzo vkuzo closed this Feb 8, 2021
@github-actions github-actions bot deleted the ci-all/vkuzo/fake_quant_test_20210120 branch February 10, 2024 01:53