add cuda fq bench on "fake_quant: add a more memory efficient backward"
Summary:

tl;dr: add an alternative implementation of `fake_quantize` which saves a mask of whether the input was clamped during the forward pass and uses that mask to calculate the backward.

The math:

```
# before - forward (pseudocode)
def fq_forward(x, scale, zp, qmin, qmax):
    q_val = clamp(nearby_int(x / scale) + zp, qmin, qmax)
    fq_val = (q_val - zp) * scale
    return fq_val

# before - backward (pseudocode)
def fq_backward(dy, x, scale, zp, qmin, qmax):
    q_val_unclamped = nearby_int(x / scale) + zp
    mask = qmin <= q_val_unclamped and q_val_unclamped <= qmax
    return dy * mask

# after - forward (pseudocode)
def fq_forward(x, scale, zp, qmin, qmax):
    q_val_unclamped = nearby_int(x / scale) + zp
    mask = qmin <= q_val_unclamped and q_val_unclamped <= qmax
    q_val = clamp(q_val_unclamped, qmin, qmax)
    fq_val = (q_val - zp) * scale
    return fq_val, mask

# after - backward (pseudocode)
def fq_backward(dy, mask):
    return dy * mask
```

This way the backward function no longer needs the input tensor, so autograd can free it earlier. Instead of saving `x: FloatTensor` for the backward, we save a `mask: BoolTensor` with the same number of elements. A `BoolTensor` uses 1 byte per element versus 4 bytes per float32 element, so the upper bound on the memory overhead reduction is 75%. We observe a 73% memory overhead reduction on torchvision's MobileNetV2 in real world tests. Packing the bools into a custom storage format that takes 1 bit per element is an optimization left for the future.

The performance impact appears negligible: I observed a 1% to 5% regression on MobileNetV2, but it is unclear whether it is real.

This is added as a new function (as opposed to replacing the old implementation) to make testing easier, but it might be worth deleting the old fake_quant backward in a future PR. We could also adjust the signature of this function to take `model.training` as an additional parameter and skip the mask computation for eval.

Test Plan:

QAT on MobileNetV2 on FB infra, with `opt` build flags, batch_size = 32. Results below are for the fbgemm settings; qnnpack results are similar.

```
# qat_fp32: model with fake_quants turned off (baseline)
# qat_1: step 2 of qat, with observers disabled and fake_quants enabled
#        (all of the overhead comes from the fake_quants)

# before: fbgemm - qat_fp32 -> qat_1
max memory usage (mib): 3299 -> 4170 (overhead: 26.4%)
latency (ms): 147 -> 181

# after: fbgemm - qat_fp32 -> qat_1
max memory usage (mib): 3302 -> 3528 (overhead: 7.1%)
latency (ms): 147 -> 183
```

Note: similar metrics are observed in an OSS / torchvision / MobileNetV2 setup, with this command:

```
python references/classification/train_quantization.py --print-freq 1 --data-path /data/local/packages/ai-group.imagenet-256-smallest-side/prod/ --output-dir ~/nfs/pytorch_vision_tests/ --backend qnnpack --epochs 5
```

All CI tests here: #50849

PyTorch microbenchmarks (CUDA performance is about the same): https://gist.github.com/vkuzo/11a7bed73fe60e340862d37e7975e9cd

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D25918519](https://our.internmc.facebook.com/intern/diff/D25918519)

[ghstack-poisoned]
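For reference, a minimal sketch of the mask-saving idea expressed as a Python `torch.autograd.Function`. This is not the actual ATen/CUDA kernel added by this commit; the class name, per-tensor-only scope, and example values are illustrative assumptions, but the save-a-bool-mask structure matches the pseudocode above.

```python
import torch

class _FakeQuantizeWithCachedMask(torch.autograd.Function):
    """Illustrative sketch (not the actual ATen kernel): per-tensor fake quantize
    that saves only a bool mask for the backward instead of the input tensor."""

    @staticmethod
    def forward(ctx, x, scale, zero_point, quant_min, quant_max):
        q_unclamped = torch.round(x / scale) + zero_point
        # elements whose gradient should pass through (i.e. not clamped)
        mask = (q_unclamped >= quant_min) & (q_unclamped <= quant_max)
        q = torch.clamp(q_unclamped, quant_min, quant_max)
        fq = (q - zero_point) * scale
        # only the 1-byte-per-element bool mask is kept alive for the backward;
        # the float input x can be freed by autograd as soon as it is otherwise unused
        ctx.save_for_backward(mask)
        return fq

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        # straight-through estimator: gradient flows only where the value was not clamped;
        # no gradients for scale, zero_point, quant_min, quant_max in this sketch
        return grad_output * mask, None, None, None, None

# usage sketch
x = torch.randn(4, 8, requires_grad=True)
y = _FakeQuantizeWithCachedMask.apply(x, 0.1, 0, 0, 255)
y.sum().backward()
```

The design point this illustrates: by saving the bool mask rather than the input, autograd holds roughly 1 byte per element instead of 4 between the forward and backward passes, which is where the up-to-75% overhead reduction comes from.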
Showing 340 changed files with 8,057 additions and 3,247 deletions.