Update on "[wip] quantization: store input_qrange_le_128 flag on quantized conv"


Summary:

This is a start on fixing the problems surfaced in #46749.
This PR fixes only a small part of them:
1. If a conv module is unsafe to run in fbgemm, we now persist this
information via an `input_qrange_le_128` boolean flag stored on `ConvPackedParams{n}d`, set to False.
2. If an fbgemm kernel detects that the current conv packed params are
tagged as unsafe, it throws an error.
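
The two steps above can be sketched in pure Python (a simplified illustration only: the real flag lives on the C++ `ConvPackedParams{n}d` struct, and the class and function names below are hypothetical stand-ins):

```python
# Simplified sketch of the flag-and-guard scheme described above.
class PackedConvParams:
    def __init__(self, weight, input_qrange_le_128):
        self.weight = weight
        # False means the module was observed with a full 8-bit input
        # quantization range, which can saturate fbgemm's 16-bit
        # intermediate accumulation.
        self.input_qrange_le_128 = input_qrange_le_128

def fbgemm_qconv(x, packed_params):
    # Mirror of step 2: refuse to run when the params are tagged unsafe.
    if not packed_params.input_qrange_le_128:
        raise RuntimeError(
            "this quantized conv was observed without reduce_range and "
            "may saturate in fbgemm; re-quantize with reduce_range=True "
            "or switch to the qnnpack engine"
        )
    return "conv output"  # real kernel elided

safe = PackedConvParams(weight=None, input_qrange_le_128=True)
unsafe = PackedConvParams(weight=None, input_qrange_le_128=False)
print(fbgemm_qconv([1, 2, 3], safe))  # runs fine
try:
    fbgemm_qconv([1, 2, 3], unsafe)
except RuntimeError as e:
    print("raised:", e)
```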

For now, this PR is a WIP to gather early feedback on whether this is the
right direction, since the iteration cost here is high. In particular, the
missing pieces are:
* testing serialization: saving v3 and loading it back
* handling all the conv callsites (currently only the module path and conv2d are handled)

Note: there were some potential improvements discussed on dynamically
dispatching to qnnpack if it is available and the flag is set.  This PR
does not attempt to solve this issue - it can be solved by future PRs.
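
For background, the saturation risk behind all of this (see #46749) comes from fbgemm's use of 16-bit intermediate accumulation on x86: the vpmaddubsw instruction multiplies uint8 activations by int8 weights and sums each pair of products into a signed 16-bit lane. A quick arithmetic check in plain Python shows why reducing the input range to 7 bits is enough:

```python
INT16_MAX = 2**15 - 1  # signed 16-bit lane used by vpmaddubsw

# Worst case with the full 8-bit activation range: two uint8 * int8
# products summed into one 16-bit lane overflow and saturate.
full_range = 2 * 255 * 127        # 64770
print(full_range > INT16_MAX)     # True: would saturate

# With reduce_range (input qrange <= 128, i.e. effectively 7 bits),
# the pairwise sum stays representable.
reduced = 2 * 127 * 127           # 32258
print(reduced <= INT16_MAX)       # True: safe
```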

Test Plan:

```
# test that the error gets thrown when we are trying to run an operation which could
# saturate, and does not get thrown otherwise
python test/test_quantization.py TestQuantizedOps.test_conv_reduce_range

# test that loading older versions of conv packed params works as expected
# TODO(before land): extend these tests with the v3 files
python test/test_quantization.py TestSerialization
```
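
The version-tolerant loading that the second test targets could look roughly like this (a hypothetical pure-Python sketch; the real logic lives in the C++ serialization code for `ConvPackedParams{n}d`, and the default chosen for pre-v3 files is an assumption here):

```python
def unpack_conv_params(state):
    # Hypothetical sketch: serialized conv packed params older than v3
    # predate the input_qrange_le_128 flag, so loading must supply a
    # default for them.
    version = state.get("version", 1)
    if version >= 3:
        flag = state["input_qrange_le_128"]
    else:
        # Assumption: older files are treated as safe, matching the
        # historical default of observing activations with reduce_range.
        flag = True
    return flag

print(unpack_conv_params({"version": 3, "input_qrange_le_128": False}))  # False
print(unpack_conv_params({"version": 2}))                                # True
```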

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: [D29175285](https://our.internmc.facebook.com/intern/diff/D29175285)

[ghstack-poisoned]
vkuzo committed Jun 29, 2021
2 parents cb65524 + 8431c1e commit 6f6edf1
Showing 179 changed files with 3,183 additions and 1,803 deletions.
1 change: 1 addition & 0 deletions .circleci/docker/common/install_conda.sh

```diff
@@ -116,6 +116,7 @@ if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
     boto3==1.16.34 \
     coverage==5.5 \
     hypothesis==4.53.2 \
+    expecttest==0.1.3 \
     mypy==0.812 \
     tb-nightly
```
2 changes: 1 addition & 1 deletion .circleci/scripts/binary_populate_env.sh

```diff
@@ -62,7 +62,7 @@ if [[ -z "$DOCKER_IMAGE" ]]; then
   if [[ "$PACKAGE_TYPE" == conda ]]; then
     export DOCKER_IMAGE="pytorch/conda-cuda"
   elif [[ "$DESIRED_CUDA" == cpu ]]; then
-    export DOCKER_IMAGE="pytorch/manylinux-cuda100"
+    export DOCKER_IMAGE="pytorch/manylinux-cpu"
   else
     export DOCKER_IMAGE="pytorch/manylinux-cuda${DESIRED_CUDA:2}"
   fi
```
5 changes: 5 additions & 0 deletions .github/scale-config.yml

```diff
@@ -27,6 +27,11 @@ runner_types:
     os: linux
     max_available: 50
     disk_size: 150
+  linux.16xlarge.nvidia.gpu:
+    instance_type: g3.16xlarge
+    os: linux
+    max_available: 10
+    disk_size: 150
   windows.4xlarge:
     instance_type: c5d.4xlarge
     os: windows
```
8 changes: 8 additions & 0 deletions .github/scripts/generate_ci_workflows.py

```diff
@@ -137,15 +137,23 @@ def generate_workflow_file(
         #     docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-xenial-py3-clang7-onnx",
         #     test_runner_type=LINUX_CPU_TEST_RUNNER,
         # ),
+        PyTorchLinuxWorkflow(
+            build_environment="pytorch-linux-bionic-cuda10.2-cudnn7-py3.9-gcc7",
+            docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-cuda10.2-cudnn7-py3.9-gcc7",
+            test_runner_type=LINUX_CUDA_TEST_RUNNER,
+            num_test_shards=2,
+        ),
         PyTorchLinuxWorkflow(
             build_environment="pytorch-linux-xenial-cuda10.2-cudnn7-py3.6-gcc7",
             docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7",
             test_runner_type=LINUX_CUDA_TEST_RUNNER,
+            num_test_shards=2,
         ),
         PyTorchLinuxWorkflow(
             build_environment="pytorch-linux-xenial-cuda11.1-cudnn8-py3.6-gcc7",
             docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-xenial-cuda11.1-cudnn8-py3-gcc7",
             test_runner_type=LINUX_CUDA_TEST_RUNNER,
+            num_test_shards=2,
         ),
         # PyTorchLinuxWorkflow(
         #     build_environment="pytorch-libtorch-linux-xenial-cuda11.1-cudnn8-py3.6-gcc7",
```
2 changes: 1 addition & 1 deletion .github/templates/linux_ci_workflow.yml.j2

```diff
@@ -461,7 +461,7 @@ jobs:
         env:
           PR_NUMBER: ${{ github.event.pull_request.number }}
         run: |
-          echo "See rendered docs at https://d28slxzaq48q8t.cloudfront.net/$PR_NUMBER/"
+          echo "See rendered docs at https://docs-preview.pytorch.org/$PR_NUMBER/"
       - name: Archive artifacts into zip
         run: |
           zip -r pytorch_github_io.zip "${GITHUB_WORKSPACE}/pytorch.github.io"
```
12 changes: 7 additions & 5 deletions .github/workflows/lint.yml

```diff
@@ -103,6 +103,8 @@ jobs:
             echo 'Running setup.py with Python 2 did not give the expected error message.'
             false
           fi
+      - name: Keep torch.utils.collect_env python2 compliant
+        run: python2 -m py_compile torch/utils/collect_env.py

   shellcheck:
     runs-on: ubuntu-18.04
@@ -267,10 +269,10 @@ jobs:
   clang-tidy:
     if: github.event_name == 'pull_request'
-    runs-on: ubuntu-18.04
+    runs-on: linux.2xlarge
     container:
       # ubuntu18.04-cuda10.2-py3.6-tidy11
-      image: ghcr.io/pytorch/cilint-clang-tidy:7f0b4616100071a4813318bfdbd5b06ae36c5272
+      image: ghcr.io/pytorch/cilint-clang-tidy:b5a795a1165938adc6ccdab36bfa59bb3829ad47
     steps:
       - name: Checkout PyTorch
         uses: actions/checkout@v2
@@ -288,8 +290,6 @@ jobs:
         run: |
           cd "${GITHUB_WORKSPACE}"
           set -eux
-          git remote add upstream https://github.com/pytorch/pytorch
-          git fetch upstream "$GITHUB_BASE_REF"
           if [ ! -d build ]; then
             git submodule update --init --recursive
@@ -329,9 +329,11 @@ jobs:
           # deploy/interpreter files are excluded due to using macros and other techniquies
           # that are not easily converted to accepted c++
           python3 tools/linter/clang_tidy.py \
+            --parallel \
             --verbose \
             --paths torch/csrc/ \
             --diff-file pr.diff \
+            --include-dir /usr/lib/llvm-11/include/openmp \
            -g"-torch/csrc/jit/passes/onnx/helper.cpp" \
            -g"-torch/csrc/jit/passes/onnx/shape_type_inference.cpp" \
            -g"-torch/csrc/jit/serialization/onnx.cpp" \
@@ -401,7 +403,7 @@ jobs:
           set -eux
           pip install -r requirements.txt
           pip install numpy==1.20 # https://github.com/pytorch/pytorch/pull/60472
-          pip install mypy==0.812
+          pip install expecttest==0.1.3 mypy==0.812
           # Needed to check tools/render_junit.py
           pip install junitparser rich
       - name: Run autogen
```
