Update on "[ONNX] Update documentation (#58712)"
* Add introductory paragraph explaining what ONNX is and what the
  torch.onnx module does.
* In "Tracing vs Scripting" and doc-string for torch.onnx.export(),
  clarify that exporting always happens on ScriptModules and that
  tracing and scripting are the two ways to produce a ScriptModule.
* Remove examples of using Caffe2 to run exported models.
  Caffe2's website says it's deprecated, so it's probably best not to
  encourage people to use it by including it in examples.
* Remove a lot of content that's redundant:
  * The example of how to mix tracing and scripting, and instead
    link to Introduction to TorchScript, which includes very similar
    content.
  * "Type annotations" section. Link to TorchScript docs which explain
    that in more detail.
  * "Using dictionaries to handle Named Arguments as model inputs"
    section. It's redundant with the description of the `args` argument
    to `export()`, which appears on the same page once the HTML
    is generated.
  * Remove the list of supported Tensor indexing patterns. If it's not
    in the list of unsupported patterns, users can assume it's
    supported, so having both is redundant.
  * Remove the list of supported operators and models.
    I think the list of supported operators is not very useful.
    A list of supported model architectures may be useful, but in
    reality it's already very out of date. We should add it back if/when
    we have a system for keeping it up to date.
  * "Operator Export Type" section. It's redundant with the description
    of the `operator_export_type` arg to to `export()`, which appears on
    the same page once the HTML is generated.
  * "Use external data format" section. It's redundant with the
    description of the `use_external_data_format` arg to `export()`.
  * "Training" section.  It's redundant with the
    description of the `training` arg to `export()`.
* Move the content about different operator implementations producing
  different results from the "Limitations" section into the doc for the
  `operator_export_type` arg.
* Document "quantized" -> "caffe2" behavior of
  OperatorExportTypes.ONNX_ATEN_FALLBACK.
* Combine the text about using torch.Tensor.item() and the text about
  using NumPy types into a section titled
  "Avoid NumPy and built-in Python types", since they're both
  fundamentally about the same issue.
* Rename "Write PyTorch model in Torch way" to "Avoiding Pitfalls".
* Lots of minor fixes: spelling, grammar, brevity, fixing links, adding
  links.
* Clarify limitation on input and output types. Phrasing it in terms of
  PyTorch types is much more accessible than in terms of TorchScript
  types. Also clarify what actually happens when dict and str are used
  as inputs and outputs.
* In Supported operators, use torch function and class names and link
  to them. This is more user-friendly than using the internal aten
  op names.
* Remove references to VariableType.h, which doesn't appear to contain
  the information that it once did. Instead refer to the generated
  .pyi files.
* Remove the text in the FAQ about appending to lists within loops.
  I think this limitation is no longer present
  (perhaps since #51577).
* Minor fixes to some code I read along the way.
* Explain the current rationale for the weird ::prim_PythonOp op name.
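
A minimal sketch of the tracing-vs-scripting point above (illustrative
only, not code touched by this commit): torch.onnx.export() exports a
ScriptModule, which can be produced either by torch.jit.trace() or
torch.jit.script(); a plain nn.Module is traced internally. The model,
file names, and opset version below are made up, and exact keyword
arguments (e.g. example_outputs for scripted modules) vary across
PyTorch releases.

    import torch

    class TinyModel(torch.nn.Module):
        def forward(self, x):
            # Avoid x.item() or NumPy conversions here: under tracing they
            # are evaluated once and baked into the graph as constants.
            return torch.relu(x) + 1.0

    model = TinyModel()
    dummy = torch.randn(2, 3)

    # Tracing and scripting are the two ways to produce a ScriptModule,
    # which is what actually gets exported.
    traced = torch.jit.trace(model, dummy)
    scripted = torch.jit.script(model)

    # export() also accepts a plain nn.Module and traces it internally.
    torch.onnx.export(traced, (dummy,), "tiny_traced.onnx", opset_version=11)
    torch.onnx.export(scripted, (dummy,), "tiny_scripted.onnx", opset_version=11)

    # Hypothetical: ops without a native ONNX mapping can fall back to ATen by
    # passing operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK.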

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>

[ghstack-poisoned]
BowenBao committed Jun 25, 2021
2 parents 5001336 + 9183dcc commit e388c52
Showing 422 changed files with 10,261 additions and 6,549 deletions.
4 changes: 4 additions & 0 deletions .circleci/cimodel/data/pytorch_build_definitions.py
@@ -31,6 +31,7 @@ class Conf:
is_libtorch: bool = False
is_important: bool = False
parallel_backend: Optional[str] = None
build_only: bool = False

@staticmethod
def is_test_phase(phase):
@@ -112,6 +113,8 @@ def gen_workflow_params(self, phase):
parameters["resource_class"] = "xlarge"
if hasattr(self, 'filters'):
parameters['filters'] = self.filters
if self.build_only:
parameters['build_only'] = miniutils.quote(str(int(True)))
return parameters

def gen_workflow_job(self, phase):
@@ -369,6 +372,7 @@ def instantiate_configs(only_slow_gradcheck):
is_libtorch=is_libtorch,
is_important=is_important,
parallel_backend=parallel_backend,
build_only=build_only,
)

# run docs builds on "pytorch-linux-xenial-py3.6-gcc5.4". Docs builds
17 changes: 11 additions & 6 deletions .circleci/config.yml
@@ -496,7 +496,7 @@ jobs:
docker cp /home/circleci/project/. $id:/var/lib/jenkins/workspace
export COMMAND='((echo "sudo chown -R jenkins workspace && export CIRCLE_JOB="$CIRCLE_JOB" && cd workspace && .jenkins/pytorch/build.sh && find ${BUILD_ROOT} -type f -name "*.a" -or -name "*.o" -delete") | docker exec -u jenkins -i "$id" bash) 2>&1'
export COMMAND='((echo "sudo chown -R jenkins workspace && export JOB_BASE_NAME="$CIRCLE_JOB" && cd workspace && .jenkins/pytorch/build.sh && find ${BUILD_ROOT} -type f -name "*.a" -or -name "*.o" -delete") | docker exec -u jenkins -i "$id" bash) 2>&1'
echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts
@@ -635,7 +635,7 @@ jobs:
# =================== The following code will be executed inside Docker container ===================
set -ex
export SCRIBE_GRAPHQL_ACCESS_TOKEN="${SCRIBE_GRAPHQL_ACCESS_TOKEN}"
export CIRCLE_JOB="$CIRCLE_JOB"
export JOB_BASE_NAME="$CIRCLE_JOB"
${PARALLEL_FLAGS}
cd workspace
EOL
@@ -682,11 +682,11 @@ jobs:
export CIRCLE_SHA1="$CIRCLE_SHA1"
export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-}"
export CIRCLE_BRANCH="$CIRCLE_BRANCH"
export CIRCLE_JOB="$CIRCLE_JOB"
export JOB_BASE_NAME="$CIRCLE_JOB"
export CIRCLE_WORKFLOW_ID="$CIRCLE_WORKFLOW_ID"
cd workspace
export PYTHONPATH="\${PWD}"
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
EOL
echo "(cat docker_commands.sh | docker exec -u jenkins -e LANG=C.UTF-8 -i "$id" bash) 2>&1" > command.sh
unbuffer bash command.sh | ts
@@ -840,7 +840,7 @@ jobs:
export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_WIN_BUILD_V1}
export PYTHONPATH="$PWD"
pip install typing_extensions boto3
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
when: always
- store_test_results:
path: test/test-reports
@@ -1455,7 +1455,7 @@ jobs:
# Using the same IAM user to write stats to our OSS bucket
export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
when: always
- store_test_results:
path: test/test-reports
@@ -7063,6 +7063,7 @@ workflows:
- /release\/.*/
build_environment: "pytorch-libtorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7"
build_only: "1"
- pytorch_linux_build:
name: pytorch_linux_xenial_cuda11_1_cudnn8_py3_gcc7_build
requires:
@@ -7097,6 +7098,7 @@ workflows:
- /release\/.*/
build_environment: "pytorch-libtorch-linux-xenial-cuda11.1-cudnn8-py3-gcc7-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.1-cudnn8-py3-gcc7"
build_only: "1"
- pytorch_linux_build:
name: pytorch_linux_bionic_py3_6_clang9_noarch_build
requires:
@@ -7185,6 +7187,7 @@ workflows:
build_environment: "pytorch-linux-bionic-rocm3.9-py3.6-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-rocm3.9-py3.6"
resource_class: xlarge
build_only: "1"
- pytorch_macos_10_15_py3_build:
name: pytorch_macos_10_15_py3_build
- pytorch_macos_10_13_py3_build:
@@ -9078,12 +9081,14 @@ workflows:
- "docker-pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7"
build_environment: "pytorch-libtorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7"
build_only: "1"
- pytorch_linux_build:
name: pytorch_libtorch_linux_xenial_cuda11_1_cudnn8_py3_gcc7_build
requires:
- "docker-pytorch-linux-xenial-cuda11.1-cudnn8-py3-gcc7"
build_environment: "pytorch-libtorch-linux-xenial-cuda11.1-cudnn8-py3-gcc7-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.1-cudnn8-py3-gcc7"
build_only: "1"
- pytorch_linux_build:
build_environment: "pytorch-linux-xenial-py3-clang5-android-ndk-r19c-x86_32-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c"
2 changes: 1 addition & 1 deletion .circleci/verbatim-sources/job-specs/job-specs-custom.yml
@@ -213,7 +213,7 @@
# Using the same IAM user to write stats to our OSS bucket
export AWS_ACCESS_KEY_ID=${CIRCLECI_AWS_ACCESS_KEY_FOR_SCCACHE_S3_BUCKET_V4}
export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_SCCACHE_S3_BUCKET_V4}
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
when: always
- store_test_results:
path: test/test-reports
10 changes: 5 additions & 5 deletions .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
@@ -34,7 +34,7 @@ jobs:
docker cp /home/circleci/project/. $id:/var/lib/jenkins/workspace
export COMMAND='((echo "sudo chown -R jenkins workspace && export CIRCLE_JOB="$CIRCLE_JOB" && cd workspace && .jenkins/pytorch/build.sh && find ${BUILD_ROOT} -type f -name "*.a" -or -name "*.o" -delete") | docker exec -u jenkins -i "$id" bash) 2>&1'
export COMMAND='((echo "sudo chown -R jenkins workspace && export JOB_BASE_NAME="$CIRCLE_JOB" && cd workspace && .jenkins/pytorch/build.sh && find ${BUILD_ROOT} -type f -name "*.a" -or -name "*.o" -delete") | docker exec -u jenkins -i "$id" bash) 2>&1'
echo ${COMMAND} > ./command.sh && unbuffer bash ./command.sh | ts
@@ -173,7 +173,7 @@ jobs:
# =================== The following code will be executed inside Docker container ===================
set -ex
export SCRIBE_GRAPHQL_ACCESS_TOKEN="${SCRIBE_GRAPHQL_ACCESS_TOKEN}"
export CIRCLE_JOB="$CIRCLE_JOB"
export JOB_BASE_NAME="$CIRCLE_JOB"
${PARALLEL_FLAGS}
cd workspace
EOL
@@ -220,11 +220,11 @@ jobs:
export CIRCLE_SHA1="$CIRCLE_SHA1"
export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-}"
export CIRCLE_BRANCH="$CIRCLE_BRANCH"
export CIRCLE_JOB="$CIRCLE_JOB"
export JOB_BASE_NAME="$CIRCLE_JOB"
export CIRCLE_WORKFLOW_ID="$CIRCLE_WORKFLOW_ID"
cd workspace
export PYTHONPATH="\${PWD}"
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
EOL
echo "(cat docker_commands.sh | docker exec -u jenkins -e LANG=C.UTF-8 -i "$id" bash) 2>&1" > command.sh
unbuffer bash command.sh | ts
@@ -378,7 +378,7 @@ jobs:
export AWS_SECRET_ACCESS_KEY=${CIRCLECI_AWS_SECRET_KEY_FOR_WIN_BUILD_V1}
export PYTHONPATH="$PWD"
pip install typing_extensions boto3
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
when: always
- store_test_results:
path: test/test-reports
1 change: 0 additions & 1 deletion .github/scripts/generate_ci_workflows.py
@@ -88,7 +88,6 @@ def generate_workflow_file(
cuda_version="10.1",
test_runner_type=WINDOWS_CUDA_TEST_RUNNER,
on_pull_request=True,
only_build_on_pull_request=True,
num_test_shards=2,
),
PyTorchWindowsWorkflow(
24 changes: 17 additions & 7 deletions .github/templates/linux_ci_workflow.yml.j2
@@ -145,7 +145,7 @@ jobs:
sh -c 'sudo chown -R jenkins . && .jenkins/pytorch/build.sh'
- name: Display and upload binary build size statistics (Click Me)
# temporary hack: set CIRCLE_* vars, until we update
# tools/print_test_stats.py to natively support GitHub Actions
# tools/stats/print_test_stats.py to natively support GitHub Actions
env:
SCRIBE_GRAPHQL_ACCESS_TOKEN: ${{ secrets.SCRIBE_GRAPHQL_ACCESS_TOKEN }}
CIRCLE_BRANCH: ${{ steps.parse-ref.outputs.branch }}
@@ -306,6 +306,12 @@ jobs:
run: |
# Ensure the working directory gets chowned back to the current user
docker run --rm -v "$(pwd)":/v -w /v "${ALPINE_IMAGE}" chown -R "$(id -u):$(id -g)" .
- name: Zip test reports for upload
if: always()
run: |
# Remove any previous test reports if they exist
rm -f test-reports-*.zip
zip -r "test-reports-${TEST_CONFIG}.zip" test -i '*.xml'
- uses: actions/upload-artifact@v2
name: Store PyTorch Test Reports
if: always()
@@ -314,7 +320,7 @@
retention-days: 14
if-no-files-found: error
path:
test/**/*.xml
test-reports-*.zip
- name: Clean up docker images
if: always()
run: |
@@ -337,13 +343,17 @@
- name: Checkout PyTorch
uses: actions/checkout@v2
with:
# deep clone, to allow tools/print_test_stats.py to use Git commands
# deep clone, to allow tools/stats/print_test_stats.py to use Git commands
fetch-depth: 0
- uses: actions/download-artifact@v2
name: Download PyTorch Test Reports
with:
name: test-reports
path: test/test-reports
path: .
- name: Unzip test reports
run: |
# Should preserve paths so reports should still be in test/test-reports
unzip -o 'test-reports-*.zip'
- uses: actions/setup-python@v2
with:
python-version: 3.9
@@ -360,20 +370,20 @@
run: .github/scripts/parse_ref.py
- name: Display and upload test statistics (Click Me)
# temporary hack: set CIRCLE_* vars, until we update
# tools/print_test_stats.py to natively support GitHub Actions
# tools/stats/print_test_stats.py to natively support GitHub Actions
env:
SCRIBE_GRAPHQL_ACCESS_TOKEN: ${{ secrets.SCRIBE_GRAPHQL_ACCESS_TOKEN }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_OSSCI_METRICS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_OSSCI_METRICS_SECRET_ACCESS_KEY }}
CIRCLE_BRANCH: ${{ steps.parse-ref.outputs.branch }}
CIRCLE_JOB: !{{ build_environment }}
JOB_BASE_NAME: !{{ build_environment }}-test
CIRCLE_PR_NUMBER: ${{ github.event.pull_request.number }}
CIRCLE_SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
CIRCLE_TAG: ${{ steps.parse-ref.outputs.tag }}
CIRCLE_WORKFLOW_ID: '${{ github.run_id }}_${{ github.run_number }}'
run: |
export PYTHONPATH=$PWD
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
{%- if enable_doc_jobs %}

pytorch_python_doc_build:
24 changes: 18 additions & 6 deletions .github/templates/windows_ci_workflow.yml.j2
@@ -170,7 +170,16 @@ jobs:
else
export SHARD_NUMBER=0
fi
if [[ -n $GITHUB_HEAD_REF && "$USE_CUDA" == 1 ]]; then
export RUN_SMOKE_TESTS_ONLY=1
fi
.jenkins/pytorch/win-test.sh
- name: Zip test reports for upload
if: always()
shell: powershell
run: |
# -ir => recursive include all files in pattern
7z a "test-reports-$Env:TEST_CONFIG.zip" -ir'!test\*.xml'
- uses: actions/upload-artifact@v2
name: Store PyTorch Test Reports
if: always()
@@ -179,7 +188,7 @@
retention-days: 14
if-no-files-found: error
path:
test/**/*.xml
test-reports-*.zip

# this is a separate step from test because the log files from test are too
# long: basically, GitHub tries to render all of the log files when you click
@@ -200,13 +209,16 @@ jobs:
- name: Checkout PyTorch
uses: actions/checkout@v2
with:
# deep clone, to allow tools/print_test_stats.py to use Git commands
# deep clone, to allow tools/stats/print_test_stats.py to use Git commands
fetch-depth: 0
- uses: actions/download-artifact@v2
name: Download PyTorch Test Reports
with:
name: test-reports
path: test/test-reports
path: .
- name: Unzip test reports
run: |
unzip -o 'test-reports-*.zip'
- uses: actions/setup-python@v2
with:
python-version: 3.9
@@ -223,17 +235,17 @@
run: .github/scripts/parse_ref.py
- name: Display and upload test statistics (Click Me)
# temporary hack: set CIRCLE_* vars, until we update
# tools/print_test_stats.py to natively support GitHub Actions
# tools/stats/print_test_stats.py to natively support GitHub Actions
env:
SCRIBE_GRAPHQL_ACCESS_TOKEN: ${{ secrets.SCRIBE_GRAPHQL_ACCESS_TOKEN }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_OSSCI_METRICS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_OSSCI_METRICS_SECRET_ACCESS_KEY }}
CIRCLE_BRANCH: ${{ steps.parse-ref.outputs.branch }}
CIRCLE_JOB: !{{ build_environment }}
JOB_BASE_NAME: !{{ build_environment }}-test
CIRCLE_PR_NUMBER: ${{ github.event.pull_request.number }}
CIRCLE_SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
CIRCLE_TAG: ${{ steps.parse-ref.outputs.tag }}
CIRCLE_WORKFLOW_ID: '${{ github.run_id }}_${{ github.run_number }}'
run: |
export PYTHONPATH=$PWD
python tools/print_test_stats.py --upload-to-s3 --compare-with-s3 test
python tools/stats/print_test_stats.py --upload-to-s3 --compare-with-s3 test
2 changes: 1 addition & 1 deletion .github/workflows/build_linux_conda.yml
@@ -93,7 +93,7 @@ jobs:
path: /remote/**/*.bz2
- name: Display and upload binary build size statistics (Click Me)
# temporary hack: set CIRCLE_* vars, until we update
# tools/print_test_stats.py to natively support GitHub Actions
# tools/stats/print_test_stats.py to natively support GitHub Actions
env:
SCRIBE_GRAPHQL_ACCESS_TOKEN: ${{ secrets.SCRIBE_GRAPHQL_ACCESS_TOKEN }}
CIRCLE_BRANCH: ${{ steps.parse-ref.outputs.branch }}
2 changes: 1 addition & 1 deletion .github/workflows/build_linux_libtorch.yml
@@ -92,7 +92,7 @@ jobs:
path: /remote/**/*.zip
- name: Display and upload binary build size statistics (Click Me)
# temporary hack: set CIRCLE_* vars, until we update
# tools/print_test_stats.py to natively support GitHub Actions
# tools/stats/print_test_stats.py to natively support GitHub Actions
env:
SCRIBE_GRAPHQL_ACCESS_TOKEN: ${{ secrets.SCRIBE_GRAPHQL_ACCESS_TOKEN }}
CIRCLE_BRANCH: ${{ steps.parse-ref.outputs.branch }}
2 changes: 1 addition & 1 deletion .github/workflows/build_linux_wheels.yml
@@ -91,7 +91,7 @@ jobs:
path: /remote/**/*.whl
- name: Display and upload binary build size statistics (Click Me)
# temporary hack: set CIRCLE_* vars, until we update
# tools/print_test_stats.py to natively support GitHub Actions
# tools/stats/print_test_stats.py to natively support GitHub Actions
env:
SCRIBE_GRAPHQL_ACCESS_TOKEN: ${{ secrets.SCRIBE_GRAPHQL_ACCESS_TOKEN }}
CIRCLE_BRANCH: ${{ steps.parse-ref.outputs.branch }}
4 changes: 2 additions & 2 deletions .github/workflows/clang_format.yml
@@ -29,12 +29,12 @@ jobs:
# only run clang-format on allowlisted files
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "| clang-format failures found! Run: "
echo "| tools/clang_format_ci.sh ${BASE_SHA} "
echo "| tools/linter/clang_format_ci.sh ${BASE_SHA} "
echo "| to fix this error. "
echo "| For more info, see: https://github.com/pytorch/pytorch/wiki/clang-format "
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
tools/clang_format_ci.sh "${BASE_SHA}"
tools/linter/clang_format_ci.sh "${BASE_SHA}"
GIT_DIFF=$(git diff)
if [[ -z $GIT_DIFF ]]; then
