Update on "[ONNX] Update documentation (#58712)"
* Add introductory paragraph explaining what ONNX is and what the
  torch.onnx module does.
* In "Tracing vs Scripting" and doc-string for torch.onnx.export(),
  clarify that exporting always happens on ScriptModules and that
  tracing and scripting are the two ways to produce a ScriptModule.
* Remove examples of using Caffe2 to run exported models.
  Caffe2's website says it's deprecated, so it's probably best not to
  encourage people to use it by including it in examples.
* Remove a lot of content that's redundant:
  * The example of how to mix tracing and scripting, and instead
    link to Introduction to TorchScript, which includes very similar
    content.
  * "Type annotations" section. Link to TorchScript docs which explain
    that in more detail.
  * "Using dictionaries to handle Named Arguments as model inputs"
    section. It's redundant with the description of the `args` argument
    to `export()`, which appears on the same page once the HTML
    is generated.
  * Remove the list of supported Tensor indexing patterns. If it's not
    in the list of unsupported patterns, users can assume it's
    supported, so having both is redundant.
  * Remove the list of supported operators and models.
    I think the list of supported operators is not very useful.
    A list of supported model architectures may be useful, but in
    reality it's already very out of date. We should add it back if/when
    we have a system for keeping it up to date.
  * "Operator Export Type" section. It's redundant with the description
    of the `operator_export_type` arg to to `export()`, which appears on
    the same page once the HTML is generated.
  * "Use external data format" section. It's redundant with the
    description of the `use_external_data_format` arg to `export()`.
  * "Training" section.  It's redundant with the
    description of the `training` arg to `export()`.
* Move the content about different operator implementations producing
  different results from the "Limitations" section into the doc for the
  `operator_export_type` arg.
* Document "quantized" -> "caffe2" behavior of
  OperatorExportTypes.ONNX_ATEN_FALLBACK.
* Combine the text about using torch.Tensor.item() and the text about
  using NumPy types into a section titled
  "Avoid NumPy and built-in Python types", since they're both
  fundamentally about the same issue (see the second sketch after this
  list).
* Rename "Write PyTorch model in Torch way" to "Avoiding Pitfalls".
* Lots of minor fixes: spelling, grammar, brevity, fixing links, adding
  links.
* Clarify limitation on input and output types. Phrasing it in terms of
  PyTorch types is much more accessible than in terms of TorchScript
  types. Also clarify what actually happens when dict and str are used
  as inputs and outputs.
* In "Supported operators", use torch function and class names and link
  to them. This is more user-friendly than using the internal aten
  op names.
* Remove references to VariableType.h, which doesn't appear to contain
  the information that it once did. Instead refer to the generated
  .pyi files.
* Remove the text in the FAQ about appending to lists within loops.
  I think this limitation is no longer present
  (perhaps since #51577).
* Minor fixes to some code I read along the way.
* Explain the current rationale for the weird ::prim_PythonOp op name.
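
Two sketches follow. First, tracing vs. scripting on export. This is a
minimal illustration, not taken from the updated docs: the `MyModel`
module, shapes, and file names are made up, and some older PyTorch
releases also require `example_outputs` when exporting a scripted
module.

    import torch

    class MyModel(torch.nn.Module):
        def forward(self, x):
            # Data-dependent control flow: tracing freezes one branch,
            # scripting preserves both.
            if x.sum() > 0:
                return x.relu()
            return -x

    model = MyModel()
    dummy_input = torch.randn(1, 3)

    # Tracing: export() runs the module on the example input, records
    # the ops, and builds a ScriptModule internally before converting
    # to ONNX.
    torch.onnx.export(model, (dummy_input,), "traced.onnx")

    # Scripting: torch.jit.script() compiles the module to a
    # ScriptModule up front; export() then converts it without
    # re-tracing.
    scripted = torch.jit.script(model)
    torch.onnx.export(scripted, (dummy_input,), "scripted.onnx")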
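
Second, the pitfall the combined "Avoid NumPy and built-in Python
types" section is about (the function names here are made up for
illustration): torch.Tensor.item(), float()/int(), and NumPy calls all
leave the graph, so under tracing the resulting value is baked into the
exported model as a constant.

    import torch

    def scale_bad(x):
        # .item() returns a Python float; a traced graph records it as
        # a fixed constant computed from the example input.
        return x / x.max().item()

    def scale_good(x):
        # Staying in Tensor ops keeps the value symbolic, so it is
        # recomputed for every input of the exported model.
        return x / x.max()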

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>

Differential Revision: [D29494912](https://our.internmc.facebook.com/intern/diff/D29494912)

[ghstack-poisoned]
BowenBao committed Jul 6, 2021
2 parents e290d2b + d56ea19 commit 560935f
Showing 346 changed files with 8,827 additions and 1,753 deletions.
11 changes: 0 additions & 11 deletions .circleci/cimodel/data/pytorch_build_data.py
@@ -80,17 +80,6 @@
                 ]),
             ]),
         ]),
-        ("gcc", [
-            ("9", [
-                ("3.8", [
-                    ("coverage", [
-                        (True, [
-                            ("shard_test", [XImportant(True)]),
-                        ]),
-                    ]),
-                ]),
-            ]),
-        ]),
         ("rocm", [
             ("3.9", [
                 ("3.6", [
61 changes: 3 additions & 58 deletions .circleci/config.yml
@@ -212,7 +212,7 @@ commands:
       cd ~/project
       export ANDROID_BUILD_TYPE="<< parameters.build_type >>"
       export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
-      python3 .circleci/scripts/upload_binary_size_to_scuba.py android
+      python3 tools/stats/upload_binary_size_to_scuba.py android
 ##############################################################################
 # Binary build (nightlies nightly build) defaults
@@ -547,7 +547,7 @@ jobs:
           cd /pytorch && export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
           python3 -mpip install requests && \
           SCRIBE_GRAPHQL_ACCESS_TOKEN=${SCRIBE_GRAPHQL_ACCESS_TOKEN} \
-          python3 .circleci/scripts/upload_binary_size_to_scuba.py || exit 0
+          python3 tools/stats/upload_binary_size_to_scuba.py || exit 0
       - store_artifacts:
           path: /home/circleci/project/dist
 
@@ -881,7 +881,7 @@ jobs:
           cd /pytorch && export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
           python3 -mpip install requests && \
           SCRIBE_GRAPHQL_ACCESS_TOKEN=${SCRIBE_GRAPHQL_ACCESS_TOKEN} \
-          python3 /pytorch/.circleci/scripts/upload_binary_size_to_scuba.py || exit 0
+          python3 /pytorch/tools/stats/upload_binary_size_to_scuba.py || exit 0
       - persist_to_workspace:
           root: /
           paths: final_pkgs
@@ -7164,26 +7164,6 @@ workflows:
           docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-cuda10.2-cudnn7-py3.9-gcc7"
           use_cuda_docker_runtime: "1"
           resource_class: gpu.medium
-      - pytorch_linux_build:
-          name: pytorch_linux_bionic_py3_8_gcc9_coverage_build
-          requires:
-            - "docker-pytorch-linux-bionic-py3.8-gcc9"
-          build_environment: "pytorch-linux-bionic-py3.8-gcc9-coverage-build"
-          docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.8-gcc9"
-      - pytorch_linux_test:
-          name: pytorch_linux_bionic_py3_8_gcc9_coverage_test1
-          requires:
-            - pytorch_linux_bionic_py3_8_gcc9_coverage_build
-          build_environment: "pytorch-linux-bionic-py3.8-gcc9-coverage-test1"
-          docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.8-gcc9"
-          resource_class: large
-      - pytorch_linux_test:
-          name: pytorch_linux_bionic_py3_8_gcc9_coverage_test2
-          requires:
-            - pytorch_linux_bionic_py3_8_gcc9_coverage_build
-          build_environment: "pytorch-linux-bionic-py3.8-gcc9-coverage-test2"
-          docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.8-gcc9"
-          resource_class: large
       - pytorch_linux_build:
           name: pytorch_linux_bionic_rocm3_9_py3_6_build
           requires:
@@ -9273,41 +9253,6 @@ workflows:
- "docker-pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7"
build_environment: "pytorch-libtorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7"
- pytorch_windows_build:
build_environment: pytorch-win-vs2019-cuda11-cudnn8-py3
cuda_version: "11.3"
name: periodic_pytorch_windows_cuda11.3_build
python_version: "3.8"
use_cuda: "1"
vc_product: BuildTools
vc_version: "14.28.29333"
vc_year: "2019"
- pytorch_windows_test:
build_environment: pytorch-win-vs2019-cuda11-cudnn8-py3
cuda_version: "11.3"
executor: windows-with-nvidia-gpu
name: periodic_pytorch_windows_cuda11.3_test1
python_version: "3.8"
requires:
- periodic_pytorch_windows_cuda11.3_build
test_name: pytorch-windows-test1
use_cuda: "1"
vc_product: BuildTools
vc_version: "14.28.29333"
vc_year: "2019"
- pytorch_windows_test:
build_environment: pytorch-win-vs2019-cuda11-cudnn8-py3
cuda_version: "11.3"
executor: windows-with-nvidia-gpu
name: periodic_pytorch_windows_cuda11.3_test2
python_version: "3.8"
requires:
- periodic_pytorch_windows_cuda11.3_build
test_name: pytorch-windows-test2
use_cuda: "1"
vc_product: BuildTools
vc_version: "14.28.29333"
vc_year: "2019"

# The following allows these jobs to run on ci-all and release branches
debuggable-scheduled-ci:
2 changes: 1 addition & 1 deletion .circleci/docker/common/install_rocm.sh
@@ -6,7 +6,7 @@ install_magma() {
# "install" hipMAGMA into /opt/rocm/magma by copying after build
git clone https://bitbucket.org/icl/magma.git
pushd magma
git checkout 878b1ce02e9cfe4a829be22c8f911e9c0b6bd88f
git checkout aed4e285084763113ce5757393d4008e27b5194b
cp make.inc-examples/make.inc.hip-gcc-mkl make.inc
echo 'LIBDIR += -L$(MKLROOT)/lib' >> make.inc
echo 'LIB += -Wl,--enable-new-dtags -Wl,--rpath,/opt/rocm/lib -Wl,--rpath,$(MKLROOT)/lib -Wl,--rpath,/opt/rocm/magma/lib' >> make.inc
6 changes: 3 additions & 3 deletions .circleci/scripts/binary_windows_build.sh
@@ -20,8 +20,8 @@ if [[ "${DESIRED_CUDA}" == "cu111" || "${DESIRED_CUDA}" == "cu113" ]]; then

echo "Free Space for CUDA DEBUG BUILD"
if [[ "$CIRCLECI" == 'true' ]]; then
if [[ -d "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Commnuity" ]]; then
rm -rf "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Commnuity"
if [[ -d "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community" ]]; then
rm -rf "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community"
fi

if [[ -d "C:\\Program Files (x86)\\Microsoft Visual Studio 14.0" ]]; then
@@ -67,7 +67,7 @@ if [[ "$CIRCLECI" == 'true' && -d "C:\\ProgramData\\Microsoft\\VisualStudio\\Pac
 fi
 
 if [[ "$CIRCLECI" == 'true' && -d "C:\\Microsoft" ]]; then
-  # don't use quota here
+  # don't use quotes here
   rm -rf /c/Microsoft/AndroidNDK*
 fi

2 changes: 1 addition & 1 deletion .circleci/verbatim-sources/commands.yml
@@ -171,4 +171,4 @@ commands:
       cd ~/project
       export ANDROID_BUILD_TYPE="<< parameters.build_type >>"
       export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
-      python3 .circleci/scripts/upload_binary_size_to_scuba.py android
+      python3 tools/stats/upload_binary_size_to_scuba.py android
2 changes: 1 addition & 1 deletion .circleci/verbatim-sources/job-specs/binary-job-specs.yml
@@ -29,7 +29,7 @@
           cd /pytorch && export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
           python3 -mpip install requests && \
           SCRIBE_GRAPHQL_ACCESS_TOKEN=${SCRIBE_GRAPHQL_ACCESS_TOKEN} \
-          python3 /pytorch/.circleci/scripts/upload_binary_size_to_scuba.py || exit 0
+          python3 /pytorch/tools/stats/upload_binary_size_to_scuba.py || exit 0
       - persist_to_workspace:
           root: /
           paths: final_pkgs
2 changes: 1 addition & 1 deletion .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
@@ -81,7 +81,7 @@ jobs:
           cd /pytorch && export COMMIT_TIME=$(git log --max-count=1 --format=%ct || echo 0)
           python3 -mpip install requests && \
           SCRIBE_GRAPHQL_ACCESS_TOKEN=${SCRIBE_GRAPHQL_ACCESS_TOKEN} \
-          python3 .circleci/scripts/upload_binary_size_to_scuba.py || exit 0
+          python3 tools/stats/upload_binary_size_to_scuba.py || exit 0
       - store_artifacts:
           path: /home/circleci/project/dist

35 changes: 0 additions & 35 deletions .circleci/verbatim-sources/workflows/workflows-scheduled-ci.yml
@@ -31,41 +31,6 @@
- "docker-pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7"
build_environment: "pytorch-libtorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7-build"
docker_image: "308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-cuda11.3-cudnn8-py3-gcc7"
- pytorch_windows_build:
build_environment: pytorch-win-vs2019-cuda11-cudnn8-py3
cuda_version: "11.3"
name: periodic_pytorch_windows_cuda11.3_build
python_version: "3.8"
use_cuda: "1"
vc_product: BuildTools
vc_version: "14.28.29333"
vc_year: "2019"
- pytorch_windows_test:
build_environment: pytorch-win-vs2019-cuda11-cudnn8-py3
cuda_version: "11.3"
executor: windows-with-nvidia-gpu
name: periodic_pytorch_windows_cuda11.3_test1
python_version: "3.8"
requires:
- periodic_pytorch_windows_cuda11.3_build
test_name: pytorch-windows-test1
use_cuda: "1"
vc_product: BuildTools
vc_version: "14.28.29333"
vc_year: "2019"
- pytorch_windows_test:
build_environment: pytorch-win-vs2019-cuda11-cudnn8-py3
cuda_version: "11.3"
executor: windows-with-nvidia-gpu
name: periodic_pytorch_windows_cuda11.3_test2
python_version: "3.8"
requires:
- periodic_pytorch_windows_cuda11.3_build
test_name: pytorch-windows-test2
use_cuda: "1"
vc_product: BuildTools
vc_version: "14.28.29333"
vc_year: "2019"

# The following allows these jobs to run on ci-all and release branches
debuggable-scheduled-ci:
35 changes: 27 additions & 8 deletions .github/scripts/generate_ci_workflows.py
@@ -1,20 +1,23 @@
 #!/usr/bin/env python3
 
 from pathlib import Path
-from typing import Any, Dict
+from typing import Any, Dict, Optional
 
 import jinja2
+from typing_extensions import Literal
 
 DOCKER_REGISTRY = "308535385114.dkr.ecr.us-east-1.amazonaws.com"
 
-GITHUB_DIR = Path(__file__).parent.parent
+GITHUB_DIR = Path(__file__).resolve().parent.parent
 
 
 # it would be nice to statically specify that build_environment must be
 # present, but currently Python has no easy way to do that
 # https://github.com/python/mypy/issues/4617
 PyTorchWorkflow = Dict[str, Any]
 
+YamlShellBool = Literal["''", 1]
+
 WINDOWS_CPU_TEST_RUNNER = "windows.4xlarge"
 WINDOWS_CUDA_TEST_RUNNER = "windows.8xlarge.nvidia.gpu"
 
@@ -27,13 +30,15 @@ def PyTorchWindowsWorkflow(
     on_pull_request: bool = False,
     only_build_on_pull_request: bool = False,
     num_test_shards: int = 1,
+    is_scheduled: Optional[str] = None,
 ) -> PyTorchWorkflow:
     return {
         "build_environment": build_environment,
         "test_runner_type": test_runner_type,
         "cuda_version": cuda_version,
         "on_pull_request": on_pull_request,
         "only_build_on_pull_request": only_build_on_pull_request and on_pull_request,
+        "is_scheduled": is_scheduled,
         "num_test_shards": num_test_shards,
     }
 
@@ -49,14 +54,18 @@ def PyTorchLinuxWorkflow(
     test_runner_type: str,
     on_pull_request: bool = False,
     enable_doc_jobs: bool = False,
+    enable_multigpu_test: YamlShellBool = "''",
     num_test_shards: int = 1,
+    is_scheduled: Optional[str] = None,
 ) -> PyTorchWorkflow:
     return {
         "build_environment": build_environment,
         "docker_image_base": docker_image_base,
         "test_runner_type": test_runner_type,
         "on_pull_request": on_pull_request,
+        "is_scheduled": is_scheduled,
         "enable_doc_jobs": enable_doc_jobs,
+        "enable_multigpu_test": enable_multigpu_test,
         "num_test_shards": num_test_shards,
     }
 
@@ -95,7 +104,14 @@ def generate_workflow_file(
cuda_version="11.1",
test_runner_type=WINDOWS_CUDA_TEST_RUNNER,
num_test_shards=2,
)
),
PyTorchWindowsWorkflow(
build_environment="periodic-pytorch-win-vs2019-cuda11-cudnn8-py3",
cuda_version="11.3",
test_runner_type=WINDOWS_CUDA_TEST_RUNNER,
num_test_shards=2,
is_scheduled="45 0,4,8,12,16,20 * * *",
),
]

LINUX_WORKFLOWS = [
@@ -147,6 +163,7 @@ def generate_workflow_file(
build_environment="pytorch-linux-xenial-cuda10.2-cudnn7-py3.6-gcc7",
docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-xenial-cuda10.2-cudnn7-py3-gcc7",
test_runner_type=LINUX_CUDA_TEST_RUNNER,
enable_multigpu_test=1,
num_test_shards=2,
),
PyTorchLinuxWorkflow(
@@ -175,11 +192,13 @@ def generate_workflow_file(
     #     docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-py3.6-clang9",
     #     test_runner_type=LINUX_CPU_TEST_RUNNER,
     # ),
-    # PyTorchLinuxWorkflow(
-    #     build_environment="pytorch-linux-bionic-py3.8-gcc9-coverage",
-    #     docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-py3.8-gcc9",
-    #     test_runner_type=LINUX_CPU_TEST_RUNNER,
-    # ),
+    PyTorchLinuxWorkflow(
+        build_environment="pytorch-linux-bionic-py3.8-gcc9-coverage",
+        docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-py3.8-gcc9",
+        test_runner_type=LINUX_CPU_TEST_RUNNER,
+        on_pull_request=True,
+        num_test_shards=2,
+    ),
     # PyTorchLinuxWorkflow(
     #     build_environment="pytorch-linux-bionic-rocm3.9-py3.6",
     #     docker_image_base=f"{DOCKER_REGISTRY}/pytorch/pytorch-linux-bionic-rocm3.9-py3.6",
45 changes: 35 additions & 10 deletions .github/scripts/generate_pytorch_test_matrix.py
@@ -9,22 +9,47 @@

 import json
 import os
-from typing import List
+from typing import Dict
 
+from typing_extensions import TypedDict
 
-NUM_TEST_SHARDS = int(os.getenv('NUM_TEST_SHARDS', '1'))
 
-def generate_sharding_list() -> List[int]:
-    return list(range(1, NUM_TEST_SHARDS + 1))
+class Config(TypedDict):
+    num_shards: int
+    runner: str
 
 
 def main() -> None:
-    print(json.dumps(
-        {
-            'test_config': generate_sharding_list()
-        },
-        sort_keys=True,
-    ))
+    TEST_RUNNER_TYPE = os.getenv('TEST_RUNNER_TYPE')
+    NUM_TEST_SHARDS = int(os.getenv('NUM_TEST_SHARDS', '1'))
+    MULTIGPU_RUNNER_TYPE = os.getenv('MULTIGPU_RUNNER_TYPE')
+    configs: Dict[str, Config] = {}
+    if MULTIGPU_RUNNER_TYPE is not None and os.getenv('ENABLE_MULTIGPU_TEST'):
+        configs['multigpu'] = {'num_shards': 1, 'runner': MULTIGPU_RUNNER_TYPE}
+    matrix = {
+        'include': [
+            {
+                'config': 'default',
+                'shard': shard,
+                'num_shards': NUM_TEST_SHARDS,
+                'runner': TEST_RUNNER_TYPE,
+            }
+            for shard in range(1, NUM_TEST_SHARDS + 1)
+        ] + [
+            {
+                'config': name,
+                'shard': shard,
+                'num_shards': config['num_shards'],
+                'runner': config['runner'],
+            }
+            for name, config in configs.items()
+            for shard in range(1, config['num_shards'] + 1)
+        ]
+    }
+    render_matrix = {'config': list(dict.fromkeys(x['config'] for x in matrix['include']))}
+    print(json.dumps({'matrix': matrix, 'render-matrix': render_matrix}, indent=2))
+    print(f'::set-output name=matrix::{json.dumps(matrix)}')
+    print(f'::set-output name=render-matrix::{json.dumps(render_matrix)}')
 
 
 if __name__ == "__main__":
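
For illustration, here is roughly what the rewritten script prints for
one hypothetical configuration (the runner labels below are
placeholders, not values from the real CI config):

    # TEST_RUNNER_TYPE=linux.2xlarge NUM_TEST_SHARDS=2 \
    # MULTIGPU_RUNNER_TYPE=linux.16xlarge ENABLE_MULTIGPU_TEST=1 \
    # python .github/scripts/generate_pytorch_test_matrix.py
    matrix = {
        'include': [
            {'config': 'default', 'shard': 1, 'num_shards': 2, 'runner': 'linux.2xlarge'},
            {'config': 'default', 'shard': 2, 'num_shards': 2, 'runner': 'linux.2xlarge'},
            # Present only because ENABLE_MULTIGPU_TEST is set and
            # MULTIGPU_RUNNER_TYPE is defined:
            {'config': 'multigpu', 'shard': 1, 'num_shards': 1, 'runner': 'linux.16xlarge'},
        ]
    }
    render_matrix = {'config': ['default', 'multigpu']}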
