Conversation

albanD (Collaborator) commented Feb 17, 2022

No description provided.

pytorch-bot (bot) commented Feb 17, 2022

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/albanD/pytorch/blob/799e70247de42ce93681bafb99510c4492764a3c/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default
Add ciflow labels to this PR to trigger more builds:

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
linux-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
linux-binary-manywheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk ✅ triggered
linux-bionic-rocm4.5-py3.7 ciflow/all, ciflow/default, ciflow/linux, ciflow/rocm, ciflow/trunk ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
macos-arm64-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
macos-arm64-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
macos-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
macos-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
macos-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
macos-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
windows-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
windows-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
docker-builds ciflow/all, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
pytorch-xla-linux-bionic-py3.7-clang8 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk, ciflow/xla 🚫 skipped

facebook-github-bot (Contributor) commented Feb 17, 2022

💊 CI failures summary and remediations

As of commit 009a049 (more details on the Dr. CI page):



🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build win-vs2019-cpu-py3 / build (1/2)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-02-18T17:17:42.5522850Z FAILED: bin/torch_cpu.dll lib/torch_cpu.lib
2022-02-18T17:14:34.7453176Z     BUILD_WHEEL
2022-02-18T17:14:34.7453351Z 
2022-02-18T17:14:34.7453507Z 
2022-02-18T17:14:34.7668927Z -- Build files have been written to: C:/actions-runner/_work/pytorch/pytorch/build
2022-02-18T17:14:35.8110295Z [1/75] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch.dir\__\empty.cpp.obj /Fdcaffe2\CMakeFiles\torch.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\build\empty.cpp
2022-02-18T17:14:35.8117658Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2022-02-18T17:14:35.8118262Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-02-18T17:14:35.8118603Z 
2022-02-18T17:14:35.9469948Z [2/75] cmd.exe /C "cd . && C:\Jenkins\Miniconda3\Library\bin\cmake.exe -E vs_link_dll --intdir=caffe2\CMakeFiles\torch_global_deps.dir --rc=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\rc.exe --mt=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\mt.exe --manifests  -- C:\PROGRA~2\MICROS~2\2019\BUILDT~1\VC\Tools\MSVC\1428~1.293\bin\Hostx64\x64\link.exe  caffe2\CMakeFiles\torch_global_deps.dir\__\torch\csrc\empty.c.obj  /out:bin\torch_global_deps.dll /implib:lib\torch_global_deps.lib /pdb:bin\torch_global_deps.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO  win_tmp\mkl\lib\mkl_intel_lp64.lib  win_tmp\mkl\lib\mkl_intel_thread.lib  win_tmp\mkl\lib\mkl_core.lib  win_tmp\mkl\lib\libiomp5md.lib  kernel32.lib user32.lib gdi32.lib winspool.lib shell32.lib ole32.lib oleaut32.lib uuid.lib comdlg32.lib advapi32.lib  && cd ."
2022-02-18T17:17:42.5520793Z [3/75] cmd.exe /C "cd . && C:\Jenkins\Miniconda3\Library\bin\cmake.exe -E vs_link_dll --intdir=caffe2\CMakeFiles\torch_cpu.dir --rc=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\rc.exe --mt=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\mt.exe --manifests  -- C:\PROGRA~2\MICROS~2\2019\BUILDT~1\VC\Tools\MSVC\1428~1.293\bin\Hostx64\x64\link.exe  @CMakeFiles\torch_cpu.rsp  /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO /NODEFAULTLIB:vcomp  -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/onnx.lib  && cd ."
2022-02-18T17:17:42.5522850Z FAILED: bin/torch_cpu.dll lib/torch_cpu.lib 
2022-02-18T17:17:42.5525063Z cmd.exe /C "cd . && C:\Jenkins\Miniconda3\Library\bin\cmake.exe -E vs_link_dll --intdir=caffe2\CMakeFiles\torch_cpu.dir --rc=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\rc.exe --mt=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\mt.exe --manifests  -- C:\PROGRA~2\MICROS~2\2019\BUILDT~1\VC\Tools\MSVC\1428~1.293\bin\Hostx64\x64\link.exe  @CMakeFiles\torch_cpu.rsp  /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO /NODEFAULTLIB:vcomp  -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/onnx.lib  && cd ."
2022-02-18T17:17:42.5529405Z LINK: command "C:\PROGRA~2\MICROS~2\2019\BUILDT~1\VC\Tools\MSVC\1428~1.293\bin\Hostx64\x64\link.exe @CMakeFiles\torch_cpu.rsp /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO /NODEFAULTLIB:vcomp -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/onnx.lib /MANIFEST /MANIFESTFILE:bin\torch_cpu.dll.manifest" failed (exit code 1120) with the following output:
2022-02-18T17:17:42.5532465Z Microsoft (R) Incremental Linker Version 14.28.29337.0
2022-02-18T17:17:42.5533275Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-02-18T17:17:42.5533766Z 
2022-02-18T17:17:42.5534514Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\AccumulateType.cpp.obj 
2022-02-18T17:17:42.5535652Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\BatchedFallback.cpp.obj 
2022-02-18T17:17:42.5536716Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\BatchedTensorImpl.cpp.obj 
2022-02-18T17:17:42.5537713Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\BatchingRegistrations.cpp.obj 
2022-02-18T17:17:42.5538969Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\CPUGeneratorImpl.cpp.obj 

See GitHub Actions build win-vs2019-cuda11.3-py3 / build (2/2)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-02-18T17:20:36.2866685Z FAILED: bin/torch_cpu.dll lib/torch_cpu.lib
2022-02-18T17:20:13.2856254Z MultiLabelMarginCriterion.cu
2022-02-18T17:20:17.9544906Z [52/694] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.3\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --Werror 
cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\MaxUnpooling.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\MaxUnpooling.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\MaxUnpooling.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\MaxUnpooling.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-02-18T17:20:17.9559960Z MaxUnpooling.cu
2022-02-18T17:20:20.1053480Z [53/694] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.3\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --Werror 
cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\Loss.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\Loss.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\Loss.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\Loss.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-02-18T17:20:20.1066778Z Loss.cu
2022-02-18T17:20:24.0774369Z [54/694] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.3\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --Werror 
cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\Indexing.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\Indexing.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\Indexing.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\Indexing.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-02-18T17:20:24.0791206Z Indexing.cu
2022-02-18T17:20:33.9496332Z [55/694] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\randomtemp.exe C:/actions-runner/_work/pytorch/pytorch/build/win_tmp\bin\sccache.exe C:\PROGRA~1\NVIDIA~2\CUDA\v11.3\bin\nvcc.exe -forward-unknown-to-host-compiler -DAT_PER_OPERATOR_HEADERS -DBUILD_SPLIT_CUDA -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTORCH_CUDA_CU_BUILD_MAIN_LIB -DUSE_C10D_GLOO -DUSE_CUDA -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cuda_cu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\THC -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\cuda -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\cuda\..\.. -IC:\actions-runner\_work\pytorch\pytorch\c10\.. -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -isystem=C:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -isystem=C:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -isystem="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\third_party\oneDNN\include -isystem=C:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -isystem="C:\Program Files\NVIDIA Corporation\NvToolsExt\include" -isystem=C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -Xcompiler /w -w -Xfatbin -compress-all -DONNX_NAMESPACE=onnx_torch --use-local-env -gencode arch=compute_70,code=sm_70 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --Werror 
cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda  -Xcompiler=/wd4819,/wd4503,/wd4190,/wd4244,/wd4251,/wd4275,/wd4522 -Wno-deprecated-gpu-targets --expt-extended-lambda -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -Xcompiler="-MD -O2 -Ob2" -DNDEBUG -Xcompiler /MD -DCAFFE2_USE_GLOO -DTH_HAVE_THREAD -Xcompiler= -DTORCH_CUDA_CU_BUILD_MAIN_LIB -std=c++14 -MD -MT caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\MultiMarginLoss.cu.obj -MF caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\MultiMarginLoss.cu.obj.d -x cu -c C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\native\cuda\MultiMarginLoss.cu -o caffe2\CMakeFiles\torch_cuda_cu.dir\__\aten\src\ATen\native\cuda\MultiMarginLoss.cu.obj -Xcompiler=-Fdcaffe2\CMakeFiles\torch_cuda_cu.dir\,-FS
2022-02-18T17:20:33.9513501Z MultiMarginLoss.cu
2022-02-18T17:20:36.2862838Z [56/694] cmd.exe /C "cd . && C:\Jenkins\Miniconda3\Library\bin\cmake.exe -E vs_link_dll --intdir=caffe2\CMakeFiles\torch_cpu.dir --rc=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\rc.exe --mt=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\mt.exe --manifests  -- C:\PROGRA~2\MICROS~2\2019\BUILDT~1\VC\Tools\MSVC\1428~1.293\bin\Hostx64\x64\link.exe  @CMakeFiles\torch_cpu.rsp  /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO /NODEFAULTLIB:vcomp  -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/onnx.lib  && cd ."
2022-02-18T17:20:36.2866685Z FAILED: bin/torch_cpu.dll lib/torch_cpu.lib 
2022-02-18T17:20:36.2869973Z cmd.exe /C "cd . && C:\Jenkins\Miniconda3\Library\bin\cmake.exe -E vs_link_dll --intdir=caffe2\CMakeFiles\torch_cpu.dir --rc=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\rc.exe --mt=C:\PROGRA~2\WI3CF2~1\10\bin\100190~1.0\x64\mt.exe --manifests  -- C:\PROGRA~2\MICROS~2\2019\BUILDT~1\VC\Tools\MSVC\1428~1.293\bin\Hostx64\x64\link.exe  @CMakeFiles\torch_cpu.rsp  /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO /NODEFAULTLIB:vcomp  -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/onnx.lib  && cd ."
2022-02-18T17:20:36.2875014Z LINK: command "C:\PROGRA~2\MICROS~2\2019\BUILDT~1\VC\Tools\MSVC\1428~1.293\bin\Hostx64\x64\link.exe @CMakeFiles\torch_cpu.rsp /out:bin\torch_cpu.dll /implib:lib\torch_cpu.lib /pdb:bin\torch_cpu.pdb /dll /version:0.0 /machine:x64 /ignore:4049 /ignore:4217 /ignore:4099 /INCREMENTAL:NO /NODEFAULTLIB:vcomp -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/caffe2_protos.lib -WHOLEARCHIVE:C:/actions-runner/_work/pytorch/pytorch/build/lib/onnx.lib /MANIFEST /MANIFESTFILE:bin\torch_cpu.dll.manifest" failed (exit code 1120) with the following output:
2022-02-18T17:20:36.2877731Z Microsoft (R) Incremental Linker Version 14.28.29337.0
2022-02-18T17:20:36.2878325Z Copyright (C) Microsoft Corporation.  All rights reserved.
2022-02-18T17:20:36.2878677Z 
2022-02-18T17:20:36.2879284Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\AccumulateType.cpp.obj 
2022-02-18T17:20:36.2880254Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\BatchedFallback.cpp.obj 
2022-02-18T17:20:36.2881059Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\BatchedTensorImpl.cpp.obj 
2022-02-18T17:20:36.2881939Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\BatchingRegistrations.cpp.obj 
2022-02-18T17:20:36.2882810Z caffe2\CMakeFiles\torch_cpu.dir\__\aten\src\ATen\CPUGeneratorImpl.cpp.obj 

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

albanD requested review from bdhirsh and zou3519 on February 17, 2022 at 22:48
facebook-github-bot (Contributor):

@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

zou3519 (Contributor) left a comment:

Code looks pretty reasonable; I don't understand how the test case tests this, though.

Contributor:

What is a "Direct Subclass" and who shouldn't do that?

Collaborator Author:

This is a direct subclass in the sense that the Tensor we send to the backend and the subclass are one and the same.
It is OK to do, but it is not composable and is tricky in general. Since a lot of people use these tests as examples, I preferred to warn here that this is not a good example to copy.
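
For readers who land here from the tests, here is a rough, hypothetical sketch of the two styles being contrasted (illustrative only, not the code from this PR; the class names and helpers are made up): a "direct" subclass, whose instance itself owns the storage the backend sees, versus a wrapper subclass, which holds a plain inner tensor and composes better with other subclasses.

    import torch
    from torch.utils._pytree import tree_map

    class DirectSubclass(torch.Tensor):
        # "Direct": the instance itself carries the storage, so the tensor the
        # backend sees and the subclass are backed by the same data.
        @classmethod
        def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
            kwargs = kwargs or {}
            # Re-view subclass args as plain Tensors (same storage) before
            # calling the op, so this handler is not re-entered.
            unwrap = lambda t: t.as_subclass(torch.Tensor) if isinstance(t, DirectSubclass) else t
            return func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))

    class WrapperSubclass(torch.Tensor):
        # Wrapper: holds a separate inner tensor; every op unwraps the inner
        # tensors, runs, and rewraps the outputs.
        @staticmethod
        def __new__(cls, elem):
            return torch.Tensor._make_wrapper_subclass(
                cls, elem.size(), dtype=elem.dtype, device=elem.device,
                requires_grad=elem.requires_grad)

        def __init__(self, elem):
            self.elem = elem

        @classmethod
        def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
            kwargs = kwargs or {}
            unwrap = lambda t: t.elem if isinstance(t, WrapperSubclass) else t
            wrap = lambda t: WrapperSubclass(t) if isinstance(t, torch.Tensor) else t
            out = func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))
            return tree_map(wrap, out)

    x = torch.randn(3).as_subclass(DirectSubclass)  # direct: same storage, new Python type
    y = WrapperSubclass(torch.randn(3))             # wrapper: new object holding a plain tensor

In the direct case there is only one tensor, which is exactly why it is simple but hard to compose; in the wrapper case the backend only ever sees the plain inner tensor.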

Contributor:

^ You should probably put that into the comment in the code; as a user I would be even more confused haha

Contributor:

related question: does the bug with the non-stack-based version only show up when you do a "Direct Subclass" (and not show up when you do a wrapper subclass)?

Collaborator Author:

The direct subclass is the simplest repro I could come up with.
@Chillee managed to hit the same problem with some of the functorch tests. In that case, while going through a clamp_min_ op, an extra clone was done before the function itself was executed. So I guess functionalization was involved? But I don't know how to enable that easily in a small repro. Hence the use of a direct subclass and the conjugate fallback here.
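
As background on why the conjugate fallback is a handy trigger here: conjugation is recorded lazily as a dispatch bit, and when an op cannot handle a conjugated input natively, the fallback materializes it, which is where the internal at::clone call quoted below comes from. A minimal illustration of the lazy bit (not this PR's test):

    import torch

    x = torch.randn(3, dtype=torch.complex64)
    y = x.conj()           # lazy: only the conjugate dispatch bit is flipped
    print(y.is_conj())     # True, and no data has been copied yet
    z = y.resolve_conj()   # materializes the conjugation into real data
    print(z.is_conj())     # False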

facebook-github-bot (Contributor):

@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

Contributor:

Potentially a stupid question, but why does call_clone go to PythonTLSSnapshot? I would have expected the redispatch from Conjugate to go into the Python dispatch key.

Collaborator Author:

This is not a redispatch; it is a call. Namely:

auto resolved_tensor = at::clone(tensor);

Contributor:

(I think that's because the conjugate fallback ends up calling at::clone(), which enters the dispatcher through Dispatcher::call(). PythonTLSSnapshot is set up to always run first every time you call Dispatcher::call())

Contributor:

Is the PythonTLSSnapshot key in the local exclude key set at this point? If so, I would expect the at::clone call to skip past PythonTLSSnapshot, AutogradCPU, and Conjugate.

Contributor:

Oh, I see, you don't use ExcludeDispatchKeyGuards to do the redispatch...

Collaborator Author:

Redispatch here means actually using redispatch, as opposed to clone.
ExcludeDispatchKeyGuard is not used anymore except by autograd.

Contributor:

Ugh, does this mean that on the at::clone call, AutogradCPU doesn't get hit? AutogradCPU is in the local exclude set (because VariableType::blah both uses an exclude key guard AND at::redispatch).

Collaborator Author:

That is correct! My bad!

Contributor:

Thanks for the detailed comment and explanations; I understand what's going on now. I agree that this test now exercises the code above.

On an orthogonal note, I'm a bit concerned that AutogradCPU doesn't get hit, but maybe that's not a problem, because that's how things work even without the Python keys.

Collaborator Author:

That's what happens when your key is "below" autograd: autograd is already handled above, so you don't need to care about it here :)
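
A small, hypothetical sketch of what sitting "below" autograd means in practice (names made up, not this PR's test): by the time an op reaches a __torch_dispatch__ subclass, the autograd layer above has already recorded the graph, so the handler can just forward the call and gradients still flow.

    import torch
    from torch.utils._pytree import tree_map

    class BelowAutogradLogger(torch.Tensor):
        # Logs every op that reaches the Python dispatch key; autograd sits
        # above this key, so nothing autograd-specific is done here.
        @classmethod
        def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
            kwargs = kwargs or {}
            print("saw:", func)
            unwrap = lambda t: t.as_subclass(torch.Tensor) if isinstance(t, BelowAutogradLogger) else t
            return func(*tree_map(unwrap, args), **tree_map(unwrap, kwargs))

    x = torch.Tensor._make_subclass(BelowAutogradLogger, torch.randn(3), True)  # requires_grad=True
    y = (x * 2).sum()
    y.backward()       # the graph was recorded by autograd, above the Python key
    print(x.grad)      # gradient of 2s, even though the subclass never touched autograd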

facebook-github-bot (Contributor):

@albanD has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

facebook-github-bot pushed a commit that referenced this pull request Feb 22, 2022
…73045)

Summary: Pull Request resolved: #73045

Reviewed By: zou3519

Differential Revision: D34318185

Pulled By: albanD

fbshipit-source-id: abc30fe69176ba474e28bb045406a410e17cfd79
github-actions (Contributor):

Hey @albanD.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

malfet (Contributor) commented Feb 22, 2022

Reverted, as it clearly broke OSS CI

facebook-github-bot (Contributor):

This pull request has been reverted by 9a96604. To re-land this change, please open another pull request, assign the same reviewers, fix the CI failures that caused the revert, and make sure that the failing CI runs on the PR by applying the proper ciflow label (e.g., ciflow/trunk).

albanD mentioned this pull request Feb 22, 2022

facebook-github-bot (Contributor):

This pull request has been reverted by 9a96604. To re-land this change, please open another pull request, assign the same reviewers, fix the CI failures that caused the revert, and make sure that the failing CI runs on the PR by applying the proper ciflow label (e.g., ciflow/trunk).

facebook-github-bot pushed a commit that referenced this pull request Feb 23, 2022
Summary:
Reland of #73045

Tweak class visibility to avoid windows linking issues.

Pull Request resolved: #73231

Reviewed By: bdhirsh

Differential Revision: D34402767

Pulled By: albanD

fbshipit-source-id: 50aaadf5389ca516fa6a5034d42eee56abe3c7f7
pytorchmergebot pushed a commit that referenced this pull request Feb 23, 2022
Summary:
Reland of #73045

Tweak class visibility to avoid windows linking issues.

Pull Request resolved: #73231

Reviewed By: bdhirsh

Differential Revision: D34402767

Pulled By: albanD

fbshipit-source-id: 50aaadf5389ca516fa6a5034d42eee56abe3c7f7
(cherry picked from commit 0fe53bd)
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 3, 2022
…73045)

Summary: Pull Request resolved: pytorch/pytorch#73045

Reviewed By: zou3519

Differential Revision: D34318185

Pulled By: albanD

fbshipit-source-id: abc30fe69176ba474e28bb045406a410e17cfd79
(cherry picked from commit 4d9a305d3a2688e0d6264193f5dd692a2d44aedb)
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 3, 2022
Summary:
Reland of pytorch/pytorch#73045

Tweak class visibility to avoid windows linking issues.

Pull Request resolved: pytorch/pytorch#73231

Reviewed By: bdhirsh

Differential Revision: D34402767

Pulled By: albanD

fbshipit-source-id: 50aaadf5389ca516fa6a5034d42eee56abe3c7f7
(cherry picked from commit 0fe53bdfb770f8e39eb3017f98e1d210f85eb78b)
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 3, 2022
…73045)

Summary: Pull Request resolved: pytorch/pytorch#73045

Reviewed By: zou3519

Differential Revision: D34318185

Pulled By: albanD

fbshipit-source-id: abc30fe69176ba474e28bb045406a410e17cfd79
(cherry picked from commit 4d9a305d3a2688e0d6264193f5dd692a2d44aedb)
cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 3, 2022
Summary:
Reland of pytorch/pytorch#73045

Tweak class visibility to avoid windows linking issues.

Pull Request resolved: pytorch/pytorch#73231

Reviewed By: bdhirsh

Differential Revision: D34402767

Pulled By: albanD

fbshipit-source-id: 50aaadf5389ca516fa6a5034d42eee56abe3c7f7
(cherry picked from commit 0fe53bdfb770f8e39eb3017f98e1d210f85eb78b)