
[JIT][WIP] memorization memory planning #63873

Closed
wants to merge 30 commits

Conversation

@makslevental (Contributor) commented Aug 24, 2021

Stack from ghstack:

This PR extends the memory planning strategies to support memory allocations and frees collected by the MemoryTracingAllocator (which follows the pattern from kineto). The resulting plans can then be deployed with the MemoryPlanningAllocator, in combination with prim::PreAllocateTensor ops inserted into the graph, which hand out slices of the initially allocated region.
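For intuition, here is a minimal, self-contained sketch of the deployment idea: hand out fixed (offset, size) slices of one up-front allocation instead of doing a malloc/free per tensor. All names and types below are hypothetical illustrations, not the actual MemoryPlanningAllocator or prim::PreAllocateTensor implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// A plan produced offline from traced allocations/frees: each managed tensor
// gets a fixed slice (offset, size) inside one large slab.
struct PlannedAllocation {
  std::size_t offset; // byte offset into the slab chosen by the memory plan
  std::size_t size;   // bytes reserved for this tensor
};

class SlabAllocator {
 public:
  explicit SlabAllocator(std::size_t total_bytes) : slab_(total_bytes) {}

  // Return the pre-planned slice; no per-tensor allocation happens here.
  void* allocate(const PlannedAllocation& plan) {
    assert(plan.offset + plan.size <= slab_.size());
    return slab_.data() + plan.offset;
  }

  // Frees are no-ops: the whole slab is released when the allocator dies.
  void deallocate(void* /*ptr*/) {}

 private:
  std::vector<std::uint8_t> slab_;
};

int main() {
  // Made-up plan; in the PR this would come from the traced memory events.
  std::vector<PlannedAllocation> plan = {{0, 1024}, {1024, 512}, {1536, 2048}};
  SlabAllocator alloc(4096);
  for (const auto& p : plan) {
    void* buf = alloc.allocate(p); // analogous to a pre-allocate op in the graph
    (void)buf;
  }
  return 0;
}
```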

Differential Revision: D30769097

@facebook-github-bot (Contributor) commented Aug 24, 2021

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 20114a8 (more details on the Dr. CI page):


  • 23/23 failures possibly* introduced in this PR
    • 1/23 non-scanned failure(s)

🕵️ 19 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build win-vs2019-cuda11.3-py3 / build (1/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:39:32.6980511Z C:\actions-runner\...eWithFirstGap': function does not take 1 arguments
2021-11-03T19:39:32.6956825Z             _Ty=size_t,
2021-11-03T19:39:32.6957787Z             _Pr=torch::jit::liveRangeStartCmp,
2021-11-03T19:39:32.6959173Z             _Alloc=std::allocator<std::pair<const torch::jit::UniqueLiveRange,size_t>>
2021-11-03T19:39:32.6960274Z         ]
2021-11-03T19:39:32.6962099Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning.cpp(585): error C2660: 'torch::jit::greedyBySizeWithSmallestGap': function does not take 1 arguments
2021-11-03T19:39:32.6964966Z C:\actions-runner\_work\pytorch\pytorch\torch/csrc/jit/passes/memory_planning/greedy_by_size.h(10): note: see declaration of 'torch::jit::greedyBySizeWithSmallestGap'
2021-11-03T19:39:32.6967799Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning.cpp(589): error C2660: 'torch::jit::greedyBySizeWithFirstGap': function does not take 1 arguments
2021-11-03T19:39:32.6970650Z C:\actions-runner\_work\pytorch\pytorch\torch/csrc/jit/passes/memory_planning/greedy_by_size.h(14): note: see declaration of 'torch::jit::greedyBySizeWithFirstGap'
2021-11-03T19:39:32.6973726Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning.cpp(593): error C2660: 'torch::jit::greedyByLongestAndSizeWithSmallestGap': function does not take 1 arguments
2021-11-03T19:39:32.6977126Z C:\actions-runner\_work\pytorch\pytorch\torch/csrc/jit/passes/memory_planning/greedy_by_size.h(22): note: see declaration of 'torch::jit::greedyByLongestAndSizeWithSmallestGap'
2021-11-03T19:39:32.6980511Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning.cpp(597): error C2660: 'torch::jit::greedyByLongestAndSizeWithFirstGap': function does not take 1 arguments
2021-11-03T19:39:32.6983711Z C:\actions-runner\_work\pytorch\pytorch\torch/csrc/jit/passes/memory_planning/greedy_by_size.h(18): note: see declaration of 'torch::jit::greedyByLongestAndSizeWithFirstGap'
2021-11-03T19:39:32.6985911Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-11-03T19:39:32.6987199Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-11-03T19:39:32.6988071Z 
2021-11-03T19:39:35.0964177Z [3738/4778] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DIDEEP_USE_MKL -DMAGMA_V2 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cpu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party\breakpad\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc -IC:\actions-runner\_work\pytorch\pytorch\third_party\miniz-2.0.8 -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\src -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\third_party\pthreadpool\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\cpuinfo\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\third_party\asmjit\src -IC:\actions-runner\_work\pytorch\pytorch\third_party\FP16\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fmt\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\third_party -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\pybind11\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2 /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DCAFFE2_BUILD_MAIN_LIB -DONNX_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\passes\lower_tuples.cpp.obj /Fdcaffe2\CMakeF
2021-11-03T19:39:35.2883410Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-11-03T19:39:35.4262719Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-11-03T19:39:35.5537003Z 
2021-11-03T19:39:36.3476863Z [3739/4778] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DIDEEP_USE_MKL -DMAGMA_V2 -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cpu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\cudnn_frontend\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party\breakpad\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc -IC:\actions-runner\_work\pytorch\pytorch\third_party\miniz-2.0.8 -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\src -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\third_party\pthreadpool\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\cpuinfo\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\third_party\asmjit\src -IC:\actions-runner\_work\pytorch\pytorch\third_party\FP16\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fmt\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\third_party -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\pybind11\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\include" -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\magma\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2 /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DCUDA_HAS_FP16=1 -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DCAFFE2_BUILD_MAIN_LIB -DONNX_BUILD_MAIN_LIB -std:c++14 /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\passes\lower_grad_of.cpp.obj /Fdcaffe2\CMake
2021-11-03T19:39:36.5451879Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64

See GitHub Actions build linux-xenial-py3.6-clang7-onnx / build (2/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:10:46.5066508Z  echo "ERR...t available for the merge-base of your branch"
2021-11-03T19:10:46.5060334Z fi
2021-11-03T19:10:46.5060779Z # Covers the case where a previous tag doesn't exist for the tree
2021-11-03T19:10:46.5061887Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2021-11-03T19:10:46.5062566Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2021-11-03T19:10:46.5063270Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2021-11-03T19:10:46.5063845Z   exit 1
2021-11-03T19:10:46.5064118Z fi
2021-11-03T19:10:46.5064554Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2021-11-03T19:10:46.5065223Z # If no image exists but the hash is the same as the previous hash then we should error out here
2021-11-03T19:10:46.5065832Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2021-11-03T19:10:46.5066508Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2021-11-03T19:10:46.5067232Z   echo "       contact the PyTorch team to restore the original images"
2021-11-03T19:10:46.5067675Z   exit 1
2021-11-03T19:10:46.5067948Z fi
2021-11-03T19:10:46.5068341Z echo ::set-output name=rebuild::yes
2021-11-03T19:10:46.5079290Z shell: /usr/bin/bash -e {0}
2021-11-03T19:10:46.5079616Z env:
2021-11-03T19:10:46.5080127Z   BUILD_ENVIRONMENT: linux-xenial-py3.6-clang7-onnx
2021-11-03T19:10:46.5081160Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang7-onnx
2021-11-03T19:10:46.5082219Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2021-11-03T19:10:46.5083115Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-xenial-py3.6-clang7-asan / build (3/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:12:05.2588778Z  echo "ERR...t available for the merge-base of your branch"
2021-11-03T19:12:05.2582135Z fi
2021-11-03T19:12:05.2582634Z # Covers the case where a previous tag doesn't exist for the tree
2021-11-03T19:12:05.2583414Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2021-11-03T19:12:05.2584163Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2021-11-03T19:12:05.2584939Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2021-11-03T19:12:05.2585790Z   exit 1
2021-11-03T19:12:05.2586118Z fi
2021-11-03T19:12:05.2586610Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2021-11-03T19:12:05.2587356Z # If no image exists but the hash is the same as the previous hash then we should error out here
2021-11-03T19:12:05.2588030Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2021-11-03T19:12:05.2588778Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2021-11-03T19:12:05.2589574Z   echo "       contact the PyTorch team to restore the original images"
2021-11-03T19:12:05.2590076Z   exit 1
2021-11-03T19:12:05.2590401Z fi
2021-11-03T19:12:05.2590806Z echo ::set-output name=rebuild::yes
2021-11-03T19:12:05.2600291Z shell: /usr/bin/bash -e {0}
2021-11-03T19:12:05.2600644Z env:
2021-11-03T19:12:05.2601217Z   BUILD_ENVIRONMENT: linux-xenial-py3.6-clang7-asan
2021-11-03T19:12:05.2602364Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang7-asan
2021-11-03T19:12:05.2603541Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2021-11-03T19:12:05.2604515Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-xenial-py3.6-gcc5.4 / build (4/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:11:02.1860827Z  echo "ERR...t available for the merge-base of your branch"
2021-11-03T19:11:02.1855260Z fi
2021-11-03T19:11:02.1855688Z # Covers the case where a previous tag doesn't exist for the tree
2021-11-03T19:11:02.1856366Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2021-11-03T19:11:02.1857015Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2021-11-03T19:11:02.1857695Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2021-11-03T19:11:02.1858257Z   exit 1
2021-11-03T19:11:02.1858527Z fi
2021-11-03T19:11:02.1858950Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2021-11-03T19:11:02.1859602Z # If no image exists but the hash is the same as the previous hash then we should error out here
2021-11-03T19:11:02.1860188Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2021-11-03T19:11:02.1860827Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2021-11-03T19:11:02.1861531Z   echo "       contact the PyTorch team to restore the original images"
2021-11-03T19:11:02.1861964Z   exit 1
2021-11-03T19:11:02.1862234Z fi
2021-11-03T19:11:02.1862577Z echo ::set-output name=rebuild::yes
2021-11-03T19:11:02.1874869Z shell: /usr/bin/bash -e {0}
2021-11-03T19:11:02.1875186Z env:
2021-11-03T19:11:02.1875637Z   BUILD_ENVIRONMENT: linux-xenial-py3.6-gcc5.4
2021-11-03T19:11:02.1876551Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.6-gcc5.4
2021-11-03T19:11:02.1877538Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2021-11-03T19:11:02.1878418Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-xenial-py3.6-gcc7 / build (5/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:11:34.6748601Z  echo "ERR...t available for the merge-base of your branch"
2021-11-03T19:11:34.6742659Z fi
2021-11-03T19:11:34.6743106Z # Covers the case where a previous tag doesn't exist for the tree
2021-11-03T19:11:34.6743807Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2021-11-03T19:11:34.6744576Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2021-11-03T19:11:34.6745305Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2021-11-03T19:11:34.6745893Z   exit 1
2021-11-03T19:11:34.6746167Z fi
2021-11-03T19:11:34.6746630Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2021-11-03T19:11:34.6747315Z # If no image exists but the hash is the same as the previous hash then we should error out here
2021-11-03T19:11:34.6747917Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2021-11-03T19:11:34.6748601Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2021-11-03T19:11:34.6749336Z   echo "       contact the PyTorch team to restore the original images"
2021-11-03T19:11:34.6749780Z   exit 1
2021-11-03T19:11:34.6750065Z fi
2021-11-03T19:11:34.6750440Z echo ::set-output name=rebuild::yes
2021-11-03T19:11:34.6763089Z shell: /usr/bin/bash -e {0}
2021-11-03T19:11:34.6763420Z env:
2021-11-03T19:11:34.6763873Z   BUILD_ENVIRONMENT: linux-xenial-py3.6-gcc7
2021-11-03T19:11:34.6764790Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3.6-gcc7
2021-11-03T19:11:34.6765806Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2021-11-03T19:11:34.6766820Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-xenial-py3.6-gcc7-bazel-test / build-and-test (6/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:17:00.4558474Z torch/csrc/jit/pas...ch::jit::UniqueLiveRange, long unsigned int> > >}'
2021-11-03T19:17:00.4531933Z ./torch/csrc/jit/passes/memory_planning/greedy_by_size.h:14:28: note: in passing argument 1 of 'std::vector<torch::jit::MemAllocation> torch::jit::greedyBySizeWithFirstGap(const LivenessMap&, torch::jit::SortedLiveRangeMap<long unsigned int>&)'
2021-11-03T19:17:00.4533981Z  std::vector<MemAllocation> greedyBySizeWithFirstGap(
2021-11-03T19:17:00.4534844Z                             ^~~~~~~~~~~~~~~~~~~~~~~~
2021-11-03T19:17:00.4539635Z torch/csrc/jit/passes/memory_planning.cpp:593:78: error: invalid initialization of reference of type 'const LivenessMap& {aka const std::unordered_map<const torch::jit::Value*, std::unordered_set<const torch::jit::Value*>, std::hash<const torch::jit::Value*>, std::equal_to<const torch::jit::Value*>, std::allocator<std::pair<const torch::jit::Value* const, std::unordered_set<const torch::jit::Value*> > > >&}' from expression of type 'torch::jit::SortedLiveRangeMap<long unsigned int> {aka std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >}'
2021-11-03T19:17:00.4543969Z        allocations = greedyByLongestAndSizeWithSmallestGap(managed_live_ranges);
2021-11-03T19:17:00.4545320Z                                                                               ^
2021-11-03T19:17:00.4546178Z In file included from torch/csrc/jit/passes/memory_planning.cpp:3:0:
2021-11-03T19:17:00.4549451Z ./torch/csrc/jit/passes/memory_planning/greedy_by_size.h:22:28: note: in passing argument 1 of 'std::vector<torch::jit::MemAllocation> torch::jit::greedyByLongestAndSizeWithSmallestGap(const LivenessMap&, torch::jit::SortedLiveRangeMap<long unsigned int>&)'
2021-11-03T19:17:00.4552332Z  std::vector<MemAllocation> greedyByLongestAndSizeWithSmallestGap(
2021-11-03T19:17:00.4553555Z                             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2021-11-03T19:17:00.4558474Z torch/csrc/jit/passes/memory_planning.cpp:597:75: error: invalid initialization of reference of type 'const LivenessMap& {aka const std::unordered_map<const torch::jit::Value*, std::unordered_set<const torch::jit::Value*>, std::hash<const torch::jit::Value*>, std::equal_to<const torch::jit::Value*>, std::allocator<std::pair<const torch::jit::Value* const, std::unordered_set<const torch::jit::Value*> > > >&}' from expression of type 'torch::jit::SortedLiveRangeMap<long unsigned int> {aka std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >}'
2021-11-03T19:17:00.4562671Z        allocations = greedyByLongestAndSizeWithFirstGap(managed_live_ranges);
2021-11-03T19:17:00.4563974Z                                                                            ^
2021-11-03T19:17:00.4564778Z In file included from torch/csrc/jit/passes/memory_planning.cpp:3:0:
2021-11-03T19:17:00.4688595Z ./torch/csrc/jit/passes/memory_planning/greedy_by_size.h:18:28: note: in passing argument 1 of 'std::vector<torch::jit::MemAllocation> torch::jit::greedyByLongestAndSizeWithFirstGap(const LivenessMap&, torch::jit::SortedLiveRangeMap<long unsigned int>&)'
2021-11-03T19:17:00.4691174Z  std::vector<MemAllocation> greedyByLongestAndSizeWithFirstGap(
2021-11-03T19:17:00.4692349Z                             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2021-11-03T19:17:00.5968567Z Target //:torch failed to build
2021-11-03T19:17:00.6008811Z Use --verbose_failures to see the command lines of failed build steps.
2021-11-03T19:17:00.6756550Z INFO: Elapsed time: 234.811s, Critical Path: 38.19s
2021-11-03T19:17:00.6771725Z INFO: 774 processes: 61 internal, 713 processwrapper-sandbox.

See GitHub Actions build linux-bionic-py3.6-clang9 / build (7/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:11:12.1247880Z  echo "ERR...t available for the merge-base of your branch"
2021-11-03T19:11:12.1239323Z fi
2021-11-03T19:11:12.1239743Z # Covers the case where a previous tag doesn't exist for the tree
2021-11-03T19:11:12.1240419Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2021-11-03T19:11:12.1241132Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2021-11-03T19:11:12.1241802Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2021-11-03T19:11:12.1242347Z   exit 1
2021-11-03T19:11:12.1242613Z fi
2021-11-03T19:11:12.1243034Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2021-11-03T19:11:12.1246658Z # If no image exists but the hash is the same as the previous hash then we should error out here
2021-11-03T19:11:12.1247235Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2021-11-03T19:11:12.1247880Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2021-11-03T19:11:12.1248564Z   echo "       contact the PyTorch team to restore the original images"
2021-11-03T19:11:12.1248990Z   exit 1
2021-11-03T19:11:12.1249257Z fi
2021-11-03T19:11:12.1249596Z echo ::set-output name=rebuild::yes
2021-11-03T19:11:12.1259534Z shell: /usr/bin/bash -e {0}
2021-11-03T19:11:12.1259835Z env:
2021-11-03T19:11:12.1260275Z   BUILD_ENVIRONMENT: linux-bionic-py3.6-clang9
2021-11-03T19:11:12.1261169Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.6-clang9
2021-11-03T19:11:12.1262140Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2021-11-03T19:11:12.1263001Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build win-vs2019-cpu-py3 / build (8/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:43:17.9303552Z C:\actions-runner\...81): error C3861: 'greedyBy': identifier not found
2021-11-03T19:43:17.9281110Z cl -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cpu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party\breakpad\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc -IC:\actions-runner\_work\pytorch\pytorch\third_party\miniz-2.0.8 -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\src -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\third_party\pthreadpool\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\cpuinfo\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\third_party\asmjit\src -IC:\actions-runner\_work\pytorch\pytorch\third_party\FP16\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fmt\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\third_party -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\pybind11\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2 /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DCAFFE2_BUILD_MAIN_LIB -DONNX_BUILD_MAIN_LIB -std:c++14
2021-11-03T19:43:17.9294071Z 
2021-11-03T19:43:17.9294745Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(37): error C2061: syntax error: identifier 'pair'
2021-11-03T19:43:17.9295787Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(41): error C2065: 'Cmp': undeclared identifier
2021-11-03T19:43:17.9296939Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(41): error C2146: syntax error: missing ')' before identifier 'cmp'
2021-11-03T19:43:17.9298015Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(44): error C2143: syntax error: missing ';' before '{'
2021-11-03T19:43:17.9299272Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(44): error C2447: '{': missing function header (old-style formal list?)
2021-11-03T19:43:17.9300363Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(63): error C3861: 'greedyBy': identifier not found
2021-11-03T19:43:17.9301402Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(69): error C3861: 'greedyBy': identifier not found
2021-11-03T19:43:17.9302508Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(75): error C3861: 'greedyBy': identifier not found
2021-11-03T19:43:17.9303552Z C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\memory_planning\greedy_by_size.cpp(81): error C3861: 'greedyBy': identifier not found
2021-11-03T19:43:20.4814613Z [3708/4392] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP @caffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\passes\liveness.cpp.obj.rsp /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\passes\liveness.cpp.obj /Fdcaffe2\CMakeFiles\torch_cpu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\liveness.cpp
2021-11-03T19:43:20.4817078Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-11-03T19:43:20.4817973Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-11-03T19:43:20.4818488Z 
2021-11-03T19:43:20.4840489Z cl -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cpu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party\breakpad\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc -IC:\actions-runner\_work\pytorch\pytorch\third_party\miniz-2.0.8 -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\src -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\third_party\pthreadpool\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\cpuinfo\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\third_party\asmjit\src -IC:\actions-runner\_work\pytorch\pytorch\third_party\FP16\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fmt\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\third_party -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\pybind11\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2 /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DCAFFE2_BUILD_MAIN_LIB -DONNX_BUILD_MAIN_LIB -std:c++14
2021-11-03T19:43:20.4862722Z 
2021-11-03T19:43:21.0530055Z [3709/4392] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP @caffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\passes\lower_tuples.cpp.obj.rsp /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\jit\passes\lower_tuples.cpp.obj /Fdcaffe2\CMakeFiles\torch_cpu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\torch\csrc\jit\passes\lower_tuples.cpp
2021-11-03T19:43:21.0533014Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-11-03T19:43:21.0534038Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-11-03T19:43:21.0534659Z 

See GitHub Actions build Lint / clang-tidy (9/19)

Step: "Check for warnings" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:14:11.2871659Z /__w/pytorch/pytor...ngestAndSizeWithFirstGap' [clang-diagnostic-error]
2021-11-03T19:14:11.2860784Z                     ^
2021-11-03T19:14:11.2861732Z torch/csrc/jit/passes/memory_planning/greedy_by_size.h:14:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
2021-11-03T19:14:11.2863063Z std::vector<MemAllocation> greedyBySizeWithFirstGap(
2021-11-03T19:14:11.2863932Z                            ^
2021-11-03T19:14:11.2865736Z /__w/pytorch/pytorch/torch/csrc/jit/passes/memory_planning.cpp:593:21: error: no matching function for call to 'greedyByLongestAndSizeWithSmallestGap' [clang-diagnostic-error]
2021-11-03T19:14:11.2867266Z       allocations = greedyByLongestAndSizeWithSmallestGap(managed_live_ranges);
2021-11-03T19:14:11.2867903Z                     ^
2021-11-03T19:14:11.2868631Z torch/csrc/jit/passes/memory_planning/greedy_by_size.h:22:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
2021-11-03T19:14:11.2869721Z std::vector<MemAllocation> greedyByLongestAndSizeWithSmallestGap(
2021-11-03T19:14:11.2870403Z                            ^
2021-11-03T19:14:11.2871659Z /__w/pytorch/pytorch/torch/csrc/jit/passes/memory_planning.cpp:597:21: error: no matching function for call to 'greedyByLongestAndSizeWithFirstGap' [clang-diagnostic-error]
2021-11-03T19:14:11.2872993Z       allocations = greedyByLongestAndSizeWithFirstGap(managed_live_ranges);
2021-11-03T19:14:11.2873646Z                     ^
2021-11-03T19:14:11.2874338Z torch/csrc/jit/passes/memory_planning/greedy_by_size.h:18:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
2021-11-03T19:14:11.2875500Z std::vector<MemAllocation> greedyByLongestAndSizeWithFirstGap(
2021-11-03T19:14:11.2876130Z                            ^
2021-11-03T19:14:11.2876439Z Warnings detected!
2021-11-03T19:14:11.2877126Z Summary:
2021-11-03T19:14:11.2877692Z [clang-diagnostic-error] occurred 5 times
2021-11-03T19:14:11.2878250Z     /__w/pytorch/pytorch/torch/csrc/jit/passes/memory_planning.cpp:523
2021-11-03T19:14:11.2878923Z     /__w/pytorch/pytorch/torch/csrc/jit/passes/memory_planning.cpp:585
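Most of the new failures above are one signature mismatch: the greedy strategies declared in greedy_by_size.h take two parameters (a liveness map plus the sorted live-range map), while the call sites in memory_planning.cpp pass only managed_live_ranges. A toy, self-contained sketch of that mismatch using stand-in types rather than the real PyTorch declarations; whether the intended fix is to pass the liveness map (as assumed here) or to change the signatures is for the PR to decide:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Stand-ins for the real types in torch/csrc/jit/passes/memory_planning/*.
struct LivenessMap {};
using SortedLiveRangeMap = std::map<int, std::size_t>;
struct MemAllocation {};

// Mirrors the two-parameter declaration the logs show in greedy_by_size.h.
std::vector<MemAllocation> greedyBySizeWithFirstGap(
    const LivenessMap& liveness,
    SortedLiveRangeMap& managed_live_ranges) {
  (void)liveness;
  (void)managed_live_ranges;
  return {};
}

std::vector<MemAllocation> plan(
    const LivenessMap& liveness,
    SortedLiveRangeMap& managed_live_ranges) {
  // Failing shape from the call sites in memory_planning.cpp:
  //   allocations = greedyBySizeWithFirstGap(managed_live_ranges);  // error:
  //   "requires 2 arguments, but 1 was provided"
  // Supplying both arguments (an assumption about the intended fix) compiles:
  return greedyBySizeWithFirstGap(liveness, managed_live_ranges);
}

int main() {
  LivenessMap liveness;
  SortedLiveRangeMap ranges;
  plan(liveness, ranges);
  return 0;
}
```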

See GitHub Actions build linux-xenial-py3-clang5-mobile-custom-build-dynamic / build (10/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:22:44.3836779Z FAILED: caffe2/CMa...dir/__/torch/csrc/jit/passes/memory_planning.cpp.o
2021-11-03T19:22:38.7631209Z [2967/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_breadth.cpp.o
2021-11-03T19:22:38.7633249Z [2968/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/lift_closures.cpp.o
2021-11-03T19:22:39.3801386Z [2968/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/MemoryPlanningAllocator.cpp.o
2021-11-03T19:22:39.3803616Z [2969/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/lower_grad_of.cpp.o
2021-11-03T19:22:40.8936311Z [2969/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_size.cpp.o
2021-11-03T19:22:40.8938413Z [2970/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/loop_unrolling.cpp.o
2021-11-03T19:22:40.9251929Z [2970/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_util.cpp.o
2021-11-03T19:22:40.9254124Z [2971/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/lower_tuples.cpp.o
2021-11-03T19:22:44.3834630Z [2971/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/linear_scan.cpp.o
2021-11-03T19:22:44.3835874Z [2972/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o
2021-11-03T19:22:44.3836779Z FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o 
2021-11-03T19:22:44.3852937Z /opt/cache/bin/sccache /usr/lib/llvm-5.0/bin/clang++  -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -Iaten/src -I../../../aten/src -I. -I../../../ -isystem third_party/gloo -isystem ../../../cmake/../third_party/gloo -isystem ../../../third_party/XNNPACK/include -isystem ../../../cmake/../third_party/eigen -isystem ../../../cmake/../third_party/pybind11/include -I../../../third_party/pocketfft -I../../../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -I../../../caffe2/../third_party -I../../../caffe2/../third_party/breakpad/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../../../torch/csrc -I../../../third_party/miniz-2.0.8 -I../../../torch/csrc/distributed -I../../../aten/src/TH -I../../../aten/../third_party/catch/single_include -I../../../aten/src/ATen/.. -Icaffe2/aten/src/ATen -isystem ../../../caffe2 -I../../../third_party/FXdiv/include -I../../../c10/.. -I../../../third_party/pthreadpool/include -I../../../third_party/cpuinfo/include -I../../../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../../../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../../../third_party/cpuinfo/deps/clog/include -I../../../third_party/NNPACK/include -I../../../third_party/FP16/include -I../../../third_party/fmt/include -S -emit-llvm -DSTRIP_ERROR_MESSAGES -DC10_MOBILE -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -O3 -DNDEBUG -fPIC   -DCAFFE2_USE_GLOO -Wall -Wextra -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -DCAFFE2_BUILD_MAIN_LIB -pthread -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o -c ../../../torch/csrc/jit/passes/memory_planning.cpp
2021-11-03T19:22:44.3864528Z ../../../torch/csrc/jit/passes/memory_planning.cpp:523:11: error: no viable conversion from 'std::__cxx11::string' (aka 'basic_string<char>') to 'const torch::jit::Value *'
2021-11-03T19:22:44.3865406Z           std::to_string(trace_hasher(mem_event.stack_trace.value()))};
2021-11-03T19:22:44.3865991Z           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2021-11-03T19:22:44.3866890Z ../../../torch/csrc/jit/passes/memory_planning.cpp:585:21: error: no matching function for call to 'greedyBySizeWithSmallestGap'
2021-11-03T19:22:44.3867853Z       allocations = greedyBySizeWithSmallestGap(managed_live_ranges);
2021-11-03T19:22:44.3868530Z                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~
2021-11-03T19:22:44.3869373Z ../../../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:10:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
2021-11-03T19:22:44.3870284Z std::vector<MemAllocation> greedyBySizeWithSmallestGap(
2021-11-03T19:22:44.3870894Z                            ^
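The other repeated diagnostic, at memory_planning.cpp:523, is a brace-initializer that ends up handing a std::string to a member the compiler expects to be a const torch::jit::Value*. A hypothetical reproduction of that error class, with a toy struct rather than the real live-range/frame types in the PR:

```cpp
#include <string>

// Toy stand-ins; only the shape of the error is reproduced here.
struct Value {};

struct FrameNodeId {
  const Value* node;      // expects a Value pointer first...
  std::string trace_hash; // ...and the stack-trace hash second
};

int main() {
  Value v;
  std::string hash = std::to_string(1234567); // stand-in for the trace hash

  // Failing shape: the brace-init supplies only the string, so the compiler
  // tries to convert it to 'const Value*' and reports "no viable conversion":
  //   FrameNodeId bad{hash};   // error
  FrameNodeId ok{&v, hash};     // supplying both members in order compiles
  (void)ok;
  return 0;
}
```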

See GitHub Actions build linux-vulkan-bionic-py3.6-clang9 / build (11/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:10:49.2481933Z  echo "ERR...t available for the merge-base of your branch"
2021-11-03T19:10:49.2476051Z fi
2021-11-03T19:10:49.2476491Z # Covers the case where a previous tag doesn't exist for the tree
2021-11-03T19:10:49.2477201Z # this is only really applicable on trees that don't have `.circleci/docker` at its merge base, i.e. nightly
2021-11-03T19:10:49.2477940Z if ! git rev-parse "$MERGE_BASE:.circleci/docker"; then
2021-11-03T19:10:49.2478671Z   echo "Directory '.circleci/docker' not found in commit $MERGE_BASE, you should probably rebase onto a more recent commit"
2021-11-03T19:10:49.2479255Z   exit 1
2021-11-03T19:10:49.2479517Z fi
2021-11-03T19:10:49.2479967Z PREVIOUS_DOCKER_TAG=$(git rev-parse "$MERGE_BASE:.circleci/docker")
2021-11-03T19:10:49.2480656Z # If no image exists but the hash is the same as the previous hash then we should error out here
2021-11-03T19:10:49.2481252Z if [[ "${PREVIOUS_DOCKER_TAG}" = "${DOCKER_TAG}" ]]; then
2021-11-03T19:10:49.2481933Z   echo "ERROR: Something has gone wrong and the previous image isn't available for the merge-base of your branch"
2021-11-03T19:10:49.2482870Z   echo "       contact the PyTorch team to restore the original images"
2021-11-03T19:10:49.2483326Z   exit 1
2021-11-03T19:10:49.2483588Z fi
2021-11-03T19:10:49.2483951Z echo ::set-output name=rebuild::yes
2021-11-03T19:10:49.2494889Z shell: /usr/bin/bash -e {0}
2021-11-03T19:10:49.2495208Z env:
2021-11-03T19:10:49.2495762Z   BUILD_ENVIRONMENT: linux-vulkan-bionic-py3.6-clang9
2021-11-03T19:10:49.2496813Z   DOCKER_IMAGE_BASE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-bionic-py3.6-clang9
2021-11-03T19:10:49.2497869Z   SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
2021-11-03T19:10:49.2498799Z   XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / build (12/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:37:01.6445218Z ERROR...eof ((socklen_t)))\n ^\n" }
2021-11-03T19:37:01.6431733Z ERROR 2021-11-03T19:19:11Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:332:2: error: \'struct sockaddr\' has no member named \'sa_len\'\n x.sa_len = 0;\n  ^\n" }
2021-11-03T19:37:01.6432635Z 
2021-11-03T19:37:01.6434429Z ERROR 2021-11-03T19:19:13Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:366:10: error: \'RTLD_MEMBER\' undeclared (first use in this function); did you mean \'RTLD_NEXT\'?\n   (void) RTLD_MEMBER;\n          ^~~~~~~~~~~\n          RTLD_NEXT\nconftest.c:366:10: note: each undeclared identifier is reported only once for each function it appears in\n" }
2021-11-03T19:37:01.6435696Z 
2021-11-03T19:37:01.6437484Z ERROR 2021-11-03T19:19:14Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c:361:9: error: unknown type name \'not\'\n         not a universal capable compiler\n         ^~~\nconftest.c:361:15: error: expected \'=\', \',\', \';\', \'asm\' or \'__attribute__\' before \'universal\'\n         not a universal capable compiler\n               ^~~~~~~~~\nconftest.c:361:15: error: unknown type name \'universal\'\n" }
2021-11-03T19:37:01.6438706Z 
2021-11-03T19:37:01.6440325Z ERROR 2021-11-03T19:19:14Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:367:4: error: unknown type name \'not\'; did you mean \'ino_t\'?\n    not big endian\n    ^~~\n    ino_t\nconftest.c:367:12: error: expected \'=\', \',\', \';\', \'asm\' or \'__attribute__\' before \'endian\'\n    not big endian\n            ^~~~~~\n" }
2021-11-03T19:37:01.6441429Z 
2021-11-03T19:37:01.6442835Z ERROR 2021-11-03T19:19:15Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:378:4: error: \'struct stat\' has no member named \'st_mtimespec\'; did you mean \'st_mtim\'?\n st.st_mtimespec.tv_nsec = 1;\n    ^~~~~~~~~~~~\n    st_mtim\n" }
2021-11-03T19:37:01.6443916Z 
2021-11-03T19:37:01.6445218Z ERROR 2021-11-03T19:19:16Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr: "conftest.c: In function \'main\':\nconftest.c:402:24: error: expected expression before \')\' token\n if (sizeof ((socklen_t)))\n                        ^\n" }
2021-11-03T19:37:01.6446075Z 
2021-11-03T19:37:01.6522586Z ERROR 2021-11-03T19:36:56Z: sccache::server: Compilation failed: Output { status: ExitStatus(ExitStatus(256)), stdout: "", stderr:
/var/lib/jenkins/workspace/torch/csrc/jit/passes/memory_planning.cpp: In function 'std::pair<std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >, std::vector<std::pair<torch::jit::UniqueLiveRange, torch::jit::FrameNodeId> > > torch::jit::getManagedLiveRangesFromMemoryEvents(std::vector<torch::jit::MemoryEvent>, std::shared_ptr<torch::jit::Graph>)':
/var/lib/jenkins/workspace/torch/csrc/jit/passes/memory_planning.cpp:523:70: error: cannot convert 'std::__cxx11::string {aka std::__cxx11::basic_string<char>}' to 'const torch::jit::Value*' in initialization
           std::to_string(trace_hasher(mem_event.stack_trace.value()))};
                                                                      ^
/var/lib/jenkins/workspace/torch/csrc/jit/passes/memory_planning.cpp:524:51: error: no matching function for call to 'std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >::insert(<brace-enclosed initializer list>)'
       managed_live_ranges.insert({lvr, alloc.size});
                                                   ^
In file included from /usr/include/c++/7/map:61:0,
                 from /var/lib/jenkins/workspace/c10/util/logging_is_not_google_glog.h:9,
                 from /var/lib/jenkins/workspace/c10/util/Logging.h:28,
                 from /var/lib/jenkins/workspace/c10/core/TensorImpl.h:19,
                 from /var/lib/jenkins/workspace/build/aten/src/ATen/core/TensorBody.h:14,
                 from /var/lib/jenkins/workspace/aten/src/ATen/core/ivalue.h:3,
                 from /var/lib/jenkins/workspace/aten/src/ATen/record_function.h:3,
                 from /var/lib/jenkins/workspace/torch/csrc/jit/passes/memory_planning/memory_observer.h:3,
                 from /var/lib/jenkins/workspace/torch/csrc/jit/passes/memory_planning.h:3,
                 from /var/lib/jenkins/workspace/torch/csrc/jit/passes/memory_planning.cpp:1:
/usr/include/c++/7/bits/stl_map.h:795:7: note: candidate: std::pair<typename std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp> >, _Compare, typename __gnu_cxx::__alloc_traits<_Allocator>::rebind<std::pair<const _Key, _Tp> >::other>::iterator, bool> std::map<_Key, _Tp, _Compare, _Alloc>::insert(const value_type&) [with _Key = torch::jit::UniqueLiveRange; _Tp = long unsigned int; _Compare = torch::jit::liveRangeStartCmp; _Alloc = std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> >; typename std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp> >, _Compare, typename __gnu_cxx::__alloc_traits<_Allocator>::rebind<std::pair<const _Key, _Tp> >::other>::iterator = std::_Rb_tree_iterator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> >; std::map<_Key, _Tp, _Compare, _Alloc>::value_type = std::pair<const torch::jit::UniqueLiveRange, long unsigned int>]
       insert(const value_type& __x)
       ^~~~~~
/usr/include/c++/7/bits/stl_map.h:795:7: note:   no known conversion for argument 1 from '<brace-enclosed initializer list>' to 'const value_type& {aka const std::pair<const torch::jit::UniqueLiveRange, long unsigned int>&}'
/usr/include/c++/7/bits/stl_map.h:802:7: note: candidate: std::pair<typename std::_Rb_tree<_Key, std::pair<const _Key, _Tp>, std::_Select1st<std::pair<const _Key, _Tp> >, _Compare, typename __gnu_cxx::__alloc_traits<_Allocator>::rebind<std::pair<const _Key, _Tp> >::other>::iter
2021-11-03T19:37:01.6577263Z 
2021-11-03T19:37:01.6577852Z =========== If your build fails, please take a look at the log above for possible reasons ===========
2021-11-03T19:37:01.6578384Z Compile requests                   7843
2021-11-03T19:37:01.6578803Z Compile requests executed          5956
2021-11-03T19:37:01.6579180Z Cache hits                         4982
2021-11-03T19:37:01.6579526Z Cache hits (C/C++)                 4982
2021-11-03T19:37:01.6579873Z Cache misses                        899
2021-11-03T19:37:01.6580214Z Cache misses (C/C++)                899

See GitHub Actions build linux-xenial-py3-clang5-mobile-custom-build-static / build (13/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:22:48.3222373Z FAILED: caffe2/CMa...dir/__/torch/csrc/jit/passes/memory_planning.cpp.o
2021-11-03T19:22:42.8717073Z [2969/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_breadth.cpp.o
2021-11-03T19:22:42.8719028Z [2970/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/lower_tuples.cpp.o
2021-11-03T19:22:43.1947862Z [2970/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_size.cpp.o
2021-11-03T19:22:43.1954217Z [2971/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/liveness.cpp.o
2021-11-03T19:22:43.2278769Z [2971/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_util.cpp.o
2021-11-03T19:22:43.2280589Z [2972/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/lower_grad_of.cpp.o
2021-11-03T19:22:48.1928982Z [2972/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/linear_scan.cpp.o
2021-11-03T19:22:48.1957450Z [2973/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_size.cpp.o
2021-11-03T19:22:48.3218869Z [2973/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/memory_observer.cpp.o
2021-11-03T19:22:48.3220843Z [2974/3116] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o
2021-11-03T19:22:48.3222373Z FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o 
2021-11-03T19:22:48.3243813Z /opt/cache/bin/sccache /opt/cache/bin/c++  -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -Iaten/src -I../../aten/src -I. -I../../ -isystem third_party/gloo -isystem ../../cmake/../third_party/gloo -isystem ../../third_party/XNNPACK/include -isystem ../../cmake/../third_party/eigen -isystem ../../cmake/../third_party/pybind11/include -I../../third_party/pocketfft -I../../caffe2/aten/src/TH -Icaffe2/aten/src/TH -Icaffe2/aten/src -I../../caffe2/../third_party -I../../caffe2/../third_party/breakpad/src -Icaffe2/../aten/src -Icaffe2/../aten/src/ATen -I../../torch/csrc -I../../third_party/miniz-2.0.8 -I../../torch/csrc/distributed -I../../aten/src/TH -I../../aten/../third_party/catch/single_include -I../../aten/src/ATen/.. -Icaffe2/aten/src/ATen -isystem ../../caffe2 -I../../third_party/FXdiv/include -I../../c10/.. -I../../third_party/pthreadpool/include -I../../third_party/cpuinfo/include -I../../aten/src/ATen/native/quantized/cpu/qnnpack/include -I../../aten/src/ATen/native/quantized/cpu/qnnpack/src -I../../third_party/cpuinfo/deps/clog/include -I../../third_party/NNPACK/include -I../../third_party/FP16/include -I../../third_party/fmt/include -DSTRIP_ERROR_MESSAGES -DC10_MOBILE -DNO_EXPORT -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -O3 -DNDEBUG -fPIC   -DCAFFE2_USE_GLOO -Wall -Wextra -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-type-limits -Wno-array-bounds -Wno-sign-compare -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-missing-braces -Wno-maybe-uninitialized -fvisibility=hidden -O2 -DCAFFE2_BUILD_MAIN_LIB -pthread -std=gnu++14 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning.cpp.o -c ../../torch/csrc/jit/passes/memory_planning.cpp
2021-11-03T19:22:48.3265577Z ../../torch/csrc/jit/passes/memory_planning.cpp: In function 'std::pair<std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >, std::vector<std::pair<torch::jit::UniqueLiveRange, torch::jit::FrameNodeId> > > torch::jit::getManagedLiveRangesFromMemoryEvents(std::vector<torch::jit::MemoryEvent>, std::shared_ptr<torch::jit::Graph>)':
2021-11-03T19:22:48.3270581Z ../../torch/csrc/jit/passes/memory_planning.cpp:523:70: error: cannot convert 'std::__cxx11::string {aka std::__cxx11::basic_string<char>}' to 'const torch::jit::Value*' in initialization
2021-11-03T19:22:48.3272494Z            std::to_string(trace_hasher(mem_event.stack_trace.value()))};
2021-11-03T19:22:48.3273640Z                                                                       ^
2021-11-03T19:22:48.3276800Z ../../torch/csrc/jit/passes/memory_planning.cpp:524:51: error: no matching function for call to 'std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >::insert(<brace-enclosed initializer list>)'
2021-11-03T19:22:48.3279281Z        managed_live_ranges.insert({lvr, alloc.size});
2021-11-03T19:22:48.3280327Z                                                    ^
2021-11-03T19:22:48.3281536Z In file included from /usr/include/c++/5/map:61:0,
2021-11-03T19:22:48.3282696Z                  from ../../c10/util/logging_is_not_google_glog.h:9,

See GitHub Actions build linux-xenial-py3-clang5-mobile-build / build (14/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:11:45.8141396Z /var/lib/jenkins/w...s/pytorch/common.sh: line 57: $3: unbound variable
2021-11-03T19:11:45.8130759Z ++ export IN_CI=1
2021-11-03T19:11:45.8131291Z ++ IN_CI=1
2021-11-03T19:11:45.8132044Z ++ declare -f -t trap_add
2021-11-03T19:11:45.8132574Z ++ trap_add cleanup EXIT
2021-11-03T19:11:45.8132931Z ++ trap_add_cmd=cleanup
2021-11-03T19:11:45.8133227Z ++ shift
2021-11-03T19:11:45.8133636Z ++ for trap_add_name in '"$@"'
2021-11-03T19:11:45.8137427Z ++++ trap -p EXIT
2021-11-03T19:11:45.8140297Z +++ eval 'extract_trap_cmd '
2021-11-03T19:11:45.8140727Z ++++ extract_trap_cmd
2021-11-03T19:11:45.8141396Z /var/lib/jenkins/workspace/.jenkins/pytorch/common.sh: line 57: $3: unbound variable
2021-11-03T19:11:45.8142071Z ++ trap -- '' EXIT
2021-11-03T19:11:45.8143905Z ++ [[ linux-xenial-py3-clang5-mobile-build != *win-* ]]
2021-11-03T19:11:45.8144511Z ++ which sccache
2021-11-03T19:11:45.8151252Z ++ sccache --stop-server
2021-11-03T19:11:45.8172575Z ++ rm -f /var/lib/jenkins/sccache_error.log
2021-11-03T19:11:45.8178473Z ++ [[ -n 1 ]]
2021-11-03T19:11:45.8179578Z ++ echo 'Skipping sccache server initialization, setting environment variables'
2021-11-03T19:11:45.8180353Z Skipping sccache server initialization, setting environment variables
2021-11-03T19:11:45.8180941Z ++ export SCCACHE_IDLE_TIMEOUT=1200
2021-11-03T19:11:45.8181338Z ++ SCCACHE_IDLE_TIMEOUT=1200

See GitHub Actions build Lint / quick-checks (15/19)

Step: "Ensure correct trailing newlines" (full log | diagnosis details | 🔁 rerun)

2021-11-03T19:06:53.1250517Z python: can't open..._launches.py': [Errno 2] No such file or directory
2021-11-03T19:06:53.0870244Z ##[group]Run set -eux
2021-11-03T19:06:53.0870808Z set -eux
2021-11-03T19:06:53.0871596Z python torch/testing/_check_kernel_launches.py |& tee "${GITHUB_WORKSPACE}"/cuda_kernel_launch_checks.txt
2021-11-03T19:06:53.0913527Z shell: /bin/bash -e {0}
2021-11-03T19:06:53.0913944Z env:
2021-11-03T19:06:53.0914583Z   pythonLocation: /opt/hostedtoolcache/Python/3.10.0/x64
2021-11-03T19:06:53.0916419Z   LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.0/x64/lib
2021-11-03T19:06:53.0917027Z ##[endgroup]
2021-11-03T19:06:53.1003892Z + python torch/testing/_check_kernel_launches.py
2021-11-03T19:06:53.1022832Z + tee /home/runner/work/pytorch/pytorch/cuda_kernel_launch_checks.txt
2021-11-03T19:06:53.1250517Z python: can't open file '/home/runner/work/pytorch/pytorch/torch/testing/_check_kernel_launches.py': [Errno 2] No such file or directory
2021-11-03T19:06:53.1347957Z ##[group]Run (! git --no-pager grep -I -no $'#include <cub/' --  ./aten  ':(exclude)aten/src/ATen/cuda/cub*.cuh' || (echo "The above files have direct cub include; please include ATen/cuda/cub.cuh instead and wrap your cub calls in at::native namespace if necessary"; false))
2021-11-03T19:06:53.1350137Z (! git --no-pager grep -I -no $'#include <cub/' --  ./aten  ':(exclude)aten/src/ATen/cuda/cub*.cuh' || (echo "The above files have direct cub include; please include ATen/cuda/cub.cuh instead and wrap your cub calls in at::native namespace if necessary"; false))
2021-11-03T19:06:53.1391515Z shell: /bin/bash -e {0}
2021-11-03T19:06:53.1391930Z env:
2021-11-03T19:06:53.1392515Z   pythonLocation: /opt/hostedtoolcache/Python/3.10.0/x64
2021-11-03T19:06:53.1393699Z   LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.0/x64/lib
2021-11-03T19:06:53.1394283Z ##[endgroup]
2021-11-03T19:06:53.1878536Z ##[group]Run (! git --no-pager grep -I -no $'cudaStreamSynchronize' --  ./aten ./c10 ':(exclude)aten/src/ATen/test' ':(exclude)c10/cuda/CUDAFunctions.h' || (echo "The above files call raw cuda APIs directly; please use at::cuda wrappers instead"; false))
2021-11-03T19:06:53.1880842Z (! git --no-pager grep -I -no $'cudaStreamSynchronize' --  ./aten ./c10 ':(exclude)aten/src/ATen/test' ':(exclude)c10/cuda/CUDAFunctions.h' || (echo "The above files call raw cuda APIs directly; please use at::cuda wrappers instead"; false))
2021-11-03T19:06:53.1920946Z shell: /bin/bash -e {0}

See CircleCI build pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit (16/19)

Step: "pytorch android gradle custom build single architecture (for PR)" (full log | diagnosis details | 🔁 rerun)

Nov 03 19:35:41 ../../torch/csrc/jit/passes/mem...n for call to 'greedyByLongestAndSizeWithFirstGap'
Nov 03 19:35:41                     ^~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:35:41 ../../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:14:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:35:41 std::vector<MemAllocation> greedyBySizeWithFirstGap(
Nov 03 19:35:41                            ^
Nov 03 19:35:41 ../../torch/csrc/jit/passes/memory_planning.cpp:593:21: error: no matching function for call to 'greedyByLongestAndSizeWithSmallestGap'
Nov 03 19:35:41       allocations = greedyByLongestAndSizeWithSmallestGap(managed_live_ranges);
Nov 03 19:35:41                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:35:41 ../../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:22:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:35:41 std::vector<MemAllocation> greedyByLongestAndSizeWithSmallestGap(
Nov 03 19:35:41                            ^
Nov 03 19:35:41 ../../torch/csrc/jit/passes/memory_planning.cpp:597:21: error: no matching function for call to 'greedyByLongestAndSizeWithFirstGap'
Nov 03 19:35:41       allocations = greedyByLongestAndSizeWithFirstGap(managed_live_ranges);
Nov 03 19:35:41                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:35:41 ../../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:18:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:35:41 std::vector<MemAllocation> greedyByLongestAndSizeWithFirstGap(
Nov 03 19:35:41                            ^
Nov 03 19:35:41 5 errors generated.
Nov 03 19:35:43 [2051/2194] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_breadth.cpp.o
Nov 03 19:35:43 [2052/2194] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/normalize_ops.cpp.o
Nov 03 19:35:46 [2053/2194] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_size.cpp.o
Nov 03 19:35:46 ninja: build stopped: subcommand failed.

See CircleCI build binary_linux_libtorch_3_7m_cpu_gcc5_4_cxx11-abi_shared-with-deps_build (17/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

../torch/csrc/jit/passes/memory_planning.cpp:59...::jit::UniqueLiveRange, long unsigned int> > >}’
../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:14:28: note: in passing argument 1 of ‘std::vector<torch::jit::MemAllocation> torch::jit::greedyBySizeWithFirstGap(const LivenessMap&, torch::jit::SortedLiveRangeMap<long unsigned int>&)’
 std::vector<MemAllocation> greedyBySizeWithFirstGap(
                            ^
../torch/csrc/jit/passes/memory_planning.cpp:593:78: error: invalid initialization of reference of type ‘const LivenessMap& {aka const std::unordered_map<const torch::jit::Value*, std::unordered_set<const torch::jit::Value*>, std::hash<const torch::jit::Value*>, std::equal_to<const torch::jit::Value*>, std::allocator<std::pair<const torch::jit::Value* const, std::unordered_set<const torch::jit::Value*> > > >&}’ from expression of type ‘torch::jit::SortedLiveRangeMap<long unsigned int> {aka std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >}’
       allocations = greedyByLongestAndSizeWithSmallestGap(managed_live_ranges);
                                                                              ^
In file included from ../torch/csrc/jit/passes/memory_planning.cpp:3:0:
../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:22:28: note: in passing argument 1 of ‘std::vector<torch::jit::MemAllocation> torch::jit::greedyByLongestAndSizeWithSmallestGap(const LivenessMap&, torch::jit::SortedLiveRangeMap<long unsigned int>&)’
 std::vector<MemAllocation> greedyByLongestAndSizeWithSmallestGap(
                            ^
../torch/csrc/jit/passes/memory_planning.cpp:597:75: error: invalid initialization of reference of type ‘const LivenessMap& {aka const std::unordered_map<const torch::jit::Value*, std::unordered_set<const torch::jit::Value*>, std::hash<const torch::jit::Value*>, std::equal_to<const torch::jit::Value*>, std::allocator<std::pair<const torch::jit::Value* const, std::unordered_set<const torch::jit::Value*> > > >&}’ from expression of type ‘torch::jit::SortedLiveRangeMap<long unsigned int> {aka std::map<torch::jit::UniqueLiveRange, long unsigned int, torch::jit::liveRangeStartCmp, std::allocator<std::pair<const torch::jit::UniqueLiveRange, long unsigned int> > >}’
       allocations = greedyByLongestAndSizeWithFirstGap(managed_live_ranges);
                                                                           ^
In file included from ../torch/csrc/jit/passes/memory_planning.cpp:3:0:
../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:18:28: note: in passing argument 1 of ‘std::vector<torch::jit::MemAllocation> torch::jit::greedyByLongestAndSizeWithFirstGap(const LivenessMap&, torch::jit::SortedLiveRangeMap<long unsigned int>&)’
 std::vector<MemAllocation> greedyByLongestAndSizeWithFirstGap(
                            ^
[4056/4724] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/peephole.cpp.o
[4057/4724] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/peephole_list_idioms.cpp.o
[4058/4724] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/metal_rewrite.cpp.o
[4059/4724] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/peephole_alias_sensitive.cpp.o

See CircleCI build pytorch_macos_10_13_py3_build (18/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

Nov 03 19:53:35 ../torch/csrc/jit/passes/memory...n for call to 'greedyByLongestAndSizeWithFirstGap'
Nov 03 19:53:35                     ^~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:53:35 ../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:14:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:53:35 std::vector<MemAllocation> greedyBySizeWithFirstGap(
Nov 03 19:53:35                            ^
Nov 03 19:53:35 ../torch/csrc/jit/passes/memory_planning.cpp:593:21: error: no matching function for call to 'greedyByLongestAndSizeWithSmallestGap'
Nov 03 19:53:35       allocations = greedyByLongestAndSizeWithSmallestGap(managed_live_ranges);
Nov 03 19:53:35                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:53:35 ../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:22:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:53:35 std::vector<MemAllocation> greedyByLongestAndSizeWithSmallestGap(
Nov 03 19:53:35                            ^
Nov 03 19:53:35 ../torch/csrc/jit/passes/memory_planning.cpp:597:21: error: no matching function for call to 'greedyByLongestAndSizeWithFirstGap'
Nov 03 19:53:35       allocations = greedyByLongestAndSizeWithFirstGap(managed_live_ranges);
Nov 03 19:53:35                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:53:35 ../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:18:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:53:35 std::vector<MemAllocation> greedyByLongestAndSizeWithFirstGap(
Nov 03 19:53:35                            ^
Nov 03 19:53:35 5 errors generated.
Nov 03 19:53:40 [4044/4739] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_size.cpp.o
Nov 03 19:53:42 [4045/4739] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_breadth.cpp.o
Nov 03 19:53:44 [4046/4739] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/memory_observer.cpp.o
Nov 03 19:53:45 [4047/4739] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_util.cpp.o

See CircleCI build pytorch_macos_10_15_py3_build (19/19)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

Nov 03 19:51:25 ../torch/csrc/jit/passes/memory...n for call to 'greedyByLongestAndSizeWithFirstGap'
Nov 03 19:51:25                     ^~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:51:25 ../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:14:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:51:25 std::vector<MemAllocation> greedyBySizeWithFirstGap(
Nov 03 19:51:25                            ^
Nov 03 19:51:25 ../torch/csrc/jit/passes/memory_planning.cpp:593:21: error: no matching function for call to 'greedyByLongestAndSizeWithSmallestGap'
Nov 03 19:51:25       allocations = greedyByLongestAndSizeWithSmallestGap(managed_live_ranges);
Nov 03 19:51:25                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:51:25 ../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:22:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:51:25 std::vector<MemAllocation> greedyByLongestAndSizeWithSmallestGap(
Nov 03 19:51:25                            ^
Nov 03 19:51:25 ../torch/csrc/jit/passes/memory_planning.cpp:597:21: error: no matching function for call to 'greedyByLongestAndSizeWithFirstGap'
Nov 03 19:51:25       allocations = greedyByLongestAndSizeWithFirstGap(managed_live_ranges);
Nov 03 19:51:25                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Nov 03 19:51:25 ../torch/csrc/jit/passes/memory_planning/greedy_by_size.h:18:28: note: candidate function not viable: requires 2 arguments, but 1 was provided
Nov 03 19:51:25 std::vector<MemAllocation> greedyByLongestAndSizeWithFirstGap(
Nov 03 19:51:25                            ^
Nov 03 19:51:25 5 errors generated.
Nov 03 19:51:26 [2665/3073] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/lower_tuples.cpp.o
Nov 03 19:51:27 [2666/3073] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/MemoryPlanningAllocator.cpp.o
Nov 03 19:51:31 [2667/3073] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_breadth.cpp.o
Nov 03 19:51:36 [2668/3073] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/jit/passes/memory_planning/greedy_by_size.cpp.o

3 failures not recognized by patterns:

Job Step Action
GitHub Actions Lint / clang-format Run clang-format 🔁 rerun
CircleCI pytorch_xla_linux_bionic_py3_6_clang9_build Build 🔁 rerun
CircleCI pytorch_linux_xenial_py3_6_gcc5_4_build Build 🔁 rerun

ci.pytorch.org: 1 failed



makslevental pushed a commit that referenced this pull request Aug 24, 2021
ghstack-source-id: 67d22a0884a989ae76169025879d8c8c50099ad7
Pull Request resolved: #63873
makslevental pushed a commit that referenced this pull request Aug 24, 2021
ghstack-source-id: 14e2f8ef811c0f24a617cf609218a10a3306aec4
Pull Request resolved: #63873
makslevental pushed a commit that referenced this pull request Sep 1, 2021
ghstack-source-id: 67b5e12ba523ca1e6199d0c77fbe0a9ad6811c02
Pull Request resolved: #63873

reorder

refactor profiling allocator

put profiling back

reconcile profiling

ghstack-source-id: 67b5e12ba523ca1e6199d0c77fbe0a9ad6811c02
Pull Request resolved: #64351
This PR extends memory planning strategies to support memory allocations and frees collected using the `MemoryTracingAllocator` (which follows the pattern from kineto). These plans can then be deployed using `MemoryPlanningAllocator` in combination with `prim::PreAllocateTensor` ops (inserted into the graph) to appropriately give out slices of the initially allocated region.


Differential Revision: [D30769097](https://our.internmc.facebook.com/intern/diff/D30769097)

[ghstack-poisoned]
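As background for the strategy names that appear in the CI logs above (greedyBySizeWithFirstGap, greedyByLongestAndSizeWithSmallestGap, and friends), here is a minimal, hypothetical C++ sketch of one such policy: greedy-by-size, first-fit offset assignment over recorded live ranges. The struct and function names below are illustrative assumptions for this comment only, not the PR's actual types or API.

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

// One traced allocation: alive from op index `begin` through `end`, `size` bytes.
struct LiveRange {
  size_t begin;
  size_t end;
  size_t size;
};

// A planned placement: the live range plus its byte offset in the shared slab.
struct Allocation {
  LiveRange range;
  size_t offset;
};

// Greedy-by-size, first-fit: place the largest requests first, giving each one the
// lowest offset that does not overlap (in both time and memory) anything already placed.
std::vector<Allocation> greedyBySizeFirstFit(std::vector<LiveRange> ranges) {
  std::sort(ranges.begin(), ranges.end(),
            [](const LiveRange& a, const LiveRange& b) { return a.size > b.size; });
  std::vector<Allocation> placed;
  for (const auto& r : ranges) {
    size_t offset = 0;
    bool conflict = true;
    while (conflict) {
      conflict = false;
      for (const auto& p : placed) {
        const bool live_overlap = r.begin <= p.range.end && p.range.begin <= r.end;
        const bool mem_overlap =
            offset < p.offset + p.range.size && p.offset < offset + r.size;
        if (live_overlap && mem_overlap) {
          offset = p.offset + p.range.size;  // bump past the conflicting block and rescan
          conflict = true;
          break;
        }
      }
    }
    placed.push_back({r, offset});
  }
  return placed;
}

int main() {
  // The first two ranges overlap in time, so they get disjoint offsets;
  // the third does not, so it can reuse offset 0.
  std::vector<LiveRange> ranges = {{0, 3, 1024}, {2, 5, 512}, {6, 8, 256}};
  for (const auto& a : greedyBySizeFirstFit(ranges)) {
    std::printf("size=%zu offset=%zu\n", a.range.size, a.offset);
  }
  return 0;
}

The build errors quoted above reference several variants (first gap vs. smallest gap, size vs. longest-and-size orderings); this sketch fixes a single policy only to keep the example short.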
makslevental pushed a commit that referenced this pull request Sep 13, 2021
ghstack-source-id: 5256d94abbf80ddda00980dade0623674bb05299
Pull Request resolved: #63873

reorder

refactor profiling allocator

put profiling back

reconcile profiling

ghstack-source-id: 5256d94abbf80ddda00980dade0623674bb05299
Pull Request resolved: #64351

use make_pair instead of make_tuple

incorporate uniqueliverange

fix tests

whoops

rename stuff

size_t for memory tracing
This PR extends memory planning strategies to support memory allocations and frees collected using the `MemoryTracingAllocator` (which follows the pattern from kineto). These plans can then be deployed using `MemoryPlanningAllocator` in combination with `prim::PreAllocateTensor` ops (inserted into the graph) to appropriately give out slices of the initially allocated region.


Differential Revision: [D30769097](https://our.internmc.facebook.com/intern/diff/D30769097)

[ghstack-poisoned]
This was referenced Sep 20, 2021
This PR extends memory planning strategies to support memory allocations and frees collected using the `MemoryTracingAllocator` (which follows the pattern from kineto). These plans can then be deployed using `MemoryPlanningAllocator` in combination with `prim::PreAllocateTensor` ops (inserted into the graph) to appropriately give out slices of the initially allocated region.


Differential Revision: [D30769097](https://our.internmc.facebook.com/intern/diff/D30769097)

[ghstack-poisoned]
makslevental added a commit that referenced this pull request Sep 21, 2021
ghstack-source-id: 132dc0c37f92f5ad0c855da06adc6a41f82f6574
Pull Request resolved: #63873

reorder

refactor profiling allocator

put profiling back

reconcile profiling

ghstack-source-id: 132dc0c37f92f5ad0c855da06adc6a41f82f6574
Pull Request resolved: #64351

use make_pair instead of make_tuple

incorporate uniqueliverange

fix tests

whoops

rename stuff

size_t for memory tracing
This PR extends memory planning strategies to support memory allocations and frees collected using the `MemoryTracingAllocator` (which follows the pattern from kineto). These plans can then be deployed using `MemoryPlanningAllocator` in combination with `prim::PreAllocateTensor` ops (inserted into the graph) to appropriately give out slices of the initially allocated region.


Differential Revision: [D30769097](https://our.internmc.facebook.com/intern/diff/D30769097)

[ghstack-poisoned]
makslevental added a commit that referenced this pull request Sep 21, 2021
ghstack-source-id: b038aa015025f59110466e1326512379c97b9427
Pull Request resolved: #63873

reorder

refactor profiling allocator

put profiling back

reconcile profiling

ghstack-source-id: b038aa015025f59110466e1326512379c97b9427
Pull Request resolved: #64351

use make_pair instead of make_tuple

incorporate uniqueliverange

fix tests

whoops

rename stuff

size_t for memory tracing
@makslevental makslevental changed the title from [JIT] memorization memory planning to [JIT][WIP] memorization memory planning Sep 22, 2021
This PR extends memory planning strategies to support memory allocations and frees collected using the `MemoryTracingAllocator` (which follows the pattern from kineto). These plans can then be deployed using `MemoryPlanningAllocator` in combination with `prim::PreAllocateTensor` ops (inserted into the graph) to appropriately give out slices of the initially allocated region.


Differential Revision: [D30769097](https://our.internmc.facebook.com/intern/diff/D30769097)

[ghstack-poisoned]
makslevental added a commit that referenced this pull request Nov 3, 2021
ghstack-source-id: db94a55a0dd8eabf662ea9ae260ffb62f1c8bdb6
Pull Request resolved: #63873

reorder

refactor profiling allocator

put profiling back

reconcile profiling

ghstack-source-id: db94a55a0dd8eabf662ea9ae260ffb62f1c8bdb6
Pull Request resolved: #64351

use make_pair instead of make_tuple

incorporate uniqueliverange

fix tests

whoops

rename stuff

size_t for memory tracing
@pytorch-probot

pytorch-probot bot commented Nov 3, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/pytorch/pytorch/blob/20114a8e1bc631c04ce3b9d122bd7145c98b1fc1/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

Workflows Labels (bold enabled) Status
Triggered Workflows
linux-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla ✅ triggered
linux-vulkan-bionic-py3.6-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-dynamic ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile ✅ triggered
linux-xenial-py3.6-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers ✅ triggered
linux-xenial-py3.6-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx ✅ triggered
linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
linux-xenial-py3.6-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/win ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
docker-builds ciflow/all 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
ios-12-5-1-x86-64-full-jit ciflow/all, ciflow/ios, ciflow/macos 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow 🚫 skipped
linux-xenial-py3-clang5-mobile-code-analysis ciflow/all, ciflow/linux, ciflow/mobile 🚫 skipped
parallelnative-linux-xenial-py3.6-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux 🚫 skipped
periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.1-py3.6-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.1-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

@makslevental
Contributor Author

@makslevental has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

Hi @makslevental!

Thank you for your pull request.

We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot
Contributor

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@github-actions

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label May 21, 2022
@github-actions github-actions bot closed this Jun 20, 2022
@facebook-github-bot facebook-github-bot deleted the gh/makslevental/27/head branch July 21, 2022 14:22
Labels
cla signed · oncall: jit (Add this issue/PR to JIT oncall triage queue) · open source · Stale

4 participants