Are tensors on same device? #62653

Closed · wants to merge 1 commit

Conversation

@r-barnes (Contributor) commented Aug 3, 2021

Summary:
This consolidates checks determining whether tensors live on the same device into a single line using template parameter packs to unroll the check code.

The advantage of using the new checking syntax is that it makes it easy to use static analysis to determine both if the check is present and whether or not it is comprehensive. D30072495 includes a linter which performs this action.

Note that this is especially useful for PyTorch extensions which don't receive this check automatically from codegen.
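
For illustration, here is a minimal standalone sketch of the parameter-pack pattern described above. The `Device`, `Tensor`, and `all_same_device` names are stand-ins invented for this example, not the actual ATen types or the helper this change adds:

```cpp
// Minimal sketch only: illustrative stand-ins, not the ATen API added by this PR.
#include <iostream>

struct Device {
  int index;  // toy model: -1 for CPU, >= 0 for a CUDA device index
  bool operator==(const Device& other) const { return index == other.index; }
};

struct Tensor {
  Device dev;
  Device device() const { return dev; }
};

// Base case: a single tensor is trivially "on the same device".
inline bool all_same_device(const Tensor&) { return true; }

// Recursive case: the parameter pack unrolls into pairwise comparisons,
// so any number of tensors can be checked from a single call site.
template <typename... Rest>
bool all_same_device(const Tensor& a, const Tensor& b, const Rest&... rest) {
  return a.device() == b.device() && all_same_device(b, rest...);
}

int main() {
  Tensor x{{-1}}, y{{-1}}, z{{0}};
  std::cout << all_same_device(x, y) << "\n";     // 1: both on the CPU device
  std::cout << all_same_device(x, y, z) << "\n";  // 0: z sits on a different device
}
```

With C++17, a fold expression over the pack could express the same unrolling without recursion.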

Test Plan:

buck test //caffe2/torch/fb/sparsenn:gpu_test
buck test //caffe2/torch/fb/sparsenn:test

Differential Revision: D29924464

@facebook-github-bot (Contributor) commented Aug 3, 2021

💊 CI failures summary and remediations

As of commit 858c752 (more details on the Dr. CI page):


  • 2/2 failures introduced in this PR

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-cuda11.3-py3.6-gcc7 / test (default, 2, 2, linux.8xlarge.nvidia.gpu) (1/2)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2021-10-11T20:14:08.3257550Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-10-11T20:14:07.9949675Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-10-11T20:14:07.9987909Z ok (0.168s)
2021-10-11T20:14:08.1238381Z   test_cond_cuda_float32 (__main__.TestLinalgCUDA) ... 
2021-10-11T20:14:08.1239872Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-10-11T20:14:08.1240725Z 
2021-10-11T20:14:08.1241779Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-10-11T20:14:08.1279658Z ok (0.129s)
2021-10-11T20:14:08.3254128Z   test_cond_cuda_float64 (__main__.TestLinalgCUDA) ... 
2021-10-11T20:14:08.3255628Z Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
2021-10-11T20:14:08.3256523Z 
2021-10-11T20:14:08.3257550Z Intel MKL ERROR: Parameter 5 was incorrect on entry to DLASCL.
2021-10-11T20:14:08.3293729Z ok (0.201s)
2021-10-11T20:14:08.3975309Z   test_cond_errors_and_warnings_cuda_complex128 (__main__.TestLinalgCUDA) ... ok (0.068s)
2021-10-11T20:14:08.4635488Z   test_cond_errors_and_warnings_cuda_complex64 (__main__.TestLinalgCUDA) ... ok (0.066s)
2021-10-11T20:14:08.5292549Z   test_cond_errors_and_warnings_cuda_float32 (__main__.TestLinalgCUDA) ... ok (0.066s)
2021-10-11T20:14:08.5960765Z   test_cond_errors_and_warnings_cuda_float64 (__main__.TestLinalgCUDA) ... ok (0.067s)
2021-10-11T20:14:08.5973473Z   test_cross_cuda_float32 (__main__.TestLinalgCUDA) ... skip (0.001s)
2021-10-11T20:14:08.6299529Z   test_cross_errors_cuda (__main__.TestLinalgCUDA) ... ok (0.032s)
2021-10-11T20:14:08.6312625Z   test_cross_with_and_without_dim_cuda_float32 (__main__.TestLinalgCUDA) ... skip (0.001s)
2021-10-11T20:14:08.6841214Z   test_det_cuda_complex128 (__main__.TestLinalgCUDA) ... ok (0.053s)
2021-10-11T20:14:08.7154081Z   test_det_cuda_float64 (__main__.TestLinalgCUDA) ... ok (0.031s)

See GitHub Actions build win-vs2019-cpu-py3 / build (2/2)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2021-10-11T19:16:57.8657927Z C:\actions-runner\...ror C2010: '.': unexpected in macro parameter list
2021-10-11T19:16:57.5340590Z [5045/5696] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP @caffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\quantization\quantization.cpp.obj.rsp /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\quantization\quantization.cpp.obj /Fdcaffe2\CMakeFiles\torch_cpu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed\c10d\quantization\quantization.cpp
2021-10-11T19:16:57.6688134Z FAILED: caffe2/CMakeFiles/torch_cpu.dir/__/torch/csrc/distributed/c10d/quantization/quantization.cpp.obj 
2021-10-11T19:16:57.6974097Z C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP @caffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\quantization\quantization.cpp.obj.rsp /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\quantization\quantization.cpp.obj /Fdcaffe2\CMakeFiles\torch_cpu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed\c10d\quantization\quantization.cpp
2021-10-11T19:16:57.7518317Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-10-11T19:16:57.8460377Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-10-11T19:16:57.8599799Z 
2021-10-11T19:16:57.8627238Z cl -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cpu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\contrib\aten -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party\breakpad\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc -IC:\actions-runner\_work\pytorch\pytorch\third_party\miniz-2.0.8 -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\src -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\caffe2\core\nomnigraph\include -IC:\actions-runner\_work\pytorch\pytorch\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\pthreadpool\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\cpuinfo\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\third_party\asmjit\src -IC:\actions-runner\_work\pytorch\pytorch\third_party\FP16\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fmt\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\third_party -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\pybind11\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2 /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DCAFFE2_BUILD_MAIN_LIB -DONNX_BUILD_MAIN_LIB -std:c++14
2021-10-11T19:16:57.8652712Z 
2021-10-11T19:16:57.8654090Z C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen/Exceptions.h(63): error C2010: '.': unexpected in macro parameter list
2021-10-11T19:16:57.8656044Z C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen/Exceptions.h(68): error C2010: '.': unexpected in macro parameter list
2021-10-11T19:16:57.8657927Z C:\actions-runner\_work\pytorch\pytorch\aten\src\ATen/Exceptions.h(73): error C2010: '.': unexpected in macro parameter list
2021-10-11T19:16:58.1356035Z [5046/5696] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP @caffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\frontend.cpp.obj.rsp /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\frontend.cpp.obj /Fdcaffe2\CMakeFiles\torch_cpu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed\c10d\frontend.cpp
2021-10-11T19:16:58.2334510Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-10-11T19:16:58.2369588Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-10-11T19:16:58.2370551Z 
2021-10-11T19:16:58.2396016Z cl -DADD_BREAKPAD_SIGNAL_HANDLER -DCPUINFO_SUPPORTED_PLATFORM=1 -DFMT_HEADER_ONLY=1 -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DTH_BLAS_MKL -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_OPENMP_NOFORCE_MANIFEST -Dtorch_cpu_EXPORTS -IC:\actions-runner\_work\pytorch\pytorch\build\aten\src -IC:\actions-runner\_work\pytorch\pytorch\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build -IC:\actions-runner\_work\pytorch\pytorch -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\benchmark\include -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\contrib\aten -IC:\actions-runner\_work\pytorch\pytorch\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\onnx -IC:\actions-runner\_work\pytorch\pytorch\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\foxi -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\api\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party -IC:\actions-runner\_work\pytorch\pytorch\caffe2\..\third_party\breakpad\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\..\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc -IC:\actions-runner\_work\pytorch\pytorch\third_party\miniz-2.0.8 -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\kineto\libkineto\src -IC:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed -IC:\actions-runner\_work\pytorch\pytorch\aten\src\TH -IC:\actions-runner\_work\pytorch\pytorch\aten\..\third_party\catch\single_include -IC:\actions-runner\_work\pytorch\pytorch\aten\src\ATen\.. -IC:\actions-runner\_work\pytorch\pytorch\build\caffe2\aten\src\ATen -IC:\actions-runner\_work\pytorch\pytorch\caffe2\core\nomnigraph\include -IC:\actions-runner\_work\pytorch\pytorch\c10\.. 
-IC:\actions-runner\_work\pytorch\pytorch\build\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\src\..\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\pthreadpool\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\cpuinfo\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm -IC:\actions-runner\_work\pytorch\pytorch\third_party\fbgemm\third_party\asmjit\src -IC:\actions-runner\_work\pytorch\pytorch\third_party\FP16\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\fmt\include -IC:\actions-runner\_work\pytorch\pytorch\build\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\gloo -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googlemock\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\googletest\googletest\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\protobuf\src -IC:\actions-runner\_work\pytorch\pytorch\build\win_tmp\mkl\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\XNNPACK\include -IC:\actions-runner\_work\pytorch\pytorch\third_party -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\eigen -IC:\Jenkins\Miniconda3\include -IC:\Jenkins\Miniconda3\lib\site-packages\numpy\core\include -IC:\actions-runner\_work\pytorch\pytorch\cmake\..\third_party\pybind11\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\mkl-dnn\include -IC:\actions-runner\_work\pytorch\pytorch\third_party\ideep\include -IC:\actions-runner\_work\pytorch\pytorch\caffe2 /DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -DHAVE_AVX512_CPU_DEFINITION -DHAVE_AVX2_CPU_DEFINITION /MD /O2 /Ob2 /DNDEBUG /w /bigobj -DNDEBUG -DCAFFE2_USE_GLOO -DUSE_GCC_GET_CPUID -DUSE_AVX -DUSE_AVX2 -DTH_HAVE_THREAD /EHsc /DNOMINMAX /wd4267 /wd4251 /wd4522 /wd4838 /wd4305 /wd4244 /wd4190 /wd4101 /wd4996 /wd4275 /bigobj -O2 -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/build/win_tmp/mkl/include -DCAFFE2_BUILD_MAIN_LIB -DONNX_BUILD_MAIN_LIB -std:c++14
2021-10-11T19:16:58.2420733Z 
2021-10-11T19:16:58.6437431Z [5047/5696] C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\bin\sccache-cl.exe   /TP @caffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\sequence_num.cpp.obj.rsp /showIncludes /Focaffe2\CMakeFiles\torch_cpu.dir\__\torch\csrc\distributed\c10d\sequence_num.cpp.obj /Fdcaffe2\CMakeFiles\torch_cpu.dir\ /FS -c C:\actions-runner\_work\pytorch\pytorch\torch\csrc\distributed\c10d\sequence_num.cpp
2021-10-11T19:16:58.6440563Z Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29337 for x64
2021-10-11T19:16:58.6441532Z Copyright (C) Microsoft Corporation.  All rights reserved.
2021-10-11T19:16:58.6443093Z 
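
C2010 means the macro's formal parameter list contains a character MSVC does not allow there; with '.', a common cause is `...` appearing in a form MSVC's preprocessor rejects, such as a GNU-style named variadic parameter. The snippet below is a minimal illustration of that failure mode next to the portable `__VA_ARGS__` form; it is an assumption offered for context, not the actual code in Exceptions.h:

```cpp
// Illustration only; an assumption about the failure mode, not Exceptions.h itself.

// GNU extension: a *named* variadic macro parameter. GCC/Clang accept it, but
// MSVC rejects the '.' characters in the parameter list (error C2010):
// #define CHECK_MSG(cond, args...) report((cond), args)

// Portable C++11 form: an unnamed `...` parameter expanded via __VA_ARGS__ is
// accepted by GCC, Clang, and MSVC alike.
#include <iostream>

inline void report(bool cond, const char* msg) {
  if (!cond) std::cerr << "check failed: " << msg << "\n";
}

#define CHECK_MSG(cond, ...) report((cond), __VA_ARGS__)

int main() {
  CHECK_MSG(false, "tensors must be on the same device");
}
```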

This comment was automatically generated by Dr. CI.

@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D29924464

@pytorch-probot (bot) commented Oct 6, 2021

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/r-barnes/pytorch/blob/858c7527a9fd0f9f2efdfc7c7c5ff99a7beece23/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default

| Workflows | Labels | Status |
| --- | --- | --- |
| **Triggered Workflows** | | |
| linux-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/xla | ✅ triggered |
| linux-vulkan-bionic-py3.6-clang9 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/vulkan | ✅ triggered |
| linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-clang7-asan | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers | ✅ triggered |
| linux-xenial-py3.6-clang7-onnx | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx | ✅ triggered |
| linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| linux-xenial-py3.6-gcc7-bazel-test | ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux | ✅ triggered |
| win-vs2019-cpu-py3 | ciflow/all, ciflow/cpu, ciflow/default, ciflow/win | ✅ triggered |
| win-vs2019-cuda11.3-py3 | ciflow/all, ciflow/cuda, ciflow/default, ciflow/win | ✅ triggered |
| **Skipped Workflows** | | |
| libtorch-linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| libtorch-linux-xenial-cuda11.3-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux | 🚫 skipped |
| linux-bionic-cuda10.2-py3.9-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| linux-xenial-cuda10.2-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow | 🚫 skipped |
| parallelnative-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |
| periodic-libtorch-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck | 🚫 skipped |
| periodic-linux-xenial-cuda11.1-py3.6-gcc7 | ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled | 🚫 skipped |
| periodic-win-vs2019-cuda11.1-py3 | ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win | 🚫 skipped |
| puretorch-linux-xenial-py3.6-gcc5.4 | ciflow/all, ciflow/cpu, ciflow/linux | 🚫 skipped |

You can add a comment to the PR and tag @pytorchbot with the following commands:
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.

r-barnes added a commit to r-barnes/FBGEMM that referenced this pull request Oct 11, 2021
Summary:
Pull Request resolved: pytorch/pytorch#62653

This consolidates checks determining whether tensors live on the same device into a single line using template parameter packs to unroll the check code.

The advantage of using the new checking syntax is that it makes it easy to use static analysis to determine both if the check is present and whether or not it is comprehensive. D30072495 includes a linter which performs this action.

Note that this is especially useful for PyTorch extensions which don't receive this check automatically from codegen.

Reviewed By: ngimel

Differential Revision: D29924464

fbshipit-source-id: 18110f07f5b2dba9d231f767cfb0532849255bc7

r-barnes added a commit to r-barnes/FBGEMM that referenced this pull request Oct 11, 2021
Summary:
Pull Request resolved: pytorch#728

Pull Request resolved: pytorch/pytorch#62653

This consolidates checks determining whether tensors live on the same device into a single line using template parameter packs to unroll the check code.

The advantage of using the new checking syntax is that it makes it easy to use static analysis to determine both if the check is present and whether or not it is comprehensive. D30072495 includes a linter which performs this action.

Note that this is especially useful for PyTorch extensions which don't receive this check automatically from codegen.

Reviewed By: ngimel

Differential Revision: D29924464

fbshipit-source-id: dd2bc7f163366ca43e8eb00da0227e9ef972c636

Summary:
Pull Request resolved: pytorch/FBGEMM#728

Pull Request resolved: pytorch#62653

This consolidates checks determining whether tensors live on the same device into a single line using template parameter packs to unroll the check code.

The advantage of using the new checking syntax is that it makes it easy to use static analysis to determine both if the check is present and whether or not it is comprehensive. D30072495 includes a linter which performs this action.

Note that this is especially useful for PyTorch extensions which don't receive this check automatically from codegen.

Test Plan:
```
buck test //caffe2/torch/fb/sparsenn:gpu_test
buck test //caffe2/torch/fb/sparsenn:test
```

Reviewed By: ngimel

Differential Revision: D29924464

fbshipit-source-id: 6c575dda8b707eb6df7e9675d2bb62ec8e541753

@github-actions (bot)

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

github-actions bot added the Stale label on May 21, 2022
github-actions bot closed this on Jun 20, 2022