CUDA Extension Build Failure on Windows and MSVC 2017 #11004

@JackHunt

Issue description

Building a custom CUDA extension fails with the following error on MSVC 2017:
<env_lib_dir>\site-packages\torch\lib\include\pybind11\cast.h(1393): error: expression must be a pointer to a complete object type

The same code builds without this error under multiple GCC versions on Linux.

Code example

Here is an MWE.

setup.py

from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name='exten',
    ext_modules=[
        CUDAExtension('exten_cuda', ['exten_cuda.cpp', 'exten_cuda_kernel.cu']),
    ],
    cmdclass={'build_ext': BuildExtension},
)
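
For completeness, the build goes through the usual setuptools entry point; python setup.py install or python setup.py build_ext --inplace both trigger the same compile step (the exact invocation is assumed here, as it is not stated above).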

exten_cuda.cpp

#include <torch/torch.h>

#include <vector>

// Forward declarations of the CUDA implementations (defined in exten_cuda_kernel.cu).
std::vector<at::Tensor> exten_cuda_forward(at::Tensor a);
std::vector<at::Tensor> exten_cuda_backward(at::Tensor a);

// C++ interface; simply dispatches to the CUDA implementations.
std::vector<at::Tensor> exten_forward(at::Tensor a) {
  return exten_cuda_forward(a);
}

std::vector<at::Tensor> exten_backward(at::Tensor a) {
  return exten_cuda_backward(a);
}

// Python bindings.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("forward", &exten_forward, "Extension forward CUDA");
  m.def("backward", &exten_backward, "Extension backward CUDA");
}

exten_cuda_kernel.cu

#include <torch/torch.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <vector>

std::vector<at::Tensor> exten_cuda_forward(at::Tensor a) {
  // Do nothing.

  // Return empty vector (just for compile error demo).
  return {};
}

std::vector<at::Tensor> exten_cuda_backward(at::Tensor a) {
  // Do nothing.

  // Return empty vector (just for compile error demo).
  return {};
}
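
The issue carries a "has workaround" label. One commonly suggested mitigation for this class of nvcc/pybind11/MSVC failure (offered here only as a sketch, not taken from this report) is to keep pybind11 out of the nvcc-compiled translation unit: include only ATen/ATen.h in the .cu file and leave torch/torch.h, which pulls in the pybind11 headers, to the .cpp file that defines the bindings.

exten_cuda_kernel.cu (workaround sketch)

#include <ATen/ATen.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <vector>

// Only ATen types are needed in the kernel file; pybind11 stays confined to
// exten_cuda.cpp, so the nvcc host pass never has to parse pybind11/cast.h.
std::vector<at::Tensor> exten_cuda_forward(at::Tensor a) {
  // Placeholder body, mirroring the MWE above.
  return {};
}

std::vector<at::Tensor> exten_cuda_backward(at::Tensor a) {
  // Placeholder body, mirroring the MWE above.
  return {};
}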

System Info

Collecting environment information...
PyTorch version: 0.4.1
Is debug build: No
CUDA used to build PyTorch: 9.2

OS: Microsoft Windows 10 Education
GCC version: (x86_64-posix-seh, Built by strawberryperl.com project) 7.1.0
CMake version: version 3.12.0

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 9.2.148
GPU models and configuration: GPU 0: GeForce GTX 1060 6GB
Nvidia driver version: 399.07
cuDNN version: Could not collect

Versions of relevant libraries:
[pip] numpy (1.15.1)
[pip] torch (0.4.1)
[pip] torchvision (0.2.1)
[conda] cuda92                    1.0                           0    pytorch
[conda] pytorch                   0.4.1           py37_cuda92_cudnn7he774522_1  [cuda92]  pytorch
[conda] torchvision               0.2.1                     <pip>
  • PyTorch or Caffe2: PyTorch
  • How you installed PyTorch (conda, pip, source): conda
  • OS: Windows 10 x64
  • PyTorch version: 0.4.1
  • Python version: 3.7
  • CUDA/cuDNN version: 9.2
  • GPU models and configuration: GTX 1060 6GB
  • MSVC version (if compiling from source): 15.7.5
  • CMake version: 3.12.0
  • Versions of any other relevant libraries: setuptools 0.2.1

Labels

has workaround
module: cpp-extensions (Related to torch.utils.cpp_extension)
module: cuda (Related to torch.cuda, and CUDA support in general)
module: dependency bug (Problem is not caused by us, but caused by an upstream library we use)
module: pybind (Related to our Python bindings / interactions with other Python libraries)
module: windows (Windows support for PyTorch)
triaged (This issue has been looked at by a team member and triaged into an appropriate module)
