
torch.utils.ffi is deprecated. How do I use cpp extensions instead? #15645

Closed
StrangeTcy opened this issue Dec 31, 2018 · 18 comments

@StrangeTcy

🐛 Bug

Trying to build code with a current pytorch under conda fails with the following error:

ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.

To Reproduce

Steps to reproduce the behavior:

  1. On Ubuntu 16.04 x64, download and install anaconda

  2. Create an environment and install pytorch there: conda install -c pytorch pytorch

  3. Follow the instructions to try running this code: https://github.com/ruotianluo/pytorch-faster-rcnn

  4. Get to the ./make.sh part.

  5. Get the error: ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead.

Expected behavior

./make.sh building all the code cleanly

Environment

Collecting environment information...
PyTorch version: 1.0.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 16.04.5 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~16.04~ppa1) 7.4.0
CMake version: version 3.13.20181022-g64947

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 7.5.17
GPU models and configuration: GPU 0: GeForce GTX 750 Ti
Nvidia driver version: 410.79
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_static_v7.a

Versions of relevant libraries:
[pip] Could not collect
[conda] Could not collect

@ezyang
Contributor

ezyang commented Jan 3, 2019

Take a look at https://pytorch.org/tutorials/advanced/cpp_extension.html

@ezyang ezyang closed this as completed Jan 3, 2019
@IssamLaradji

So there is no easy solution to this? It looks like it requires a full revamp of code that used to work for PyTorch <= 0.4.1.

@ezyang
Contributor

ezyang commented Jan 4, 2019

Unfortunately yes. An example port of some other ffi code is at pytorch/audio@8a41ecd, but some rewriting is necessary. Let us know if you need help.

@aliutkus

Damn, weeks of work in the trash...

Do you plan on deprecating low-level features like this on a regular basis, or is it really worth trying to write a cpp extension now?

@soumith
Member

soumith commented Jan 14, 2019

@aliutkus we deprecated the feature after 1.5 years, and we really apologize for not having a structured deprecation path for it (it was technically not possible). We don't plan to deprecate any public API on a regular basis, especially without deprecation warnings over a few releases. This was a special case, and we apologize.

@aliutkus

OK, great. Well, I guess it's time to switch from C to C++ =)

thanks for all the work

@sunny52juli

You should change it to import torch.utils.cpp_extension.

@csjunxu

csjunxu commented Feb 25, 2019

The problem is related to the version of PyTorch you installed. I downgraded from 1.0 to 0.4, and the problem was solved. Newer is not better!

@lerndeep

I am using torch version 1.4.0 but I get the same issue. How can I solve it?

@shamoons

Same, I tried import torch.utils.cpp_extension, but no dice.

@TonojiKiobya

TonojiKiobya commented Jun 27, 2021

NOTE: THE FOLLOWING SOLUTION WILL ENABLE YOU TO WORK WITH A HIGHER VERSION OF PYTORCH WITHOUT NECESSARILY NEEDING TO DOWNGRADE

This solution is found here: https://blog.csdn.net/weixin_44493291/article/details/113097883. I have just translated it.

Know that torch.utils.ffi is a module from PyTorch 0.4; later versions of PyTorch no longer include it.

If the problem is related to create_extension, then change the line

from torch.utils.ffi import create_extension

to:

from torch.utils.cpp_extension import BuildExtension

Also, wherever your code has xxx = create_extension(...), change it to xxx = BuildExtension(...).

If the error is about _wrap_function, then take the code below, put it in a file called ffiex.py, and in every file of your project that has from torch.utils.ffi import _wrap_function, change it to from ffiex import _wrap_function (following the usual import rules: if ffiex.py lives in a subfolder, use from .nameoffolder.ffiex import _wrap_function).

So the code is the following:

```python
import os
import glob
import tempfile
import shutil
from functools import wraps, reduce
from string import Template
import torch
import torch.cuda
from torch._utils import _accumulate

try:
    import cffi
except ImportError:
    raise ImportError("torch.utils.ffi requires the cffi package")

if cffi.__version_info__ < (1, 4, 0):
    raise ImportError("torch.utils.ffi requires cffi version >= 1.4, but "
                      "got " + '.'.join(map(str, cffi.__version_info__)))


def _generate_typedefs():
    typedefs = []
    for t in ['Double', 'Float', 'Long', 'Int', 'Short', 'Char', 'Byte']:
        for lib in ['TH', 'THCuda']:
            for kind in ['Tensor', 'Storage']:
                python_name = t + kind
                if t == 'Float' and lib == 'THCuda':
                    th_name = 'THCuda' + kind
                else:
                    th_name = lib + t + kind
                th_struct = 'struct ' + th_name

                typedefs += ['typedef {} {};'.format(th_struct, th_name)]
                # We have to assemble a string here, because we're going to
                # do this lookup based on tensor.type(), which returns a
                # string (not a type object, as this code was before)
                python_module = 'torch.cuda' if lib == 'THCuda' else 'torch'
                python_class = python_module + '.' + python_name
                _cffi_to_torch[th_struct] = python_class
                _torch_to_cffi[python_class] = th_struct
    return '\n'.join(typedefs) + '\n'

_cffi_to_torch = {}
_torch_to_cffi = {}
_typedefs = _generate_typedefs()


PY_MODULE_TEMPLATE = Template("""
from torch.utils.ffi import _wrap_function
from .$cffi_wrapper_name import lib as _lib, ffi as _ffi

__all__ = []
def _import_symbols(locals):
    for symbol in dir(_lib):
        fn = getattr(_lib, symbol)
        if callable(fn):
            locals[symbol] = _wrap_function(fn, _ffi)
        else:
            locals[symbol] = fn
        __all__.append(symbol)

_import_symbols(locals())
""")


def _setup_wrapper(with_cuda):
    here = os.path.abspath(os.path.dirname(__file__))
    lib_dir = os.path.join(here, '..', '..', 'lib')
    include_dirs = [
        os.path.join(lib_dir, 'include'),
        os.path.join(lib_dir, 'include', 'TH'),
    ]

    wrapper_source = '#include <TH/TH.h>\n'
    if with_cuda:
        import torch.cuda
        wrapper_source += '#include <THC/THC.h>\n'
        if os.sys.platform == 'win32':
            cuda_include_dirs = glob.glob(os.getenv('CUDA_PATH', '') + '/include')
            cuda_include_dirs += glob.glob(os.getenv('NVTOOLSEXT_PATH', '') + '/include')
        else:
            cuda_include_dirs = glob.glob('/usr/local/cuda/include')
            cuda_include_dirs += glob.glob('/Developer/NVIDIA/CUDA-*/include')
        include_dirs.append(os.path.join(lib_dir, 'include', 'THC'))
        include_dirs.extend(cuda_include_dirs)
    return wrapper_source, include_dirs


def _create_module_dir(base_path, fullname):
    module, _, name = fullname.rpartition('.')
    if not module:
        target_dir = name
    else:
        target_dir = reduce(os.path.join, fullname.split('.'))
    target_dir = os.path.join(base_path, target_dir)
    try:
        os.makedirs(target_dir)
    except os.error:
        pass
    for dirname in _accumulate(fullname.split('.'), os.path.join):
        init_file = os.path.join(base_path, dirname, '__init__.py')
        open(init_file, 'a').close()  # Create file if it doesn't exist yet
    return name, target_dir


def _build_extension(ffi, cffi_wrapper_name, target_dir, verbose):
    try:
        tmpdir = tempfile.mkdtemp()
        ext_suf = '.pyd' if os.sys.platform == 'win32' else '.so'
        libname = cffi_wrapper_name + ext_suf
        outfile = ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
        shutil.copy(outfile, os.path.join(target_dir, libname))
    finally:
        shutil.rmtree(tmpdir)


def _make_python_wrapper(name, cffi_wrapper_name, target_dir):
    py_source = PY_MODULE_TEMPLATE.substitute(name=name,
                                              cffi_wrapper_name=cffi_wrapper_name)
    with open(os.path.join(target_dir, '__init__.py'), 'w') as f:
        f.write(py_source)


def create_extension(name, headers, sources, verbose=True, with_cuda=False,
                     package=False, relative_to='.', **kwargs):
    base_path = os.path.abspath(os.path.dirname(relative_to))
    name_suffix, target_dir = _create_module_dir(base_path, name)
    if not package:
        cffi_wrapper_name = '_' + name_suffix
    else:
        cffi_wrapper_name = (name.rpartition('.')[0] +
                             '.{0}._{0}'.format(name_suffix))

    wrapper_source, include_dirs = _setup_wrapper(with_cuda)
    include_dirs.extend(kwargs.pop('include_dirs', []))

    if os.sys.platform == 'win32':
        library_dirs = glob.glob(os.getenv('CUDA_PATH', '') + '/lib/x64')
        library_dirs += glob.glob(os.getenv('NVTOOLSEXT_PATH', '') + '/lib/x64')

        here = os.path.abspath(os.path.dirname(__file__))
        lib_dir = os.path.join(here, '..', '..', 'lib')

        library_dirs.append(os.path.join(lib_dir))
    else:
        library_dirs = []
    library_dirs.extend(kwargs.pop('library_dirs', []))

    if isinstance(headers, str):
        headers = [headers]
    all_headers_source = ''
    for header in headers:
        with open(os.path.join(base_path, header), 'r') as f:
            all_headers_source += f.read() + '\n\n'

    ffi = cffi.FFI()
    sources = [os.path.join(base_path, src) for src in sources]
    # NB: TH headers are C99 now
    kwargs['extra_compile_args'] = ['-std=c99'] + kwargs.get('extra_compile_args', [])
    ffi.set_source(cffi_wrapper_name, wrapper_source + all_headers_source,
                   sources=sources,
                   include_dirs=include_dirs,
                   library_dirs=library_dirs, **kwargs)
    ffi.cdef(_typedefs + all_headers_source)

    _make_python_wrapper(name_suffix, '_' + name_suffix, target_dir)

    def build():
        _build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
    ffi.build = build
    return ffi


def _wrap_function(function, ffi):
    @wraps(function)
    def safe_call(*args, **kwargs):
        args = tuple(ffi.cast(_torch_to_cffi.get(arg.type(), 'void') + '*', arg._cdata)
                     if isinstance(arg, torch.Tensor) or torch.is_storage(arg)
                     else arg
                     for arg in args)
        args = (function,) + args
        result = torch._C._safe_call(*args, **kwargs)
        if isinstance(result, ffi.CData):
            typeof = ffi.typeof(result)
            if typeof.kind == 'pointer':
                cdata = int(ffi.cast('uintptr_t', result))
                cname = typeof.item.cname
                if cname in _cffi_to_torch:
                    # TODO: Maybe there is a less janky way to eval
                    # off of this
                    return eval(_cffi_to_torch[cname])(cdata=cdata)
        return result
    return safe_call
```

@TonojiKiobya

This link has a good solution (use Google Translate if you don't know the language):
https://blog.csdn.net/weixin_44493291/article/details/113097883

@Kaviarasan-R

This link has a good solution (use Google Translate if you don't know the language):
https://blog.csdn.net/weixin_44493291/article/details/113097883

After I changed the line, I am getting:
TypeError: __init__() missing 1 required positional argument: 'dist'

@marziyemahmoudifar

(quoting TonojiKiobya's ffiex.py solution above)

I have a similar problem, but I could not use this solution. Can you help me?

@JamesQian11

This link has a good solution (use Google Translate if you don't know the language):
https://blog.csdn.net/weixin_44493291/article/details/113097883

After I changed the line, I am getting TypeError: __init__() missing 1 required positional argument: 'dist'

Hi, did you solve this problem?

@marziyemahmoudifar

This link has a good solution (use Google Translate if you don't know the language):
https://blog.csdn.net/weixin_44493291/article/details/113097883

After I changed the line, I am getting TypeError: __init__() missing 1 required positional argument: 'dist'

Hi, did you solve this problem?

Hi, no, I could not solve that problem. I changed the version of PyTorch.

@aalvan

aalvan commented Jun 22, 2022

This link has a good solution (use Google Translate if you don't know the language):
https://blog.csdn.net/weixin_44493291/article/details/113097883

This worked for me.

@MurtazaAbdissamat

I have this error. Please help me!

File "C:\Users\Murtaza\Desktop\TDAN-VSR-CVPR-2020-master\build.py", line 3, in <module>
from torch.utils.ffi import create_extension

ModuleNotFoundError: No module named 'torch.utils.ffi'
