Build libtorch with -D_GLIBCXX_USE_CXX11_ABI=1 #14620

Open
xin-xinhanggao opened this issue Nov 30, 2018 · 9 comments

@xin-xinhanggao

commented Nov 30, 2018

It seems that the latest libtorch library is built with -D_GLIBCXX_USE_CXX11_ABI=0, which causes link errors with other C++ projects built with -D_GLIBCXX_USE_CXX11_ABI=1.
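For context, whichever side gets rebuilt, the define has to match on both sides of the link. A minimal sketch of configuring a downstream CMake project against the prebuilt (old-ABI) libtorch, with hypothetical paths, would be:

cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch -DCMAKE_CXX_FLAGS="-D_GLIBCXX_USE_CXX11_ABI=0" ..
make

(Drop the define, or set it to 1, if the libtorch being linked was itself built with -D_GLIBCXX_USE_CXX11_ABI=1.)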

When I build PyTorch with gcc 5.4.0 and cmake 3.5.0 like this:

cd tools
python build_libtorch.py

I get an error like this:

[ 90%] Built target event_test
[ 90%] Built target net_async_tracing_test
[ 90%] Built target operator_test
[ 98%] Built target torch
[ 98%] Built target test_jit
[100%] Built target test_api
[100%] Built target cuda_packedtensoraccessor_test
[100%] Built target integer_divider_test
[100%] Built target cuda_half_test
[100%] Built target cuda_optional_test
Install the project...
-- Install configuration: "Release"
CMake Error at third_party/ideep/mkl-dnn/cmake_install.cmake:40 (file):
  file INSTALL cannot find
  "/home/xhg/pytorch/third_party/ideep/mkl-dnn/-fopenmp".
Call Stack (most recent call first):
  cmake_install.cmake:86 (include)
  
Makefile:61: recipe for target 'install' failed
make: *** [install] Error 1
Traceback (most recent call last):
  File "build_libtorch.py", line 38, in <module>
    subprocess.check_call(command, universal_newlines=True)
  File "/usr/lib/python2.7/subprocess.py", line 541, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/home/xhg/pytorch/tools/build_pytorch_libs.sh', '--use-nnpack', '--use-mkldnn', '--use-cuda', 'caffe2']' returned non-zero exit status 2
@goldsborough

Contributor

commented Nov 30, 2018

Try:

mkdir build
cd build
python ../tools/build_libtorch.py
@xin-xinhanggao

Author

commented Dec 2, 2018

@goldsborough Thanks for your advice. I built libtorch myself with the CXX flag -D_GLIBCXX_USE_CXX11_ABI=1 (I added -D_GLIBCXX_USE_CXX11_ABI=1 at line 127 of tools/build_pytorch_libs.sh).

But when I try to load a module using torch::jit::load, I get this error:

terminate called after throwing an instance of 'c10::Error'
what(): memcmp("PYTORCH1", buf, kMagicValueLength) != 0 ASSERT FAILED at /home/xhg/new_pytorch/pytorch/caffe2/serialize/inline_container.cc:75, please report a bug to PyTorch. File is an unsupported archive format from the preview release. (PyTorchStreamReader at /home/xhg/new_pytorch/pytorch/caffe2/serialize/inline_container.cc:75)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x57 (0x7f47d137a1a7 in /home/xhg/libtorch/lib/libc10.so)
frame #1: torch::jit::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::istream*) + 0x629 (0x7f47e46b72c9 in /home/xhg/libtorch/lib/libcaffe2.so)
frame #2: torch::jit::load(std::istream&) + 0x2ae (0x7f47e8495cce in /home/xhg/libtorch/lib/libtorch.so.1)
frame #3: torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x42 (0x7f47e8495ea2 in /home/xhg/libtorch/lib/libtorch.so.1)
frame #4: main + 0x54 (0x401557 in ./example-app)
frame #5: __libc_start_main + 0xf0 (0x7f47d15a7830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: _start + 0x29 (0x401399 in ./example-app)

Aborted (core dumped)
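For what it's worth, the failing assert is just comparing the start of the archive against the PYTORCH1 magic string, so a quick sanity check (assuming the magic sits at the beginning of the file; model.pt is a placeholder path) is:

head -c 8 model.pt; echo

If PYTORCH1 does not show up there, the file was written in a serialization format this libtorch build does not read.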

@Oldpan


commented Dec 3, 2018

@xin-xinhanggao I am running into the same situation as you. Have you made any progress on this?

terminate called after throwing an instance of 'c10::Error'
  what():  memcmp("PYTORCH1", buf, kMagicValueLength) != 0 ASSERT FAILED at /home/prototype/Downloads/pytorch/caffe2/serialize/inline_container.cc:75, please report a bug to PyTorch. File is an unsupported archive format from the preview release. (PyTorchStreamReader at /home/prototype/Downloads/pytorch/caffe2/serialize/inline_container.cc:75)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6c (0x7fc355316f1c in /home/prototype/Downloads/pytorch/torch/lib/tmp_install/lib/libc10.so)
frame #1: torch::jit::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::istream*) + 0x6fc (0x7fc36793488c in /home/prototype/Downloads/pytorch/torch/lib/tmp_install/lib/libcaffe2.so)
frame #2: torch::jit::load(std::istream&) + 0x2c5 (0x7fc36adfb9f5 in /home/prototype/Downloads/pytorch/torch/lib/tmp_install/lib/libtorch.so.1)
frame #3: torch::jit::load(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x55 (0x7fc36adfbc15 in /home/prototype/Downloads/pytorch/torch/lib/tmp_install/lib/libtorch.so.1)
frame #4: ./simnet() [0x404e67]
frame #5: __libc_start_main + 0xf0 (0x7fc352e60830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #6: ./simnet() [0x406be9]
@xin-xinhanggao

Author

commented Dec 3, 2018

@Oldpan It seems that we should export the model.pt file using the latest PyTorch version. That fixed it for me.

@Oldpan


commented Dec 3, 2018

@xin-xinhanggao
Yeah. The same, though not necessarily the latest, version of PyTorch will work: export model.pt and build libtorch from the same version of PyTorch, and it will work.
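To make that concrete, one way to confirm both sides come from the same source (assuming libtorch was built from a local pytorch checkout; the path is a placeholder) is to compare the interpreter's torch version with the checkout's HEAD:

python -c "import torch; print(torch.__version__)"
git -C /path/to/pytorch rev-parse --short HEAD

For a source build the version string usually embeds the commit hash, so the two should line up.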

@xin-xinhanggao

Author

commented Dec 3, 2018

@Oldpan Could you please tell me which folder you use as libtorch? When I built it as @goldsborough said, I got the lib folder at pytorch/torch/lib/tmp_install, but in tmp_install/lib I cannot find libcaffe2_gpu.so.

@Oldpan


commented Dec 3, 2018

@xin-xinhanggao
Hi. My cmake option is -DCMAKE_PREFIX_PATH=/home/prototype/Documents/pytorch/torch/lib/tmp_install, just like you said. The tmp_install folder works for me.
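For completeness, the downstream configure step with that prefix (assuming an example-app build directory as in the tutorial) is just:

cd example-app/build
cmake -DCMAKE_PREFIX_PATH=/home/prototype/Documents/pytorch/torch/lib/tmp_install ..
make

and the binary then links against the libraries under tmp_install/lib.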

@NathanSegerlind


commented Dec 4, 2018

Try:

mkdir build
cd build
python ../tools/build_libtorch.py

I've been having this problem with libtorch. I tried the suggestion above today with PyTorch at the latest commit 9e1f4ba, and the build failed while compiling some of the CUDA libraries with "nvlink fatal : Internal error: reference to deleted section".

Interestingly enough, I was able to build PyTorch proper with python setup.py install.

Not sure if this is related or if it should be another issue.

@LvJC


commented Dec 18, 2018

@xin-xinhanggao @Oldpan Aren't you two both Chinese?...
