
Please Help! ImportError: No module named cpu_nms (AGAIN) #91

Closed
fire17 opened this issue Aug 19, 2018 · 8 comments

@fire17

fire17 commented Aug 19, 2018

Hi there!
Finally I've got TensorFlow (GPU) and Torch (after following issue #10) installed properly
(using Python 2.7).

When running ./run.sh --indir examples/demo/ --outdir examples/results/ --vis
I get a bunch of errors, the first of which is: ImportError: No module named cpu_nms
(hopefully this is a chain reaction and not multiple separate problems)

0
generating bbox from Faster RCNN...
Traceback (most recent call last):
  File "demo-alpha-pose.py", line 22, in <module>
    from newnms.nms import  soft_nms
  File "/home/magic/AlphaPose/human-detection/tools/../lib/newnms/nms.py", line 3, in <module>
    from cpu_nms import cpu_nms, cpu_soft_nms
ImportError: No module named cpu_nms
pose estimation with RMPE...
/home/magic/torch/install/bin/lua: /home/magic/torch/install/share/lua/5.2/trepl/init.lua:389: /home/magic/torch/install/share/lua/5.2/hdf5/ffi.lua:56: expected align(#) on line 579
stack traceback:
	[C]: in function 'error'
	/home/magic/torch/install/share/lua/5.2/trepl/init.lua:389: in function 'require'
	/home/magic/AlphaPose/predict/util.lua:7: in main chunk
	[C]: in function 'dofile'
	/home/magic/torch/install/share/lua/5.2/paths/init.lua:84: in function 'dofile'
	main-alpha-pose.lua:7: in main chunk
	[C]: in function 'dofile'
	...agic/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: in ?
/home/magic/.local/lib/python2.7/site-packages/h5py/__init__.py:36: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._conv import register_converters as _register_converters
/home/magic/.local/lib/python2.7/site-packages/h5py/__init__.py:45: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from . import h5a, h5d, h5ds, h5f, h5fd, h5g, h5r, h5s, h5t, h5p, h5z
/home/magic/.local/lib/python2.7/site-packages/h5py/_hl/group.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from .. import h5g, h5i, h5o, h5r, h5t, h5l, h5p
Traceback (most recent call last):
  File "parametric-pose-nms-MPII.py", line 256, in <module>
    get_result_json(args)
  File "parametric-pose-nms-MPII.py", line 243, in get_result_json
    test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
  File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
    h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
  File "/home/magic/.local/lib/python2.7/site-packages/h5py/_hl/files.py", line 312, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/home/magic/.local/lib/python2.7/site-packages/h5py/_hl/files.py", line 142, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/magic/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
  File "json-video.py", line 63, in <module>
    with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/magic/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'

I've seen other people with this issue in #41 and #49,
but those were closed without a real solution...

I tried to fix this myself...
I found the nms files in AlphaPose/human-detection/lib/ (nms and newnms),
added their dirs to PYTHONPATH,
and made copies of the .pyx files as .py files.
That solved the import error,
but then there were a bunch of syntax errors because of the Cython syntax ("cimport", "cdef", etc.).
I changed everything to plain Python syntax,
and that actually worked! It solved all the import problems, but then...
I tried to do the same for gpu_nms, but I couldn't figure out a plain-Python equivalent of

cdef extern from "gpu_nms.hpp":
    void _nms(np.int32_t*, int*, np.float32_t*, int, int, float, int)

(not to mention that there's no actual void _nms function in the gpu_nms.cpp file, so I can't try to convert the C++ logic to Python either, and even if I got the syntax right for the extern declaration, it wouldn't load anything) 😢
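(For what it's worth, cdef extern has no direct pure-Python equivalent: it only tells Cython which C/CUDA function to link against at build time. The closest pure-Python analogue would be a ctypes binding, roughly like the sketch below. This is only an illustration: it assumes the CUDA NMS code were compiled into a standalone shared library, and the library name libgpu_nms.so plus the argument meanings are my guesses based on the standard Faster R-CNN NMS code, not something this repo actually produces.)

# Illustration only: a ctypes analogue of the cdef extern declaration above.
# Assumes a hypothetical standalone shared library "libgpu_nms.so" exporting
# _nms; argument meanings are guesses from the standard Faster R-CNN code.
import ctypes

lib = ctypes.cdll.LoadLibrary("libgpu_nms.so")   # hypothetical library name
lib._nms.restype = None
lib._nms.argtypes = [
    ctypes.POINTER(ctypes.c_int32),   # np.int32_t*   -> kept box indices (output)
    ctypes.POINTER(ctypes.c_int),     # int*          -> number of boxes kept (output)
    ctypes.POINTER(ctypes.c_float),   # np.float32_t* -> flattened, sorted boxes
    ctypes.c_int,                     # int   -> number of boxes
    ctypes.c_int,                     # int   -> box dimension
    ctypes.c_float,                   # float -> NMS overlap threshold
    ctypes.c_int,                     # int   -> GPU device id
]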

So all in all, I'm sure there's a better solution than editing all those files manually...
Ubuntu 16.04
TensorFlow, Torch, CUDA, cuDNN, and all dependencies compiled/installed properly

Please help!
Has anyone figured this out?
@Fang-Haoshu @hd120105 @luoyuncen

Thank you so much!
Tami

@Fang-Haoshu
Member

Have you followed the installation instructions to compile and install all the necessary libs and dependencies? Especially step 1.

@fire17
Author

fire17 commented Aug 19, 2018

yes of course
followed all instructions

@Fang-Haoshu
Member

Fang-Haoshu commented Aug 19, 2018

including

cd AlphaPose/human-detection/lib/
make clean
make
cd newnms/
make
cd ../../../

?
That should build nms properly so that cpu_nms can be found.
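One quick way to confirm the build actually produced the extension is a small import test like this (just a sanity-check sketch; adjust the path to wherever you cloned AlphaPose):

# Sanity check: the compiled cpu_nms extension should be importable
# from the newnms directory once make has finished.
import os, sys
sys.path.insert(0, os.path.expanduser("~/AlphaPose/human-detection/lib/newnms"))
import cpu_nms            # succeeds only if make produced cpu_nms.so here
print(cpu_nms.__file__)   # should point at .../lib/newnms/cpu_nms.so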

@fire17
Author

fire17 commented Aug 19, 2018

yes of course
followed all instructions

I will delete the entire AlphaPose folder and try again, but yes, I did that....

@Fang-Haoshu
Member

Thanks. Did you hit any errors when running these commands?

@fire17
Author

fire17 commented Aug 19, 2018

OK! Looks like this is resolved (though I got a different error; you can close this issue, and I'll open a new one for the new error).
Still, I'm posting everything I got so far...

I'm pretty sure everything was OK the last time.
Now I've deleted AlphaPose, recloned it,
and run the instructions again.
Here are the outputs for each step:
cd AlphaPose/human-detection/lib/
make clean

rm -rf */*.pyc
rm -rf */*.so

make

python2 setup.py build_ext --inplace
running build_ext
cythoning utils/bbox.pyx to utils/bbox.c
building 'utils.cython_bbox' extension
creating build
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/utils
{'gcc': ['-Wno-cpp', '-Wno-unused-function']}
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/magic/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c utils/bbox.c -o build/temp.linux-x86_64-2.7/utils/bbox.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/utils/bbox.o -o /home/magic/AlphaPose/human-detection/lib/utils/cython_bbox.so
cythoning utils/nms.pyx to utils/nms.c
building 'utils.cython_nms' extension
{'gcc': ['-Wno-cpp', '-Wno-unused-function']}
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/magic/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c utils/nms.c -o build/temp.linux-x86_64-2.7/utils/nms.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/utils/nms.o -o /home/magic/AlphaPose/human-detection/lib/utils/cython_nms.so
rm -rf build

cd newnms/
make

python2 setup_linux.py build_ext --inplace
running build_ext
skipping 'cpu_nms.c' Cython extension (up-to-date)
building 'cpu_nms' extension
creating build
creating build/temp.linux-x86_64-2.7
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/magic/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c cpu_nms.c -o build/temp.linux-x86_64-2.7/cpu_nms.o -Wno-cpp -Wno-unused-function
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/cpu_nms.o -o /home/magic/AlphaPose/human-detection/lib/newnms/cpu_nms.so
skipping 'gpu_nms.cpp' Cython extension (up-to-date)
building 'gpu_nms' extension
/usr/local/cuda/bin/nvcc -I/home/magic/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/cuda/include -I/usr/include/python2.7 -c nms_kernel.cu -o build/temp.linux-x86_64-2.7/nms_kernel.o -arch=sm_35 --ptxas-options=-v -c --compiler-options '-fPIC'
ptxas info    : 0 bytes gmem
ptxas info    : Compiling entry function '_Z10nms_kernelifPKfPy' for 'sm_35'
ptxas info    : Function properties for _Z10nms_kernelifPKfPy
    0 bytes stack frame, 0 bytes spill stores, 0 bytes spill loads
ptxas info    : Used 22 registers, 1280 bytes smem, 344 bytes cmem[0], 12 bytes cmem[2]
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/magic/.local/lib/python2.7/site-packages/numpy/core/include -I/usr/local/cuda/include -I/usr/include/python2.7 -c gpu_nms.cpp -o build/temp.linux-x86_64-2.7/gpu_nms.o -Wno-unused-function
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/magic/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1821:0,
                 from /home/magic/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:18,
                 from /home/magic/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4,
                 from gpu_nms.cpp:449:
/home/magic/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
 #warning "Using deprecated NumPy API, disable it by " \
  ^
c++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/nms_kernel.o build/temp.linux-x86_64-2.7/gpu_nms.o -L/usr/local/cuda/lib64 -Wl,-R/usr/local/cuda/lib64 -lcudart -o /home/magic/AlphaPose/human-detection/lib/newnms/gpu_nms.so
rm -rf build

Is that all right? I'm pretty sure that's the same as last time...

later:
chmod +x install.sh
./install.sh

Collecting easydict
Installing collected packages: easydict
Successfully installed easydict-1.8
You are using pip version 8.1.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting tqdm
  Using cached https://files.pythonhosted.org/packages/7d/e6/19dfaff08fcbee7f3453e5b537e65a8364f1945f921a36d08be1e2ff3475/tqdm-4.24.0-py2.py3-none-any.whl
Installing collected packages: tqdm
Successfully installed tqdm-4.24.0
You are using pip version 8.1.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting opencv-python
  Using cached https://files.pythonhosted.org/packages/60/6a/dcc146a95bc8bde469958ee3ae693a8721798c5f9da7ea58e5a580754610/opencv_python-3.4.2.17-cp27-cp27mu-manylinux1_x86_64.whl
Collecting numpy>=1.11.1 (from opencv-python)
  Using cached https://files.pythonhosted.org/packages/85/51/ba4564ded90e093dbb6adfc3e21f99ae953d9ad56477e1b0d4a93bacf7d3/numpy-1.15.0-cp27-cp27mu-manylinux1_x86_64.whl
Installing collected packages: numpy, opencv-python
Successfully installed numpy-1.15.0 opencv-python-3.4.2.17
You are using pip version 8.1.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting h5py
  Using cached https://files.pythonhosted.org/packages/33/0c/1c5dfa85e05052aa5f50969d87c67a2128dc39a6f8ce459a503717e56bd0/h5py-2.8.0-cp27-cp27mu-manylinux1_x86_64.whl
Collecting six (from h5py)
  Using cached https://files.pythonhosted.org/packages/67/4b/141a581104b1f6397bfa78ac9d43d8ad29a7ca43ea90a2d863fe3056e86a/six-1.11.0-py2.py3-none-any.whl
Collecting numpy>=1.7 (from h5py)
  Using cached https://files.pythonhosted.org/packages/85/51/ba4564ded90e093dbb6adfc3e21f99ae953d9ad56477e1b0d4a93bacf7d3/numpy-1.15.0-cp27-cp27mu-manylinux1_x86_64.whl
Installing collected packages: six, numpy, h5py
Successfully installed h5py-2.8.0 numpy-1.15.0 six-1.11.0
You are using pip version 8.1.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Installing https://raw.githubusercontent.com/torch/rocks/master/hdf5-20-0.rockspec...
Using https://raw.githubusercontent.com/torch/rocks/master/hdf5-20-0.rockspec... switching to 'build' mode
Cloning into 'torch-hdf5'...
remote: Counting objects: 45, done.
remote: Compressing objects: 100% (37/37), done.
remote: Total 45 (delta 2), reused 27 (delta 2), pack-reused 0
Receiving objects: 100% (45/45), 29.96 KiB | 0 bytes/s, done.
Resolving deltas: 100% (2/2), done.
Checking connectivity... done.
cmake -E make_directory build;
cd build;
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_PREFIX_PATH="/home/magic/torch/install/bin/.." -DCMAKE_INSTALL_PREFIX="/home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0";
make
   
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Torch7 in /home/magic/torch/install
-- Found HDF5: /usr/lib/x86_64-linux-gnu/hdf5/serial/lib/libhdf5.so;/usr/lib/x86_64-linux-gnu/libpthread.so;/usr/lib/x86_64-linux-gnu/libsz.so;/usr/lib/x86_64-linux-gnu/libz.so;/usr/lib/x86_64-linux-gnu/libdl.so;/usr/lib/x86_64-linux-gnu/libm.so (found suitable version "1.8.16", minimum required is "1.8") 
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/luarocks_hdf5-20-0-5962/torch-hdf5/build
cd build && make install
Install the project...
-- Install configuration: "Release"
-- Generating /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/config.lua
-- Installing: /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/dataset.lua
-- Installing: /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/testUtils.lua
-- Installing: /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/file.lua
-- Installing: /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/group.lua
-- Installing: /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/datasetOptions.lua
-- Installing: /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/ffi.lua
-- Installing: /home/magic/torch/install/lib/luarocks/rocks/hdf5/20-0/lua/hdf5/init.lua
Updating manifest for /home/magic/torch/install/lib/luarocks/rocks
hdf5 20-0 is now built and installed in /home/magic/torch/install/ (license: BSD)

then:
chmod +x fetch_models.sh
./fetch_models.sh

--2018-08-19 19:52:41--  http://mvig.sjtu.edu.cn/publications/rmpe/output.zip
Resolving mvig.sjtu.edu.cn (mvig.sjtu.edu.cn)... 202.121.182.216
Connecting to mvig.sjtu.edu.cn (mvig.sjtu.edu.cn)|202.121.182.216|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 475973264 (454M) [application/zip]
Saving to: ‘output.zip’

output.zip          100%[===================>] 453.92M  2.45MB/s    in 3m 18s  

2018-08-19 19:55:59 (2.30 MB/s) - ‘output.zip’ saved [475973264/475973264]

Archive:  output.zip
   creating: output/res152/
   creating: output/res152/coco_2014_train+coco_2014_valminusminival/
   creating: output/res152/coco_2014_train+coco_2014_valminusminival/default/
  inflating: output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt.data-00000-of-00001  
  inflating: output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt.index  
  inflating: output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt.meta  
  inflating: output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.pkl  
--2018-08-19 19:56:02--  http://mvig.sjtu.edu.cn/publications/rmpe/final_model.t7
Resolving mvig.sjtu.edu.cn (mvig.sjtu.edu.cn)... 202.121.182.216
Connecting to mvig.sjtu.edu.cn (mvig.sjtu.edu.cn)|202.121.182.216|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 229334152 (219M) [application/octet-stream]
Saving to: ‘final_model.t7’

final_model.t7      100%[===================>] 218.71M  2.70MB/s    in 1m 42s  

2018-08-19 19:57:44 (2.15 MB/s) - ‘final_model.t7’ saved [229334152/229334152]

and finally:
./run.sh --indir examples/demo/ --outdir examples/results/ --vis

0
generating bbox from Faster RCNN...
/home/magic/.local/lib/python2.7/site-packages/h5py/__init__.py:36: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._conv import register_converters as _register_converters
/home/magic/.local/lib/python2.7/site-packages/h5py/__init__.py:45: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from . import h5a, h5d, h5ds, h5f, h5fd, h5g, h5r, h5s, h5t, h5p, h5z
/home/magic/.local/lib/python2.7/site-packages/h5py/_hl/group.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from .. import h5g, h5i, h5o, h5r, h5t, h5l, h5p
2018-08-19 20:00:38.007923: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-08-19 20:00:38.082485: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-08-19 20:00:38.082915: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties: 
name: GeForce GTX 970 major: 5 minor: 2 memoryClockRate(GHz): 1.253
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.64GiB
2018-08-19 20:00:38.082934: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2018-08-19 20:00:38.285112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-19 20:00:38.285141: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971]      0 
2018-08-19 20:00:38.285146: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0:   N 
2018-08-19 20:00:38.285278: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3366 MB memory) -> physical GPU (device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0, compute capability: 5.2)
Loaded network ../output/res152/coco_2014_train+coco_2014_valminusminival/default/res152.ckpt
/home/magic/AlphaPose/examples/demo/

  0%|                                                     | 0/3 [00:00<?, ?it/s]2018-08-19 20:00:44.832560: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.58GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-08-19 20:00:44.974977: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.57GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
100%|█████████████████████████████████████████████| 3/3 [00:06<00:00,  2.21s/it]
pose estimation with RMPE...
/home/magic/torch/install/bin/lua: /home/magic/torch/install/share/lua/5.2/trepl/init.lua:389: /home/magic/torch/install/share/lua/5.2/hdf5/ffi.lua:56: expected align(#) on line 579
stack traceback:
	[C]: in function 'error'
	/home/magic/torch/install/share/lua/5.2/trepl/init.lua:389: in function 'require'
	/home/magic/AlphaPose/predict/util.lua:7: in main chunk
	[C]: in function 'dofile'
	/home/magic/torch/install/share/lua/5.2/paths/init.lua:84: in function 'dofile'
	main-alpha-pose.lua:7: in main chunk
	[C]: in function 'dofile'
	...agic/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
	[C]: in ?
/home/magic/.local/lib/python2.7/site-packages/h5py/__init__.py:36: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from ._conv import register_converters as _register_converters
/home/magic/.local/lib/python2.7/site-packages/h5py/__init__.py:45: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from . import h5a, h5d, h5ds, h5f, h5fd, h5g, h5r, h5s, h5t, h5p, h5z
/home/magic/.local/lib/python2.7/site-packages/h5py/_hl/group.py:22: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  from .. import h5g, h5i, h5o, h5r, h5t, h5l, h5p
Traceback (most recent call last):
  File "parametric-pose-nms-MPII.py", line 256, in <module>
    get_result_json(args)
  File "parametric-pose-nms-MPII.py", line 243, in get_result_json
    test_parametric_pose_NMS_json(delta1, delta2, mu, gamma,args.outputpath)
  File "parametric-pose-nms-MPII.py", line 99, in test_parametric_pose_NMS_json
    h5file = h5py.File(os.path.join(outputpath,"POSE/test-pose.h5"), 'r')
  File "/home/magic/.local/lib/python2.7/site-packages/h5py/_hl/files.py", line 312, in __init__
    fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr)
  File "/home/magic/.local/lib/python2.7/site-packages/h5py/_hl/files.py", line 142, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 78, in h5py.h5f.open
IOError: Unable to open file (unable to open file: name = '/home/magic/AlphaPose/examples/results/POSE/test-pose.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
visualization...
Traceback (most recent call last):
  File "json-video.py", line 63, in <module>
    with open(jsonpath) as f:
IOError: [Errno 2] No such file or directory: '/home/magic/AlphaPose/examples/results/POSE/alpha-pose-results-forvis.json'

(Different error, opening a new thread)
Thanks @Fang-Haoshu
deleting AlphaPose and recloning it + following the rest of the instructions fixed this issue

@Fang-Haoshu
Member

cool

@HqWei

HqWei commented Jun 17, 2019

You said, 'deleting AlphaPose and recloning it + following the rest of the instructions fixed this issue'. What are 'the rest of the instructions'?
