
colab demo to local #105

Closed
sbkim052 opened this issue Aug 9, 2020 · 9 comments

Comments

@sbkim052

sbkim052 commented Aug 9, 2020

Hi, thank you for sharing your nice work.
I think the Colab demo performs better than eval.py.
I tried to port the Colab demo to local code, but I am struggling with it.
Is there any way to run the Colab demo locally?

@sbkim052
Author

sbkim052 commented Aug 9, 2020

The error occurs while running "python setup.py build".

@ruotianluo
Owner

Not sure why; that part is not my code. It's a problem with https://gitlab.com/vedanuj/vqa-maskrcnn-benchmark.git.

What does the error say?

@sbkim052
Author

Hi @ruotianluo .

/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:150:56: warning: ‘c10::ScalarType detail::scalar_type(const at::DeprecatedTypeProperties&)’ is deprecated [-Wdeprecated-declarations]
at::ScalarType st = ::detail::scalar_type(the_type);
^
/home/urp3/project/ImageCaptioning.pytorch/vqa-maskrcnn-benchmark/maskrcnn_benchmark/csrc/cpu/nms_cpu.cpp:71:3: note: in expansion of macro ‘AT_DISPATCH_FLOATING_TYPES’
AT_DISPATCH_FLOATING_TYPES(dets.type(), "nms", [&] {
^
/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include/ATen/Dispatch.h:78:23: note: declared here
inline at::ScalarType scalar_type(const at::DeprecatedTypeProperties &t) {
^
/usr/bin/nvcc -DWITH_CUDA -I/home/urp3/project/ImageCaptioning.pytorch/vqa-maskrcnn-benchmark/maskrcnn_benchmark/csrc -I/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include -I/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include/TH -I/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include/THC -I/home/urp3/.conda/envs/IC/include/python3.6m -c /home/urp3/project/ImageCaptioning.pytorch/vqa-maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.cu -o build/temp.linux-x86_64-3.6/home/urp3/project/ImageCaptioning.pytorch/vqa-maskrcnn-benchmark/maskrcnn_benchmark/csrc/cuda/ROIAlign_cuda.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/lib/gcc/x86_64-linux-gnu/5/include/mwaitxintrin.h(36): error: identifier "__builtin_ia32_monitorx" is undefined

/usr/lib/gcc/x86_64-linux-gnu/5/include/mwaitxintrin.h(42): error: identifier "__builtin_ia32_mwaitx" is undefined

/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include/c10/util/Half-inl.h(21): error: identifier "__half_as_short" is undefined

/home/urp3/.conda/envs/IC/lib/python3.6/site-packages/torch/include/THC/THCNumerics.cuh(208): error: identifier "__half_as_ushort" is undefined

4 errors detected in the compilation of "/tmp/tmpxft_00011212_00000000-7_ROIAlign_cuda.cpp1.ii".
error: command '/usr/bin/nvcc' failed with exit status 2

These are some parts of the error message.
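[Editor's note] The `__builtin_ia32_monitorx` / `__half_as_short` errors above are characteristic of an nvcc that is too old for the installed gcc's intrinsics headers. A minimal sketch of that heuristic, assuming commonly reported version cutoffs (the cutoffs are not taken from this thread):

```python
import re

def parse_version(banner):
    """Extract the first dotted version in a compiler banner as a tuple of ints."""
    m = re.search(r"(\d+)\.(\d+)", banner)
    return tuple(int(g) for g in m.groups()) if m else None

def likely_header_clash(nvcc_banner, gcc_banner):
    """Heuristic only: CUDA <= 8.0 cannot parse the intrinsics headers of gcc >= 5.4,
    which produces exactly this kind of 'identifier ... is undefined' error."""
    nvcc, gcc = parse_version(nvcc_banner), parse_version(gcc_banner)
    return nvcc is not None and gcc is not None and nvcc <= (8, 0) and gcc >= (5, 4)

print(likely_header_clash("Cuda compilation tools, release 8.0, V8.0.61",
                          "gcc (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0"))  # → True
```

Comparing the output of `nvcc --version` and `gcc --version` this way is only a quick sanity check; the real fix is aligning the CUDA toolkit with a supported host compiler.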

@ruotianluo
Owner

Seems like a problem with vqa-maskrcnn. Maybe related to the torch version.
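[Editor's note] If it is a toolchain mismatch rather than torch itself, one commonly reported workaround for these mwaitxintrin/FP16 intrinsics errors is to define guard macros before rebuilding. A minimal sketch; both the flag names and the assumption that the build forwards `CFLAGS` to the compiler come from common reports of this error, not from vqa-maskrcnn-benchmark itself:

```python
# Hypothetical workaround: define guard macros so the old nvcc skips the
# gcc intrinsics headers it cannot parse. Flag names are from common reports
# of this error class, not from this repository's build scripts.
import os

guard_flags = "-D_MWAITXINTRIN_H_INCLUDED -D_FORCE_INLINES"
os.environ["CFLAGS"] = guard_flags  # assumption: the build honors CFLAGS
# then re-run the build from the same shell/process, e.g.:
#   python setup.py build
print(os.environ["CFLAGS"])
```

If the build ignores environment flags, the alternative is upgrading the CUDA toolkit or pointing nvcc at an older host gcc.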

@sbkim052
Author

Thank you @ruotianluo.
I have an additional question.
Just for inference (not training), should I use eval.py?
The performance of eval.py with the pretrained trans12 model does not seem that good: an image that was captioned correctly in Colab was not captioned correctly by eval.py.

@ruotianluo
Owner

How do you run eval.py? Did you download the 12-in-1 features shown here: https://github.com/ruotianluo/ImageCaptioning.pytorch/tree/master/data#image-features-option-3--vilbert-12-in-1-features?

@sbkim052
Author

$ python tools/eval.py --model model.pth --infos_path infos.pkl --image_folder blah --num_images 10

I used this command to run eval.py, with the pretrained model (trained with the vilbert-12-in-1 features) shown here:
https://github.com/ruotianluo/ImageCaptioning.pytorch/blob/master/MODEL_ZOO.md

@ruotianluo
Owner

eval.py only supports arbitrary-image evaluation with models trained on ResNet features.
What's in the Colab is the only way for now.

@sbkim052
Author

Thank you @ruotianluo
