
Error for CPU inference #3316

Closed
loveunk opened this issue Jul 15, 2020 · 1 comment

loveunk commented Jul 15, 2020

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version.

Describe the bug
On a CPU-only machine, I launched an MMDetection 2.1 Docker container and tried to run CPU inference with mmdet.
I got an error in load_checkpoint().

Reproduction

  1. What command or script did you run?
    Custom Jupyter code
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
    No
  3. What dataset did you use?
    Custom dataset

Environment

  1. Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here.
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
sys.platform: linux
Python: 3.7.7 (default, Mar 23 2020, 22:36:06) [GCC 7.3.0]
CUDA available: False
GCC: gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
PyTorch: 1.5.0
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_INTERNAL_THREADPOOL_IMPL -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, 

TorchVision: 0.6.0a0+82fd1c8
OpenCV: 4.3.0
MMCV: 0.6.2
MMDetection: 2.1.0+8bc0b9c
MMDetection Compiler: GCC 7.4
MMDetection CUDA Compiler: 10.1
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback
If applicable, paste the error traceback here.

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-d780b8e88872> in <module>
      1 # build the model from a config file and a checkpoint file
----> 2 model = tools.infer_init(config=config_file, checkpoint=checkpoint_file, gpu_id=-1, debug=False)
      3 
      4 # infer each video frame, save the inference image, and show a progress bar
      5 for img in mmcv.track_iter_progress(imgs):

~/code/****/tools/infer.py in infer_init(config, checkpoint, gpu_id, batch_det, batch_size, debug)
    240 
    241     device = 'cuda:{}'.format(gpu_id) if gpu_id >= 0 else 'cpu'
--> 242     model = init_detector(load_config(config), checkpoint, device=device)
    243 
    244     infer_model = InferModel(model, multi_det, det_window_size, batch_det, batch_size, debug)

/mmdetection/mmdet/apis/inference.py in init_detector(config, checkpoint, device)

---> 37         checkpoint = load_checkpoint(model, checkpoint)
     38         if 'CLASSES' in checkpoint['meta']:
     39             model.CLASSES = checkpoint['meta']['CLASSES']
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
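For context, the `device` string passed into `init_detector` comes from the ternary at line 241 of `infer.py` in the traceback above. A minimal sketch of that selection logic (the function name is mine, for illustration):

```python
def select_device(gpu_id):
    # A negative gpu_id means "no GPU": fall back to CPU, mirroring
    # line 241 of infer.py shown in the traceback.
    return 'cuda:{}'.format(gpu_id) if gpu_id >= 0 else 'cpu'
```

So with `gpu_id=-1` the model is correctly built on `'cpu'`, but `load_checkpoint()` still tries to restore the checkpoint's CUDA-saved storages, which triggers the RuntimeError above.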

Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

I think we need to fill in the map_location parameter for load_checkpoint().
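A sketch of the idea: when CUDA is unavailable, the checkpoint's CUDA-saved storages must be remapped onto the CPU via the `map_location` argument that `torch.load` accepts. The helper below (a hypothetical name, written in pure Python so it runs without mmdet installed) picks the value that would be forwarded through `load_checkpoint` to `torch.load`:

```python
def pick_map_location(cuda_available):
    # On a CPU-only machine, remap CUDA-saved storages onto the CPU;
    # when CUDA is available, None lets torch.load keep the original
    # device placement of each tensor.
    return None if cuda_available else 'cpu'
```

In `init_detector` this would amount to something like `load_checkpoint(model, checkpoint, map_location=pick_map_location(torch.cuda.is_available()))`; the actual change is in the merged PR.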

loveunk added a commit to loveunk/mmdetection that referenced this issue Jul 15, 2020

loveunk commented Jul 16, 2020

Close this issue since PR #3318 has been merged.

@loveunk loveunk closed this as completed Jul 16, 2020
mike112223 pushed a commit to mike112223/mmdetection that referenced this issue Aug 25, 2020
FANGAreNotGnu pushed a commit to FANGAreNotGnu/mmdetection that referenced this issue Oct 23, 2023