
gluon model_zoo get_model causes CUDA OOM when using multiple GPUs #13733

Closed
MyYaYa opened this issue Dec 27, 2018 · 1 comment

Comments

@MyYaYa

MyYaYa commented Dec 27, 2018

Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as the checklist for essential information to most of the technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in what you believe is the best form.

For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io

Description

I'm using Gluon's model zoo to fine-tune resnet50_v2. It works fine when I use 1-4 GPUs, but it breaks down when I use more than 4 GPUs, and the error is CUDA OOM.

Environment info (Required)

GPU: Tesla V100 32g
CUDA: 9.0
MXNET_VERSION: mxnet-cu90 1.3.1

----------Python Info----------
Version      : 3.6.7
Compiler     : GCC 4.9.2
Build        : ('default', 'Dec  8 2018 13:38:58')
Arch         : ('64bit', 'ELF')
------------Pip Info-----------
Version      : 18.1
Directory    : /usr/local/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version      : 1.3.1
Directory    : /usr/local/lib/python3.6/site-packages/mxnet
Commit Hash   : 19c501680183237d52a862e6ae1dc4ddc296305b
----------System Info----------
Platform     : Linux-4.9.0-0.bpo.6-amd64-x86_64-with-debian-8.9
system       : Linux
node         : n22-146-038
release      : 4.9.0-0.bpo.6-amd64
version      : #1 SMP Debian 4.9.88-1+deb9u1~bpo8+1 (2018-05-13)
----------Hardware Info----------
machine      : x86_64
processor    : 
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
Stepping:              4
CPU MHz:               2799.957
CPU max MHz:           3700.0000
CPU min MHz:           1000.0000
BogoMIPS:              4201.56
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              22528K
NUMA node0 CPU(s):     0-15,32-47
NUMA node1 CPU(s):     16-31,48-63
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.1624 sec, LOAD: 1.1204 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 1.2265 sec, LOAD: 3.4280 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 1.6494 sec, LOAD: 4.4747 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.1920 sec, LOAD: 2.4969 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.1085 sec, LOAD: 4.3755 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.4053 sec, LOAD: 3.9252 sec.

Package used (Python/R/Scala/Julia):
opencv-python

Error Message:

terminate called after throwing an instance of 'dmlc::Error'
what(): [14:52:43] /root/mxnet-rdma/3rdparty/mshadow/mshadow/./stream_gpu-inl.h:184: Check failed: e == cudaSuccess CUDA: out of memory

Stack trace returned 10 entries:
[bt] (0) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(dmlc::StackTrace(unsigned long)+0x49) [0x7f6e32816e59]
[bt] (1) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x1f) [0x7f6e3281735f]
[bt] (2) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(void mshadow::DeleteStream<mshadow::gpu>(mshadow::Stream<mshadow::gpu>*)+0xc0) [0x7f6e35b99800]
[bt] (3) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(mshadow::Stream<mshadow::gpu>* mshadow::NewStream<mshadow::gpu>(bool, bool, int)+0x523) [0x7f6e35b9a7d3]
[bt] (4) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(void mxnet::engine::ThreadedEnginePerDevice::GPUWorker<(dmlc::ConcurrentQueueType)1>(mxnet::Context, bool, mxnet::engine::ThreadedEnginePerDevice::ThreadWorkerBlock<(dmlc::ConcurrentQueueType)1>*, std::shared_ptr<dmlc::ManualEvent> const&)+0x89) [0x7f6e35bb8729]
[bt] (5) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(std::_Function_handler<void (std::shared_ptr<dmlc::ManualEvent>), mxnet::engine::ThreadedEnginePerDevice::PushToExecute(mxnet::engine::OprBlock*, bool)::{lambda()#2}::operator()() const::{lambda(std::shared_ptr<dmlc::ManualEvent>)#1}>::_M_invoke(std::_Any_data const&, std::shared_ptr<dmlc::ManualEvent>)+0x3e) [0x7f6e35bb8a2e]
[bt] (6) /usr/local/lib/python3.6/site-packages/mxnet-1.3.1-py3.6.egg/mxnet/libmxnet.so(std::thread::_Impl<std::_Bind_simple<std::function<void (std::shared_ptr<dmlc::ManualEvent>)> (std::shared_ptr<dmlc::ManualEvent>)> >::_M_run()+0x3b) [0x7f6e35ba5d8b]
[bt] (7) /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0xb6970) [0x7f6ecba34970]
[bt] (8) /lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f6eed1e2064]
[bt] (9) /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f6eec80f63d]

Minimum reproducible example

model = vision.get_model(name='resnet50_v2', pretrained=True, ctx=context)
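
For context, a self-contained version of the snippet above might look like the sketch below; the mxnet/vision imports and the 8-GPU context list are assumptions based on the report that the failure only appears with more than 4 GPUs:

import mxnet as mx
from mxnet.gluon.model_zoo import vision

# Assumed failing configuration: more than 4 GPUs (8 here).
context = [mx.gpu(i) for i in range(8)]

# Downloads the pretrained ResNet-50 v2 weights and copies the parameters
# onto every device in the context list; the OOM is reported during this step.
model = vision.get_model(name='resnet50_v2', pretrained=True, ctx=context)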

What have you tried to solve it?

  1. Using 1-4 GPUs works, e.g. context=[mx.gpu(0), mx.gpu(1), mx.gpu(2), mx.gpu(3)]
@MyYaYa
Author

MyYaYa commented Dec 27, 2018

It's clear now: I am working in a public cloud environment with Docker, and another user who shares the same instance with me runs PyTorch, which causes a GPU memory leak. It's unfortunate that nvidia-smi inside my Docker container cannot see that leak.

I will close this issue now.
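
As a side note, one hedged way to see how much device memory is actually free from inside a container is to query CUDA through MXNet itself; the sketch below assumes mx.context.gpu_memory_info and mx.context.num_gpus are available (they may not exist in older releases such as 1.3.1), and since the numbers come from cudaMemGetInfo they reflect memory held by other containers even when nvidia-smi does not list their processes:

import mxnet as mx

# Print free/total device memory for every visible GPU.
# Note: mx.context.gpu_memory_info may be missing in older MXNet versions.
for gpu_id in range(mx.context.num_gpus()):
    free, total = mx.context.gpu_memory_info(gpu_id)
    print('gpu(%d): %.1f GiB free of %.1f GiB' % (gpu_id, free / 1024**3, total / 1024**3))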

MyYaYa closed this as completed Dec 27, 2018