
Imperative execution in MXNet with multiple GPUs does not run in parallel #16130

Closed
igolan opened this issue Sep 9, 2019 · 2 comments

Comments

@igolan
Contributor

igolan commented Sep 9, 2019

Description

When running MXNet in imperative (non-hybrid) mode with multiple GPUs, the GPUs do not appear to execute in parallel.
(This might be related to #8884.)
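For context, here is a minimal sketch of the kind of imperative multi-GPU training step this refers to, using gluon.utils.split_and_load to shard each batch across devices. The model and hyperparameters are illustrative placeholders, not the exact training script:

```python
import mxnet as mx
from mxnet import autograd, gluon

ctx = [mx.gpu(i) for i in range(4)]                     # one context per GPU
net = gluon.model_zoo.vision.resnet18_v2(classes=10)    # stand-in for the CIFAR model
net.initialize(mx.init.Xavier(), ctx=ctx)
# NOTE: no net.hybridize() call here -- this is the imperative path.
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

def train_step(data, label, batch_size):
    # Shard the batch across the four GPUs.
    data_list = gluon.utils.split_and_load(data, ctx_list=ctx)
    label_list = gluon.utils.split_and_load(label, ctx_list=ctx)
    with autograd.record():
        losses = [loss_fn(net(X), y) for X, y in zip(data_list, label_list)]
    for l in losses:
        l.backward()
    trainer.step(batch_size)
    # Operators are dispatched asynchronously, so the per-GPU forward/backward
    # passes are expected to overlap rather than run one after another.
```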

Environment info (Required)

(mxnet_p36) ubuntu:~$ python diagnose.py
----------Python Info----------
Version      : 3.6.5
Compiler     : GCC 7.2.0
Build        : ('default', 'Apr 29 2018 16:14:56')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 10.0.1
Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version      : 1.4.1
Directory    : /home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
Commit hash file "/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/COMMIT_HASH" not found. Not installed from pre-built package or built from source.
Library      : ['/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet/libmxnet.so']
Build features:
No runtime build feature info available
----------System Info----------
Platform     : Linux-4.4.0-1092-aws-x86_64-with-debian-stretch-sid
system       : Linux
node         : ip-XXX-XX-XX-XXX
release      : 4.4.0-1092-aws
version      : #103-Ubuntu SMP Tue Aug 27 10:21:48 UTC 2019
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping:              1
CPU MHz:               2193.175
CPU max MHz:           3000.0000
CPU min MHz:           1200.0000
BogoMIPS:              4600.13
Hypervisor vendor:     Xen
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              46080K
NUMA node0 CPU(s):     0-15,32-47
NUMA node1 CPU(s):     16-31,48-63
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq monitor est ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt ida

----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0020 sec, LOAD: 0.5027 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1331 sec, LOAD: 0.4725 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.2106 sec, LOAD: 0.5541 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0124 sec, LOAD: 0.2240 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0128 sec, LOAD: 0.2566 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0133 sec, LOAD: 0.0977 sec.
----------Environment----------

Package used: Python

Build info (Required if built from source)

N/A

Error Message:

N/A

Minimum reproducible example

Running the GluonCV model zoo cifar_resnet110_v2 on CIFAR10:
[attached screenshots: HYBRID profiler output, IMPERATIVE profiler output, IMPERATIVE profiler output (zoomed)]

Steps to reproduce

Reproduce using the train_cifar10.py script from https://gluon-cv.mxnet.io/model_zoo/classification.html#cifar10 (download link: https://gluon-cv.mxnet.io/_downloads/54189a15ba652c5a2587928303cc2171/train_cifar10.py) and add MXNet's profiler around the forward pass.

Alternatively, use the version of train_cifar10.py with the profiler code already included: https://gist.github.com/igolan/511b61d17da0694a817a1ac3f9bd8f95
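For reference, a minimal standalone sketch of enabling the MXNet profiler around a multi-GPU forward pass (the gist hooks this into the training loop instead; the model, batch shape, and profile.json filename here are assumptions):

```python
import mxnet as mx
from mxnet import nd, profiler

profiler.set_config(profile_all=True, aggregate_stats=True, filename='profile.json')

ctx = [mx.gpu(i) for i in range(4)]
net = mx.gluon.model_zoo.vision.resnet18_v2(classes=10)   # stand-in model
net.initialize(mx.init.Xavier(), ctx=ctx)
data_list = [nd.random.uniform(shape=(32, 3, 32, 32), ctx=c) for c in ctx]

profiler.set_state('run')                 # start profiling just before the forward pass
outputs = [net(X) for X in data_list]     # one forward pass per GPU
nd.waitall()                              # block until all asynchronous work completes
profiler.set_state('stop')
print(profiler.dumps())                   # aggregated per-operator timings
```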

Run:
python train_cifar10.py --num-epochs 200 --mode hybrid --num-gpus 4 -j 2 --batch-size 128 --wd 0.0001 --lr 0.1 --lr-decay 0.1 --lr-decay-epoch 100,150 --model cifar_resnet110_v2
Vs.
python train_cifar10.py --num-epochs 200 --mode imperative --num-gpus 4 -j 2 --batch-size 128 --wd 0.0001 --lr 0.1 --lr-decay 0.1 --lr-decay-epoch 100,150 --model cifar_resnet110_v2
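The only functional difference between the two commands is the --mode flag, which in the script typically decides whether hybridize() is called before training. A self-contained sketch of that toggle (the exact argument handling in train_cifar10.py may differ):

```python
import argparse
from mxnet import gluon

parser = argparse.ArgumentParser()
parser.add_argument('--mode', type=str, default='imperative')
opt, _ = parser.parse_known_args()

net = gluon.model_zoo.vision.resnet18_v2(classes=10)   # stand-in for the CIFAR model
if opt.mode == 'hybrid':
    net.hybridize()   # compile the network into a static symbolic graph
# In 'imperative' mode no hybridize() call is made, so every operator is
# dispatched dynamically through the asynchronous engine.
```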

What have you tried to solve it?

N/A

@mxnet-label-bot
Contributor

Hey, this is the MXNet Label Bot.
Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it.
Here are my recommended label(s): Bug

@igolan
Contributor Author

igolan commented Sep 12, 2019

Hi,
If I use a larger model (cifar_wideresnet40_8), it does run in parallel.
This issue can be closed.
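For reference, the corresponding run with the larger model would presumably be the same command with the model swapped (reconstructed from the flags above, not copied from the original comment):
python train_cifar10.py --num-epochs 200 --mode imperative --num-gpus 4 -j 2 --batch-size 128 --wd 0.0001 --lr 0.1 --lr-decay 0.1 --lr-decay-epoch 100,150 --model cifar_wideresnet40_8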

@igolan closed this as completed Sep 12, 2019