MXNet: Using FusedRNNCell with its "bidirectional" flag turned True can lead to hanging of the training run. #9171
Comments
As I understand, the FusedRNNCell is faster than the unfused RNNCell because it makes direct function calls to a CUDA kernel. It seems that the "bidirectional" flag in FusedRNNCell is passed directly to the CUDA kernel call. This is just FYI; it might imply a CUDA kernel issue, but I am not a CUDA expert.
Want (but leads to hanging): `cell = FusedRNNCell(.... bidirectional=True....)`
Best workaround (but still slow): `l_cell = FusedRNNCell(.... bidirectional=False....)`
All other workarounds are 3x-10x slower than what we ideally "want" to use above. This workaround is "only" 2x slower. A sketch of the two configurations follows.
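For reference, a minimal sketch of the two configurations. The hidden size, layer count, and LSTM mode below are placeholders, not the values from the original run:

```python
import mxnet as mx

# Want (but reportedly leads to hanging): cuDNN-fused, bidirectional cell.
# num_hidden, num_layers and mode are placeholder values.
cell = mx.rnn.FusedRNNCell(num_hidden=512, num_layers=2, mode='lstm',
                           bidirectional=True, prefix='enc_')

# Best workaround found so far (~2x slower): same fused cell, unidirectional.
l_cell = mx.rnn.FusedRNNCell(num_hidden=512, num_layers=2, mode='lstm',
                             bidirectional=False, prefix='enc_')
```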
What's the patch version for cudnn? Would you confirm if the hanging still happens with the latest cuda 9.0.176/9.1.x and cudnn 7.0.5? Also, what's the GPU?
Could you provide a runnable code snippet that reproduces the hanging problem? You can use random input if the data is not related to the hanging problem.
@szha |
Proposed labels: Bug, Python, RNN
@kalpitdixit does the problem still happen? |
Hi @DickJC123, this is the issue I mentioned with fused RNN with bidirectional=True.
Looks similar. I'm trying to give an MWE.
The error message:
@kalpitdixit Could you provide a script with which this issue occurs? |
@vandanavk |
Description
MXNet
Using FusedRNNCell with its "bidirectional" flag set to True can lead to the training run hanging (an indefinite pause with no progress, error, or crash).
Details
I am running a single training run of a Sequence-to-Sequence model using the BucketingModule, with an Encoder-Decoder network. The Encoder uses a FusedRNNCell with its "bidirectional" flag turned on, and the Decoder uses an unfused RNNCell.
GPU memory usage is 15000 MB out of 16000 MB. CPU utilization is 95%.
For each batch during training, I do a forward() pass and a backward() pass. After 5-15 epochs, the training run gets stuck in the forward() pass of one of the mini-batches: the forward pass never completes, no error is thrown, nothing crashes, and GPU/CPU utilization stays exactly the same.
I have ablated many aspects of my training run (architecture, data, code, etc.). The conclusion is that specifically using the FusedRNNCell with the "bidirectional" flag set to True causes this problem. A structural sketch of this setup is shown below.
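To make the setup concrete, here is a structural sketch of the cell configuration and training loop described above, driven by random input. All sizes, the LSTM mode, the LSTMCell decoder, and the dummy loss are hypothetical placeholders; this is not the original Seq2Seq/BucketingModule script:

```python
import mxnet as mx

seq_len, batch_size, num_hidden = 30, 16, 512   # hypothetical sizes

data = mx.sym.Variable('data')

# Encoder: cuDNN-fused, bidirectional -- the configuration that hangs.
encoder = mx.rnn.FusedRNNCell(num_hidden=num_hidden, num_layers=2,
                              mode='lstm', bidirectional=True, prefix='enc_')
enc_out, _ = encoder.unroll(seq_len, inputs=data, merge_outputs=True)

# Decoder: plain (unfused) cell, as in the report (LSTM is an assumption).
decoder = mx.rnn.LSTMCell(num_hidden=num_hidden, prefix='dec_')
dec_out, _ = decoder.unroll(seq_len, inputs=enc_out, merge_outputs=True)

# Dummy loss just to drive forward/backward; not the original objective.
loss = mx.sym.MakeLoss(mx.sym.mean(dec_out))

mod = mx.mod.Module(loss, data_names=('data',), label_names=None,
                    context=mx.gpu(0))
mod.bind(data_shapes=[('data', (batch_size, seq_len, num_hidden))])
mod.init_params()
mod.init_optimizer()

batch = mx.io.DataBatch(
    data=[mx.random.uniform(shape=(batch_size, seq_len, num_hidden))],
    label=None)
for _ in range(100):
    mod.forward(batch)    # the hang is reported to occur inside forward()
    mod.backward()
    mod.update()
```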
Package used
Python
Environment info
----------Python Info----------
Version : 3.5.2
Compiler : GCC 5.4.0 20160609
Build : ('default', 'Nov 23 2017 16:37:01')
Arch : ('64bit', 'ELF')
------------Pip Info-----------
Version : 9.0.1
Directory : /usr/local/lib/python3.5/dist-packages/pip
----------MXNet Info-----------
Version : 1.0.0
Directory : /usr/local/lib/python3.5/dist-packages/mxnet
Commit Hash : 25720d0
----------System Info----------
Platform : Linux-4.4.0-1039-aws-x86_64-with-Ubuntu-16.04-xenial
system : Linux
node : ip-172-31-85-194
release : 4.4.0-1039-aws
version : #48-Ubuntu SMP Wed Oct 11 15:15:01 UTC 2017
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 1200.582
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.09
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq monitor est ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt ida
----------Network Test----------
Setting timeout: 10
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0300 sec, LOAD: 0.0514 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1141 sec, LOAD: 0.1956 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0016 sec, LOAD: 0.4062 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1799 sec, LOAD: 0.3847 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0046 sec, LOAD: 0.0126 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0154 sec, LOAD: 0.1567 sec.