Combining tf.nn.bidirectional_dynamic_rnn with MultiRNNCell produces an incorrect result when num_layers > 1.
In a properly stacked bidirectional RNN, the combined forward and backward outputs of each layer should be used as the input of the next layer. However, tf.nn.bidirectional_dynamic_rnn runs the entire forward stack and the entire backward stack independently, so it does not allow forward and backward information to be shared between layers.
It is better to use tf.contrib.rnn.stack_bidirectional_dynamic_rnn, which concatenates the forward and backward outputs at each layer before feeding them to the next one.
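As a rough sketch of the stacking pattern that stack_bidirectional_dynamic_rnn implements, here is a minimal NumPy version with simple tanh cells standing in for TensorFlow's RNN cells (all function names and parameter shapes here are hypothetical, for illustration only): each layer's input is the concatenation of the previous layer's forward and backward outputs, rather than two independent deep stacks.

```python
import numpy as np

def rnn_pass(x, W, U, b, reverse=False):
    """Run a simple tanh RNN over the time axis. x: (T, D)."""
    T = x.shape[0]
    H = b.shape[0]
    h = np.zeros(H)
    out = np.zeros((T, H))
    steps = range(T - 1, -1, -1) if reverse else range(T)
    for t in steps:
        h = np.tanh(x[t] @ W + h @ U + b)
        out[t] = h
    return out

def stacked_bidirectional(x, layers):
    """Each layer consumes concat(forward, backward) of the previous
    layer's outputs -- the behavior stack_bidirectional_dynamic_rnn
    provides, and what bidirectional_dynamic_rnn + MultiRNNCell lacks."""
    for (Wf, Uf, bf), (Wb, Ub, bb) in layers:
        fwd = rnn_pass(x, Wf, Uf, bf)
        bwd = rnn_pass(x, Wb, Ub, bb, reverse=True)
        # Combined forward/backward output feeds the next layer.
        x = np.concatenate([fwd, bwd], axis=-1)
    return x
```

Note that the input dimension of every layer after the first is twice the hidden size, because of the concatenation; with the buggy MultiRNNCell approach, each inner layer would only ever see its own direction's states.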