In the `rnnt_loss_tf` function, `tf.nn.log_softmax(logits)` is applied on line 238, and then `tf.nn.log_softmax` is applied to the same tensor again inside `compute_rnnt_loss_and_grad_helper` on line 159.

Is this double application intentional, or is it an error?
Reference file:
https://github.com/TensorSpeech/TensorFlowASR/blob/main/tensorflow_asr/losses/rnnt_losses.py#L150
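For what it's worth, `log_softmax` is idempotent: since `log_softmax(x) = x - logsumexp(x)`, the outputs already log-sum-exp to zero, so applying it a second time is a numerical no-op. If the library really does call it twice, the result should still be correct, just with wasted compute. A quick check (using a NumPy stand-in for `tf.nn.log_softmax`, as an illustration only):

```python
import numpy as np

def log_softmax(x, axis=-1):
    # Numerically stable log-softmax: x - logsumexp(x)
    x_max = np.max(x, axis=axis, keepdims=True)
    shifted = x - x_max
    return shifted - np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))

rng = np.random.default_rng(0)
# Dummy logits shaped like RNN-T outputs: [batch, time, label, vocab]
logits = rng.normal(size=(2, 5, 4, 8))

once = log_softmax(logits)
twice = log_softmax(once)

# The second application changes nothing beyond floating-point noise
print(np.allclose(once, twice))
```

So the double call would be redundant rather than wrong, but it would be good to confirm which of the two call sites is the intended one.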