Hi! It's really great work! I am currently learning and working on ASR-related tasks and really appreciate your effort!
However, I still have some questions about the function outputs. I noticed that "beam_scores" is a tensor without a grad_fn. Is there any way to preserve the computation graph, so that I can use "beam_scores" directly to define a new loss for my task?
I also found that TensorFlow has an implementation, tf.nn.ctc_beam_search_decoder, which returns beam scores with gradients preserved. Is there a similar implementation in PyTorch?
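For context, here is a common workaround in PyTorch. This is a minimal sketch under assumptions, not this repo's API: since the beam search selection itself is discrete and non-differentiable, one can take the decoded hypothesis and re-score it with torch.nn.functional.ctc_loss, which does build a computation graph. The shapes and the hypothesis below are made up for illustration:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: T frames, batch N, C classes (index 0 = blank).
T, N, C = 50, 1, 20
logits = torch.randn(T, N, C, requires_grad=True)   # leaf tensor
log_probs = F.log_softmax(logits, dim=-1)

# Suppose the (non-differentiable) beam search returned this hypothesis.
best_hyp = torch.tensor([[3, 7, 7, 1]])             # (N, S), no blanks
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.tensor([best_hyp.size(1)])

# Negative CTC loss of the hypothesis = its log-likelihood under the
# model; a differentiable proxy for beam_scores, usable in a custom loss.
score = -F.ctc_loss(log_probs, best_hyp, input_lengths,
                    target_lengths, blank=0, reduction="sum")
score.backward()
print(logits.grad is not None)                      # True
```

This treats the decoded sequence as fixed and differentiates only through the acoustic log-probabilities, which is usually what is wanted when defining a loss on top of a decoder output.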
Also, could you explain this comment a little:
// compute aproximate ctc score as the return score, without affecting the
// return order of decoding result. To delete when decoder gets stable.
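As general background (not this decoder's code, and only one plausible reading of the quoted comment): CTC prefix beam search tracks two probabilities per prefix, one for paths ending in blank (p_b) and one for paths ending in a non-blank (p_nb). The exact prefix score marginalises over both, while a cheaper approximation (e.g. keeping only the larger term) can report a slightly different score without changing the ranking of hypotheses. A toy illustration:

```python
import math

# Toy path probabilities for one prefix (made-up numbers).
p_b, p_nb = 0.012, 0.003

exact_score = math.log(p_b + p_nb)        # marginalise over both endings
approx_score = math.log(max(p_b, p_nb))   # best-ending shortcut

print(exact_score, approx_score)          # ~-4.20 vs ~-4.42
```

If the approximation preserves the relative order of beams, it "does not affect the return order of the decoding result" while still being marked for replacement once the decoder stabilises.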