I was thinking that it could be useful to have support for system combination at the nnet output level. For example, we could use this feature to combine two chain models (e.g. TDNN-F and TDNN-LSTM) that use the same tree, and decode with the combined output. Currently there seem to be no scripts or binaries for this.
One way to do this would be to write a new decoder using an existing class such as DecodableSum, or something like it.
Another possibility is to create a single neural net that outputs the average of the outputs of the two nnets. That could be done by renaming the two sets of inputs to have different suffixes and then adjusting the input and output nodes, which would let us reuse the existing decoding code.
Anyone interested?
I'm wondering (just thinking aloud) what prevents someone from assembling a single network from the two models. The outputs should be trivial to sum; it may be more efficient to compute at runtime, and it would use the existing nnet3 computation mechanics to consume just the right amount of the (combined, longest) left/right context for each internal subnetwork. Looped decoding would also trivially work for the TDNN, so no special decoder would be needed.
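As a rough illustration of the output-level combination discussed above, here is a minimal NumPy sketch. All names, shapes, and the `combine_outputs` helper are made up for illustration, not Kaldi APIs; in Kaldi this logic would sit inside a decodable object or be baked into a merged nnet3 network. It averages two models' per-frame log-probabilities in the probability domain:

```python
import numpy as np

def combine_outputs(logprobs_a, logprobs_b, weight=0.5):
    """Weighted average of two log-probability matrices, computed in the
    probability domain: log(w * exp(a) + (1 - w) * exp(b)).
    Rows are frames, columns are output units (e.g. pdf-ids)."""
    return np.logaddexp(logprobs_a + np.log(weight),
                        logprobs_b + np.log(1.0 - weight))

# Toy check: averaging two copies of a uniform distribution over
# 4 outputs changes nothing.
a = np.log(np.full((3, 4), 0.25))  # 3 frames x 4 outputs
combined = combine_outputs(a, a)
assert np.allclose(combined, np.log(0.25))
```

Whether the averaging happens at runtime in a decoder or inside a merged network, the arithmetic per frame is the same as this sketch.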