❓ Question
Does Torch-TensorRT support optimizing LSTM-based decoders? I ask because, in a seq2seq model, the forward pass used for training and the logic used at test time (beam search, step-by-step sequence inference, ...) are structured differently, so a model optimized from only the training forward logic cannot be used for inference.
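To make the mismatch concrete, here is a minimal, hypothetical sketch (the `Decoder` module, its shapes, and the `greedy_decode` helper are all made up for illustration): the training `forward()` consumes the whole target sequence via teacher forcing, while inference runs an autoregressive Python loop around the LSTM, which is exactly the part that compiling the training forward does not capture. Whether the LSTM layers actually lower to TensorRT or fall back to Torch is the open question here.

```python
import torch
import torch.nn as nn
import torch_tensorrt  # assumed available in the TensorRT 22.03 container


class Decoder(nn.Module):
    """Toy LSTM decoder used only to illustrate the train/test mismatch."""

    def __init__(self, vocab=1000, emb=256, hid=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.proj = nn.Linear(hid, vocab)

    def forward(self, tgt_tokens, h0, c0):
        # Training path: the whole target sequence in one call (teacher forcing).
        x = self.embed(tgt_tokens)
        out, _ = self.lstm(x, (h0, c0))
        return self.proj(out)

    @torch.no_grad()
    def greedy_decode(self, h0, c0, bos=1, max_len=50):
        # Inference path: autoregressive loop, one token per step.
        # This control flow is not captured when only forward() is compiled.
        tok = torch.full((h0.size(1), 1), bos, dtype=torch.long, device=h0.device)
        state, outputs = (h0, c0), []
        for _ in range(max_len):
            x = self.embed(tok)
            out, state = self.lstm(x, state)
            tok = self.proj(out).argmax(-1)
            outputs.append(tok)
        return torch.cat(outputs, dim=1)


dec = Decoder().eval().cuda()

# Compiling the training-style forward fixes the signature to
# (tgt_tokens, h0, c0); the resulting module cannot drive the
# step-by-step decoding loop above. Integer-input dtype handling
# here is an assumption, not something I have verified.
trt_dec = torch_tensorrt.compile(
    dec,
    inputs=[
        torch_tensorrt.Input((8, 20), dtype=torch.int32),  # target tokens
        torch_tensorrt.Input((1, 8, 512)),                 # h0
        torch_tensorrt.Input((1, 8, 512)),                 # c0
    ],
    enabled_precisions={torch.float},
)
```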
Environment
TensorRT 22.03 Docker image:
https://docs.nvidia.com/deeplearning/tensorrt/container-release-notes/rel_22-03.html#rel_22-03