Could you tell me where the cross-attention described in the VECO paper is implemented in the code?

We only released the fine-tuning code of VECO. The NLG code is implemented on top of fairseq, so when fine-tuning on NLG tasks we convert the pre-trained model's state_dict keys into fairseq's format via a mapping. In fairseq, the encoder_attn modules denote the cross-attention. The detailed mapping is shown below:
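The mapping table from the original comment is not reproduced in this extract. As a rough illustration of how such a state_dict key remapping can be done in PyTorch, here is a minimal sketch; the KEY_MAP entry and the key names in it are hypothetical placeholders, not the actual VECO-to-fairseq mapping.

```python
import torch

# Hypothetical mapping from pre-trained checkpoint key substrings to
# fairseq's naming. The real VECO mapping is not reproduced here;
# "cross_attn" -> "encoder_attn" is an illustrative placeholder.
KEY_MAP = {"cross_attn": "encoder_attn"}

def remap_state_dict(state_dict):
    """Rename checkpoint keys so fairseq's load_state_dict accepts them."""
    remapped = {}
    for key, tensor in state_dict.items():
        # Apply every substring substitution; unmatched keys pass through.
        for src, dst in KEY_MAP.items():
            key = key.replace(src, dst)
        remapped[key] = tensor
    return remapped

# Tiny demo state_dict in place of a real VECO checkpoint.
demo = {"decoder.layers.0.cross_attn.k_proj.weight": torch.zeros(4, 4)}
print(list(remap_state_dict(demo)))
# -> ['decoder.layers.0.encoder_attn.k_proj.weight']
```

In practice the remapped dict would then be passed to the fairseq model's load_state_dict before fine-tuning.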