Hello, thanks a lot for your great work. I want to understand how the fine-tuned model works. For example, if I want to fine-tune the pretrained model on the supervised CNN/DailyMail dataset, is the model seq2seq? And do I only load the pretrained word embeddings from PEGASUS and use them as the input to the seq2seq model?
Thanks for your reply. When I fine-tune the model with the encoder-decoder Transformer framework on the CNN/DailyMail dataset, are the model structure and the initial model parameters the same as the pretrained model's? And are the parameters of the Transformer encoder and decoder updated during fine-tuning?
I want to be sure whether the fine-tuned model works as follows:
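For reference, here is a minimal sketch of that setup using the Hugging Face `transformers` port of PEGASUS (not this repo's TensorFlow code; the checkpoint name, hyperparameters, and sample texts below are illustrative assumptions, not an official recipe). The key point it shows: `from_pretrained` initializes the full encoder-decoder with the pretrained weights, not just the word embeddings, and all of those parameters receive gradient updates during fine-tuning.

```python
# Minimal sketch: fine-tuning PEGASUS as a seq2seq model on CNN/DailyMail.
# Assumes the Hugging Face `transformers` library is installed; the
# checkpoint name and hyperparameters are illustrative.
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-large"  # pretrained checkpoint (assumption)
tokenizer = PegasusTokenizer.from_pretrained(model_name)

# Loads ALL pretrained weights (encoder, decoder, and embeddings),
# not just the word embeddings.
model = PegasusForConditionalGeneration.from_pretrained(model_name)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

article = "Some news article text ..."   # placeholder input
summary = "Its reference summary ..."    # placeholder target

inputs = tokenizer(article, truncation=True, max_length=1024,
                   return_tensors="pt")
labels = tokenizer(summary, truncation=True, max_length=128,
                   return_tensors="pt").input_ids

# Standard seq2seq fine-tuning step: every encoder and decoder
# parameter receives gradients and is updated.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```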