I believe the best way to do continued pretraining of COCO-LM is to use the original COCO-LM pretraining tasks (CLM + SCL), so that the pretraining and continued-pretraining objectives stay consistent.
However, if you have to continue pretraining COCO-LM with a different objective such as MLM, it is probably still feasible: as shown in Section 5.3 of our paper, COCO-LM performs well in MLM-style prompt-based fine-tuning even though it was never trained with MLM.
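For reference, here is a minimal sketch of what MLM continued pretraining could look like with Hugging Face Transformers. It assumes the main Transformer weights have been converted into a checkpoint that `AutoModelForMaskedLM` can load; the checkpoint path and the corpus file below are hypothetical placeholders, not part of the official COCO-LM release.

```python
# A minimal sketch, not the official COCO-LM recipe. Assumes the main
# Transformer has been converted to a Hugging Face-compatible checkpoint
# at the hypothetical path below.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "path/to/converted-cocolm-checkpoint"  # hypothetical path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# If the checkpoint has no MLM head, a fresh one is randomly initialized
# (Transformers will print a warning about newly initialized weights).
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Any plain-text corpus works; "corpus.txt" is a placeholder.
raw = load_dataset("text", data_files={"train": "corpus.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Dynamic masking of 15% of tokens, as in standard MLM pretraining.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="cocolm-mlm-continued",
        per_device_train_batch_size=16,
        num_train_epochs=1,
        learning_rate=1e-5,
    ),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

Since the MLM head would be randomly initialized in this setup, a small learning rate and/or some warmup steps are probably advisable so the fresh head does not disturb the pretrained body early in training.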
We want to use COCO-LM for post-pretraining with a different objective (e.g., MLM), but the main Transformer has never seen the MLM task, so we are not sure whether this is feasible.