Adaptive pretraining, i.e., continuing pretraining on unlabeled in-domain text and then fine-tuning on downstream tasks in that domain, often outperforms directly fine-tuning a general-domain pretrained model. A representative work is Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. In the UER paper, this corresponds to Stage 2 described in Section 3.4: pre-training on the downstream dataset.

Published work in this line (BioBERT, SciBERT) has so far used almost exclusively English datasets. Has anyone here run adaptive pretraining on a Chinese domain-specific dataset and obtained measurable gains on downstream tasks? If so, could you share the scale of the dataset you used?
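For context, here is a minimal sketch of the adaptive-pretraining step (continued masked-LM training on an unlabeled in-domain corpus) using HuggingFace Transformers. This is only an illustration, not UER-py's Stage 2 implementation; `domain_corpus.txt` is a hypothetical one-sentence-per-line file of in-domain Chinese text, and the hyperparameters are placeholders:

```python
# Sketch: adaptive pretraining (Stage 2) before downstream fine-tuning.
# Continue MLM pretraining of a general-domain checkpoint on unlabeled
# in-domain text, then fine-tune the saved checkpoint on the labeled task.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")

# "domain_corpus.txt" is a hypothetical unlabeled in-domain corpus,
# one sentence per line.
raw = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Dynamic token masking for the MLM objective (15% as in BERT).
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="bert-domain-adapted",   # checkpoint for later fine-tuning
    num_train_epochs=3,                 # placeholder hyperparameters
    per_device_train_batch_size=32,
    learning_rate=5e-5,
)
Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
```

The resulting checkpoint in `bert-domain-adapted` would then be loaded in place of the general-domain model for downstream fine-tuning.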