DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining, Sang Michael Xie+, N/A, arXiv'23
May 21, 2023
URL
https://arxiv.org/abs/2305.10429
Affiliations
Abstract
The mixture proportions of pretraining data domains (e.g., Wikipedia, books, web text) greatly affect language model (LM) performance. In this paper, we propose Domain Reweighting with Minimax Optimization (DoReMi), which first trains a small proxy model using group distributionally robust optimization (Group DRO) over domains to produce domain weights (mixture proportions) without knowledge of downstream tasks. We then resample a dataset with these domain weights and train a larger, full-sized model. In our experiments, we use DoReMi on a 280M-parameter proxy model to find domain weights for training an 8B-parameter model (30x larger) more efficiently. On The Pile, DoReMi improves perplexity across all domains, even when it downweights a domain. DoReMi improves average few-shot downstream accuracy by 6.5% over a baseline model trained using The Pile's default domain weights and reaches the baseline accuracy with 2.6x fewer training steps. On the GLaM dataset, DoReMi, which has no knowledge of downstream tasks, even matches the performance of using domain weights tuned on downstream tasks.
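To make the Group DRO step concrete, here is a minimal NumPy sketch of the kind of multiplicative domain-weight update the abstract describes: the proxy model's per-domain loss is compared against a fixed reference model, and domains where the proxy still has headroom get upweighted. The function name and the `step_size`/`smoothing` hyperparameters are illustrative assumptions, not the paper's exact implementation; the per-domain average losses are assumed to be computed elsewhere in the training loop.

```python
import numpy as np

def update_domain_weights(alpha, proxy_losses, ref_losses,
                          step_size=1.0, smoothing=1e-3):
    """One exponentiated-gradient update on domain weights (Group DRO-style sketch).

    alpha:        current domain weights, shape (k,), on the simplex
    proxy_losses: per-domain average loss of the small proxy model this step
    ref_losses:   per-domain average loss of a fixed reference model
    """
    # Excess loss: how far the proxy still lags the reference on each domain.
    # Clipping at zero keeps already-learned domains from pulling weight.
    excess = np.maximum(proxy_losses - ref_losses, 0.0)

    # Multiplicative (exponentiated gradient) ascent, then renormalize.
    alpha = alpha * np.exp(step_size * excess)
    alpha = alpha / alpha.sum()

    # Mix with the uniform distribution so no domain's weight collapses to zero.
    k = len(alpha)
    return (1.0 - smoothing) * alpha + smoothing * np.ones(k) / k
```

In this sketch, the weights produced over the proxy run would be averaged across steps to give the final mixture proportions used to resample the corpus for the full-sized model, matching the two-stage recipe in the abstract.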
Translation (by gpt-3.5-turbo)
Summary (by gpt-3.5-turbo)