SequenceMatch: Imitation Learning for Autoregressive Sequence Modelling with Backtracking, Chris Cundy+, N/A, arXiv'23
URL
Affiliations
Abstract

In many domains, autoregressive models can attain high likelihood on the task of predicting the next observation. However, this maximum-likelihood (MLE) objective does not necessarily match a downstream use case of autoregressively generating high-quality sequences. The MLE objective weights sequences proportionally to their frequency under the data distribution, with no guidance for the model's behaviour out of distribution (OOD), leading to compounding error during autoregressive generation. To address this compounding error problem, we formulate sequence generation as an imitation learning (IL) problem. This allows us to minimize a variety of divergences between the distribution of sequences generated by an autoregressive model and sequences from a dataset, including divergences that place weight on OOD generated sequences. The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process, which further mitigates the compounding error problem by allowing the model to revert a sampled token if it takes the sequence OOD. Our resulting method, SequenceMatch, can be implemented without adversarial training or major architectural changes. We identify the SequenceMatch-$\chi^2$ divergence as a more suitable training objective for autoregressive models used for generation. We show empirically that SequenceMatch training leads to improvements over MLE on text generation with language models.
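For intuition, the standard $\chi^2$-divergence that the method's name refers to is $\chi^2(p_{\text{data}} \,\|\, p_\theta) = \mathbb{E}_{x \sim p_\theta}\left[\left(p_{\text{data}}(x)/p_\theta(x) - 1\right)^2\right]$: unlike the MLE (forward-KL) objective, the expectation is taken under the model's own samples, which is what places weight on OOD generations (the paper's exact objective is formulated over occupancy measures and may differ in detail). Below is a minimal sketch, not the authors' implementation, of how generation with a backspace action could look: the action space is assumed to be augmented with a hypothetical `<backspace>` id, and sampling it deletes the most recent token instead of appending a new one. The token ids, the `next_token_probs` stand-in, and all constants here are invented for illustration.

```python
import numpy as np

# Assumed ids for the two special actions; a real model would reserve
# entries in its vocabulary for these.
BACKSPACE = 0
EOS = 1
VOCAB_SIZE = 12  # toy vocabulary size, including the two special ids

rng = np.random.default_rng(0)

def next_token_probs(prefix):
    """Stand-in for an autoregressive model p(action | prefix).

    A trained SequenceMatch model would be expected to assign high
    probability to <backspace> precisely when the prefix has drifted
    OOD; here we just return a random toy distribution.
    """
    logits = rng.normal(size=VOCAB_SIZE)
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(max_steps=50):
    seq = []
    for _ in range(max_steps):
        probs = next_token_probs(seq)
        action = int(rng.choice(VOCAB_SIZE, p=probs))
        if action == EOS:
            break
        if action == BACKSPACE:
            if seq:        # revert the most recent token, if any
                seq.pop()
            continue       # backtracking consumes a step but adds nothing
        seq.append(action)
    return seq

print(generate())
```

Because the backspace is just one more action in the vocabulary, this kind of decoding loop needs no architectural changes to the underlying model, which matches the abstract's claim that SequenceMatch avoids adversarial training or major architectural modifications.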
Translation (by gpt-3.5-turbo)
Summary (by gpt-3.5-turbo)