likenneth/q_probe

Q-probe

A repo for building and evaluating Q-probes, as proposed in Q-Probe: A Lightweight Approach to Reward Maximization for Language Models.

The repo is split into two halves, each with its own environment and README.

Abstract

We present an approach called Q-probing to adapt a pre-trained language model to maximize a task-specific reward function. At a high level, Q-probing sits between heavier approaches such as finetuning and lighter approaches such as few shot prompting, but can also be combined with either. The idea is to learn a simple linear function on a model's embedding space that can be used to reweigh candidate completions. We theoretically show that this sampling procedure is equivalent to a KL-constrained maximization of the Q-probe as the number of samples increases. To train the Q-probes we consider either reward modeling or a class of novel direct policy learning objectives based on importance weighted policy gradients. With this technique, we see gains in domains with ground-truth rewards (code generation) as well as implicit rewards defined by preference data, even outperforming finetuning in data-limited regimes. Moreover, a Q-probe can be trained on top of an API since it only assumes access to sampling and embeddings.
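The reweighting idea above can be sketched in a few lines: score each candidate completion's embedding with a linear probe, then sample one candidate from a softmax over the scores. This is a minimal illustrative sketch, not the repo's implementation; the function and parameter names (`q_probe_sample`, `beta`) are hypothetical, and it assumes you already have candidate embeddings and trained probe weights.

```python
import numpy as np

def q_probe_sample(embeddings, probe_w, probe_b=0.0, beta=1.0, rng=None):
    """Pick one of k candidates by softmax-reweighting linear probe scores.

    Hypothetical helper for illustration: `embeddings` is a (k, d) array of
    candidate-completion embeddings, `probe_w` a (d,) probe weight vector.
    `beta` controls how sharply the reweighting favors high-scoring candidates
    (the softmax temperature implied by the KL-constrained view).
    """
    rng = rng or np.random.default_rng()
    # Linear probe score for each candidate.
    scores = embeddings @ probe_w + probe_b
    # Numerically stable softmax over candidates.
    logits = beta * (scores - scores.max())
    probs = np.exp(logits) / np.exp(logits).sum()
    # Sample one candidate index according to the reweighted distribution.
    idx = rng.choice(len(probs), p=probs)
    return idx, probs

# Toy usage: 4 candidates with 8-dim embeddings and a random probe.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
w = rng.normal(size=8)
idx, probs = q_probe_sample(emb, w, beta=2.0, rng=rng)
```

As `beta` grows (or more candidates are drawn), sampling concentrates on the argmax of the probe, matching the paper's claim that the procedure approaches a KL-constrained maximization of the Q-probe.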

How to Cite

@article{li2024q,
  title={Q-Probe: A Lightweight Approach to Reward Maximization for Language Models},
  author={Li, Kenneth and Jelassi, Samy and Zhang, Hugh and Kakade, Sham and Wattenberg, Martin and Brandfonbrener, David},
  journal={arXiv preprint arXiv:2402.14688},
  year={2024}
}
