
[P1] Is it possible to "bake in" ReFT changes to the weights and produce a model without pyreft dependencies? #54

Closed
ThaddeusChristopher opened this issue Apr 17, 2024 · 3 comments
Labels
question Further information is requested

Comments

ThaddeusChristopher commented Apr 17, 2024

I imagine it would be non-trivial (and then some), but am wondering if any plans are afoot.

@frankaging
Collaborator

Hi @ThaddeusChristopher!

Thanks for raising this point. Unfortunately, it could be hard: in some sense, just as adapters are not folded into the weights due to their non-linearity (and thus add compute overhead), ReFT interventions generally cannot be folded in either. That said, there are special cases: if a ReFT intervention sits between two linear layers, it could potentially be folded in by coupling the rotation weights with the linear layer weights.
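To illustrate the foldable special case: a LoReFT-style edit phi(h) = h + Rᵀ(Wh + b − Rh) is affine in h, so *if* it were applied at every position directly after a linear layer, it could be absorbed into that layer's weights. This is a minimal numpy sketch under that assumption; the names (`W1`, `R`, `W`, `b`) are illustrative and not pyreft's API, and `R` is just a random matrix here rather than the orthonormal rotation pyreft learns.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, r = 4, 6, 2

W1 = rng.normal(size=(d_h, d_in))  # a preceding linear layer (bias omitted)
R = rng.normal(size=(r, d_h))      # low-rank "rotation"-style matrix
W = rng.normal(size=(r, d_h))      # learned projection
b = rng.normal(size=r)             # learned bias

# phi(h) = h + R.T @ (W @ h + b - R @ h) is affine: phi(h) = A @ h + c
A = np.eye(d_h) + R.T @ (W - R)    # linear part
c = R.T @ b                        # constant part

x = rng.normal(size=d_in)
h = W1 @ x
out_runtime = A @ h + c            # intervening at run time

# Fold the intervention into the layer: W1' = A @ W1, bias' = c
W1_folded = A @ W1
out_folded = W1_folded @ x + c

assert np.allclose(out_runtime, out_folded)
```

The fold only works because the edit is affine and (in this toy) applied everywhere; the next paragraph explains why the second condition fails in practice.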

Another reason it is hard: we intervene not on all positions, but only on a very limited set (e.g., the first n and last n tokens of the prompt), and these intervening positions depend on the input. As a result, the interventions have to happen at run time to target dynamic locations.
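The position dependence can be sketched like this (illustrative only, not pyreft's API): the set of edited rows changes with every prompt length, so no single static weight edit reproduces the behavior for all inputs.

```python
import numpy as np

def intervention_positions(seq_len: int, n: int = 2) -> list:
    """First n and last n token indices for a prompt of length seq_len."""
    first = list(range(min(n, seq_len)))
    last = list(range(max(seq_len - n, n), seq_len))
    return first + last

def intervene(hidden: np.ndarray, delta: np.ndarray, n: int = 2) -> np.ndarray:
    """Add a fixed edit vector only at the selected positions."""
    out = hidden.copy()
    for pos in intervention_positions(out.shape[0], n):
        out[pos] += delta
    return out

short = intervene(np.zeros((3, 4)), np.ones(4))  # edits rows 0, 1, 2
long = intervene(np.zeros((8, 4)), np.ones(4))   # edits rows 0, 1, 6, 7
```

Because which rows are touched depends on the input (here, just its length), the edit has to be computed at inference time.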

Hopefully this is helpful!

@frankaging frankaging added the question Further information is requested label Apr 17, 2024
@frankaging frankaging changed the title Is it possible to "bake in" ReFT changes to the weights and produce a model without pyreft dependencies? [P1] Is it possible to "bake in" ReFT changes to the weights and produce a model without pyreft dependencies? Apr 17, 2024
@fblissjr

This is helpful in understanding how it works. From tinkering with it over the last few weeks, it seems unique enough that it would probably need to be built from scratch to work on something other than torch/transformers - is that a fair assumption? I looked into what an mlx port would take a few weeks ago and came back to it this weekend; given that pyreft requires pyvene, it sounds like much more than a weekend project.

Fun though - really neat to be able to steer it so easily.

@frankaging
Collaborator

Closing this issue for now, as MLX support will be tracked in pyvene: stanfordnlp/pyvene#67
