Description:

When using LoReFT in practice, the orthogonalization step in `torch` accounts for the majority of the memory overhead during training. If we drop this constraint, it is no longer pure LoReFT; it becomes Non-linear Low-rank ReFT (NoReFT). There is a trade-off between memory efficiency and performance, and one should feel free to explore ideas like NoReFT to see whether such a trade-off exists.

Updates:

`NoreftIntervention` is now implemented and provided by default, try it: https://github.com/stanfordnlp/pyreft/blob/main/pyreft/interventions.py#L59
We did try it; it did not work out well compared with `LoreftIntervention`. We may add an ablation experiment in our next paper revision to show the full picture.