Injecting additional trainable modules to connect the unfrozen modules in parameter-efficient finetuning can improve gradient flow and significantly improve convergence speed and performance (at least when finetuning models for information retrieval); see https://arxiv.org/pdf/2208.09847.pdf.
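A minimal sketch of the idea, assuming a simple residual bottleneck design (the module names, dimensions, and wiring here are illustrative, not taken from the paper or from OpenDelta): small trainable "connector" modules are interleaved with a frozen backbone, giving gradients a short trainable path through the network.

```python
import torch
import torch.nn as nn

DIM = 32  # hidden size of the toy backbone (illustrative)

class Connector(nn.Module):
    """Small trainable bottleneck added as a residual branch between frozen layers."""
    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # Residual connection keeps the frozen backbone's signal intact.
        return x + self.up(torch.relu(self.down(x)))

# Frozen "backbone": a stack of linear layers standing in for a pretrained model.
backbone = nn.ModuleList([nn.Linear(DIM, DIM) for _ in range(3)])
for p in backbone.parameters():
    p.requires_grad = False

# Trainable connectors interleaved with the frozen layers.
connectors = nn.ModuleList([Connector(DIM) for _ in range(3)])

def forward(x):
    for layer, conn in zip(backbone, connectors):
        x = conn(torch.relu(layer(x)))
    return x

x = torch.randn(4, DIM)
out = forward(x)
out.sum().backward()
# Only the connectors receive gradients; the backbone parameters stay frozen.
```

During finetuning, only `connectors.parameters()` would be passed to the optimizer, keeping the trainable parameter count small while the connectors carry gradient signal between the frozen blocks.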
Thank you for the suggestion. After reading this paper, we think this would be a beneficial feature for OpenDelta's performance. However, we are currently adding other methods (for example, ladder-side tuning) and adapting the acceleration framework, so your request may be postponed a bit. For now, could you implement your own version and open a pull request? We will help review it.