We present Generalized LoRA (GLoRA), an advanced approach for universal parameter-efficient fine-tuning tasks. Enhancing Low-Rank Adaptation (LoRA), GLoRA employs a generalized prompt module to optimize pre-trained model weights and adjust intermediate activations, providing more flexibility and capability across diverse tasks and datasets. Moreover, GLoRA facilitates efficient parameter adaptation by employing a scalable, modular, layer-wise structure search that learns an individual adapter for each layer. Originating from a unified mathematical formulation, GLoRA exhibits strong transfer learning, few-shot learning, and domain generalization abilities, as it adjusts to new tasks through additional dimensions on weights and activations. Comprehensive experiments demonstrate that GLoRA outperforms all previous methods on natural, specialized, and structured benchmarks, achieving superior accuracy with fewer parameters and computations on various datasets. Furthermore, our structural re-parameterization design ensures that GLoRA incurs no extra inference cost, rendering it a practical solution for resource-limited applications. Code is available at: https://github.com/Arnav0400/ViT-Slim/tree/master/GLoRA.
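To make the abstract's two key claims concrete (adapting both weights and activations, then re-parameterizing so inference is free of overhead), here is a minimal PyTorch sketch of a GLoRA-style linear layer. The class name `GLoRALinear`, the choice of low-rank supports on the weight path, and the vector scale/shift on the bias path are illustrative assumptions inferred from the abstract alone, not the paper's actual search space or released code; see the linked repository for the real implementation.

```python
import torch
import torch.nn as nn

class GLoRALinear(nn.Module):
    """Sketch of a GLoRA-style linear layer (illustrative, not the
    paper's exact formulation).

    The frozen weight W0 is adapted both multiplicatively (a low-rank
    term that rescales W0) and additively (a LoRA-style low-rank
    delta), while the bias/activation path receives a learnable scale
    and shift. All support-tensor shapes are assumptions.
    """
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        out_f, in_f = base.weight.shape
        self.weight = base.weight          # frozen pre-trained W0
        self.bias = base.bias              # frozen pre-trained b0
        self.weight.requires_grad_(False)
        if self.bias is not None:
            self.bias.requires_grad_(False)
        # Multiplicative low-rank support: W0 @ (a_down @ a_up)
        self.a_down = nn.Parameter(torch.zeros(in_f, rank))
        self.a_up = nn.Parameter(torch.zeros(rank, in_f))
        # Additive low-rank support (plain LoRA-style delta);
        # b_up starts at zero so the layer equals the base at init.
        self.b_down = nn.Parameter(torch.randn(out_f, rank) * 0.01)
        self.b_up = nn.Parameter(torch.zeros(rank, in_f))
        # Scale and shift on the bias/activation path
        self.d = nn.Parameter(torch.zeros(out_f))
        self.e = nn.Parameter(torch.zeros(out_f))

    def merged_weight_bias(self):
        # Structural re-parameterization: fold every support into a
        # single (W', b') so inference costs exactly one matmul.
        w = (self.weight
             + self.weight @ (self.a_down @ self.a_up)
             + self.b_down @ self.b_up)
        b0 = self.bias if self.bias is not None else torch.zeros(
            self.weight.shape[0], device=self.weight.device)
        b = b0 + self.d * b0 + self.e
        return w, b

    def forward(self, x):
        w, b = self.merged_weight_bias()
        return nn.functional.linear(x, w, b)

# Usage: wrap a pre-trained layer; only the support tensors train.
base = nn.Linear(768, 768)
layer = GLoRALinear(base, rank=4)
out = layer(torch.randn(2, 768))  # identical to base(x) at init
```

Because every support folds into a single merged weight and bias, the adapted layer can be collapsed into a plain `nn.Linear` after training, which is the no-extra-inference-cost property the abstract highlights.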