# LoHa

Low-Rank Hadamard Product (LoHa) is similar to LoRA except it approximates the large weight matrix with two low-rank matrix products and combines them with the Hadamard (element-wise) product. This makes the method even more parameter-efficient than LoRA while achieving comparable performance.
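
As a rough illustration of the idea (a minimal sketch; the shapes and variable names are assumptions, not the library's internals), the weight update is the element-wise product of two low-rank products, which can reach a rank of up to r * r:

```python
import torch

# Dimensions of the adapted layer and the low rank r (example values).
d_out, d_in, r = 512, 512, 8

# Two independent low-rank factor pairs.
w1_a = torch.randn(d_out, r)
w1_b = torch.randn(r, d_in)
w2_a = torch.randn(d_out, r)
w2_b = torch.randn(r, d_in)

# The weight update is the Hadamard (element-wise) product of the two
# low-rank products; its rank can be as high as r * r.
delta_w = (w1_a @ w1_b) * (w2_a @ w2_b)
print(delta_w.shape)  # torch.Size([512, 512])
```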

The abstract from the paper is:

In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burdens on frequent model uploads and downloads. Our method re-parameterizes weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to the conventional low-rank parameterization, our FedPara method is not restricted to low-rank constraints, and thereby it has a far larger capacity. This property enables to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable by the traditional low-rank methods. The efficiency of our method can be further improved by combining with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters.

## LoHaConfig

[[autodoc]] tuners.loha.config.LoHaConfig
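
A minimal usage sketch, assuming a Transformers causal language model as the base model; the model name and the `target_modules` entries are examples and depend on the architecture you adapt:

```python
from transformers import AutoModelForCausalLM
from peft import LoHaConfig, get_peft_model

# Load a base model (the checkpoint name is just an example).
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Configure LoHa: r and alpha control the rank and scaling of the update,
# target_modules lists the module names to adapt (architecture-dependent).
config = LoHaConfig(
    r=8,
    alpha=16,
    target_modules=["q_proj", "v_proj"],
    module_dropout=0.1,
)

# Wrap the base model; LoHa layers are injected in place of the targeted modules.
model = get_peft_model(model, config)
model.print_trainable_parameters()
```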

## LoHaModel

[[autodoc]] tuners.loha.model.LoHaModel