[pull] main from huggingface:main #141

Merged

pull[bot] merged 1 commit into dumpmemory:main from huggingface:main on Nov 18, 2025
Conversation


pull[bot] commented on Nov 18, 2025

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

Implements GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient
Fine-Tuning (https://arxiv.org/abs/2505.20355).

This PEFT method subdivides the base weight matrix into smaller blocks and
applies a separate LoRA adapter to each of them. This more granular
adaptation promises to increase expressivity and improve performance,
especially at higher ranks (64+), closing the gap to full fine-tuning.
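As a rough illustration of the idea described above, the sketch below wraps a frozen `nn.Linear` and attaches one small low-rank (A, B) pair to each block of a k x k grid over the weight matrix. This is a minimal sketch under assumed conventions (class and parameter names, the per-block rank split `rank // k`, and the scaling rule are illustrative choices, not the actual PEFT implementation):

```python
import torch
import torch.nn as nn


class GraLoRALinear(nn.Module):
    """Sketch of granular low-rank adaptation: the frozen base weight is
    viewed as a k x k grid of blocks, each with its own LoRA pair.
    Hypothetical names/hyperparameters; not the PEFT implementation."""

    def __init__(self, base: nn.Linear, rank: int = 64, k: int = 2, alpha: float = 1.0):
        super().__init__()
        assert base.out_features % k == 0 and base.in_features % k == 0
        self.base = base
        for p in self.base.parameters():  # freeze the base weight
            p.requires_grad = False
        self.k = k
        self.block_out = base.out_features // k
        self.block_in = base.in_features // k
        r = rank // k  # per-block rank (assumed split to keep parameters comparable)
        self.scaling = alpha / r
        # one (A, B) pair per block of the weight grid; B starts at zero
        # so the adapted layer initially matches the base layer exactly
        self.A = nn.Parameter(torch.randn(k, k, r, self.block_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(k, k, self.block_out, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        # split input features into k chunks, one per block column
        xs = x.split(self.block_in, dim=-1)
        deltas = []
        for i in range(self.k):  # output block row
            d = sum(xs[j] @ self.A[i, j].T @ self.B[i, j].T for j in range(self.k))
            deltas.append(d)
        return out + self.scaling * torch.cat(deltas, dim=-1)
```

Because every B block is initialized to zero, the wrapped layer reproduces the base layer's output before training, mirroring standard LoRA initialization.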

---------

Co-authored-by: HaohanTsao <andy94729@gmail.com>
pull[bot] locked and limited conversation to collaborators on Nov 18, 2025
pull[bot] added the ⤵️ pull label on Nov 18, 2025
pull[bot] merged commit 5fbdd67 into dumpmemory:main on Nov 18, 2025
