Issues: huggingface/peft
- [Feature Request] LoRA-Null: Low-Rank Adaptation via Null Space for Large Language Models (#2425, opened Mar 12, 2025 by sailfish009)
- UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization (#2424, opened Mar 12, 2025 by suhyun01150)
- load_adapter Fails When modules_to_save Are Different for Each Adapter (#2422, opened Mar 11, 2025 by saeid93)
- running forward loop using get_peft_model disables requires_grad on output (#2410, opened Mar 6, 2025 by Hamidreza3252)
- ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported. (#2388, opened Feb 19, 2025 by samuellimabraz)
- Bug when deleting adapters of a model with modules_to_save [bug] (#2381, opened Feb 17, 2025 by BenjaminBossan)
- prompt_tuning_peft tutorial raises cache layer error (#2379, opened Feb 15, 2025 by jakerobers)
- Request to integrate Structure Sparsity-based PEFT (S2FT) (#2329, opened Jan 14, 2025 by Hanyuezhuohua)
- [Warning] Merge lora module to 4-bit linear may get different generations (#2321, opened Jan 11, 2025 by steveepreston)
- Comparison of Different Fine-Tuning Techniques for Conversational AI [contributions-welcome, good first issue, help wanted] (#2310, opened Jan 7, 2025 by ImamaDev)
- Incompatibility of X-LoRA and MistralForSequenceClassification (#2281, opened Dec 13, 2024 by cyx96)