Issues: johnsmith0031/alpaca_lora_4bit

Issues list

Support for MoE models?
#156 opened Mar 29, 2024 by laoda513
Merging LoRA after finetune
#145 opened Aug 7, 2023 by gameveloster
Crashes during finetuning
#131 opened Jul 4, 2023 by gameveloster
How to change to 8-bit
#120 opened Jun 15, 2023 by leexinyu1204
Fine-tuning with 2 GPUs
#118 opened Jun 2, 2023 by shawei3000
Implementing Landmark Attention
#116 opened May 31, 2023 by juanps90
Finetuning 2-bit Quantized Models
#115 opened May 29, 2023 by kuleshov
Code reference request
#112 opened May 25, 2023 by PanQiWei
Consider using new QLoRA
#107 opened May 21, 2023 by juanps90
Version of GPTQ
#104 opened May 13, 2023 by juanps90