Add LoRA, QLoRA fine-tuning for HF models #97
Comments
yeah, sounds good |
Hey @VictorOdede, I'd like to work on this. |
Kindly mention this in the engineering channel on Discord; then @mmirman or @cartazio will give you the green light.
|
@bilal-aamer We certainly won't stop you from working on it :-) but it'd be good to get on a call with @abhigya-sodani to see what is useful to know before starting it! |
Sure will do! Did some due diligence already, need some pointers from @abhigya-sodani. |
sure bilal lets get on a call soon |
If someone is not actively working on this, I can take up this task |
Have at it!
|
#7 is also applicable/related to this ticket |
Is there any team working on this issue? |
If you're confident you can bang something together, have at it! This is one of the more complex open tickets, so be honest with yourself! That said, worst case you learn something! |
The current onsite LLM class uses full-parameter fine-tuning, which is costly. LoRA fine-tuning requires less memory and helps prevent overfitting by freezing the pretrained weights and training only small low-rank update matrices.
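To make the proposal concrete, here is a minimal NumPy sketch of the LoRA idea itself (not the HF/PEFT integration this ticket asks for): the pretrained weight `W` stays frozen, and only a low-rank pair `B @ A`, scaled by `alpha / r`, is trained. The class and parameter names here are illustrative, not from the codebase.

```python
import numpy as np

class LoRALinear:
    """Toy linear layer with a frozen base weight W and a trainable
    low-rank update B @ A, scaled by alpha / r (the LoRA formulation)."""

    def __init__(self, in_features, out_features, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        # Pretrained weight: frozen, never updated during fine-tuning.
        self.W = rng.normal(size=(out_features, in_features))
        # LoRA factors: A starts random, B starts at zero, so the initial
        # update is exactly zero and behaviour matches the base model.
        self.A = rng.normal(scale=0.01, size=(r, in_features))
        self.B = np.zeros((out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # y = x W^T + scale * x (B A)^T  -- only A and B would be trained.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(in_features=16, out_features=8, r=4)
x = np.ones((2, 16))
out = layer.forward(x)
# With B initialised to zero, the LoRA path contributes nothing yet,
# so the output equals the frozen base layer's output.
assert np.allclose(out, x @ layer.W.T)
```

The memory saving comes from the optimizer: instead of states for all of `W` (`out_features * in_features` values), only `r * (in_features + out_features)` LoRA parameters are trained. QLoRA extends this by additionally quantizing the frozen base weights to 4-bit.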