
Can a 1-bit quantized model be fine-tuned with SFT, with or without LoRA? #63

Answered by mobicham
sanjeev-bhandari asked this question in Q&A

Hi @sanjeev-bhandari, that's an issue with the peft library, not hqq.
We have our own way of doing LoRA: https://github.com/mobiusml/hqq/?tab=readme-ov-file#peft-training
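
For reference, a minimal sketch of the workflow described in the PEFT-training section of the hqq README linked above. The model id, the 1-bit settings (`nbits=1`, `group_size`), the Llama-style layer names, and the LoRA hyperparameters here are illustrative assumptions; the exact parameter names and defaults may differ across hqq versions, so check the README for your installed release.

```python
import torch
from hqq.engine.hf import HQQModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig
from hqq.core.peft import PeftUtils

# Load and quantize the base model with hqq (1-bit is assumed here for illustration;
# the model id is a placeholder).
model_id = "meta-llama/Llama-2-7b-hf"
model = HQQModelForCausalLM.from_pretrained(model_id)
quant_config = BaseQuantizeConfig(nbits=1, group_size=32)
model.quantize_model(quant_config=quant_config)

# Attach LoRA adapters with hqq's own PEFT utilities instead of the peft library.
# Layer names follow Llama-style attention/MLP modules; adjust for your architecture.
base_lora_params = {
    "lora_type": "default",
    "r": 32,
    "lora_alpha": 64,
    "dropout": 0.05,
    "train_dtype": torch.float32,
}
lora_params = {
    "self_attn.q_proj": base_lora_params,
    "self_attn.k_proj": base_lora_params,
    "self_attn.v_proj": base_lora_params,
    "self_attn.o_proj": base_lora_params,
    "mlp.gate_proj": None,  # None = no adapter on this layer
    "mlp.up_proj": None,
    "mlp.down_proj": None,
}
PeftUtils.add_lora(model, lora_params)

# ... run your SFT training loop on the adapter parameters here ...

# After training, cast the LoRA weights to the inference dtype.
PeftUtils.cast_lora_weights(model, torch.half)
```

The point of the answer stands regardless of the exact settings: the LoRA layers are added around the hqq-quantized linear modules directly, so the compatibility issues that peft has with 1-bit hqq layers don't apply.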

Replies: 1 comment · 5 replies

Answer selected by mobicham
Category: Q&A · Labels: none yet · 2 participants