Support LoRA training on GPTQ files #2714
Comments
Thank you very much!
When you try training, you get:
Been trying to work out how to train a LoRA or QLoRA on a GPTQ 4-bit model for well over a couple of weeks now; everything I read says it's doable, everything I try fails. We really need a step-by-step guide to training LoRA/QLoRA in textgen webui for these very popular and widely used models.
Do you have a link to a guide that works for training GPTQ 4-bit models inside textgen webui? I'll take a look at the link when I get home, though I have no idea how to compile something without an IDE. I wish making LoRAs for LLMs was as well documented as making LoRAs for image generation models, lol.
I'm currently training Wizard 7b uncensored on my computer, without the monkey patch, just using AutoGPTQ to load_in_4bit. For some reason this works with no monkey patch; I've followed the monkey patch guide but it just doesn't work for me. Also, in the docs folder of this repo there is a README for training with 4-bit LoRAs, and it has the instructions for using the monkey patch.
EDIT: Text gen loaded.
Interesting, going to do a fresh download and install and give it a try without attempting to do anything else. Good to see someone successfully able to train a LoRA for a GPTQ 4-bit model; I was really starting to think it was impossible. Nope, same error as above.
Do you have a link to a guide on how to do that?
@Ph0rk0z how could I train in 4-bit using monkey patches? I've tried everything I can find and keep running into different errors.
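For anyone unsure what "monkey patch" means in this context: it is the general technique of replacing a library's function or method at runtime, before the code that calls it runs, so that the 4-bit path is routed through replacement code. A minimal Python illustration of the mechanism (the class and method names here are made up for demonstration; this is not the actual GPTQ training patch):

```python
# A stand-in for a library class whose behaviour we want to change
# without editing the library's source.
class Loader:
    def load(self):
        return "4-bit path unsupported"

def patched_load(self):
    # Replacement behaviour injected at runtime.
    return "4-bit path patched in"

# The "monkey patch": reassign the attribute on the class itself,
# so every instance created afterwards (or before) uses the new code.
Loader.load = patched_load

print(Loader().load())  # -> 4-bit path patched in
```

The real patches referenced in the docs work the same way, just against the GPTQ/autograd internals instead of a toy class.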
This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.
Any news on GPTQ LoRA training that's more up to date?
Description
First of all, thank you very much for this tool, which helped me get started quickly.
Could you support training LoRA files on GPTQ models in future versions?
Additional Context
https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored-GPTQ
This is the model I use. It can load and chat (with `--wbits 4 --groupsize 128`), but it cannot train to generate LoRA files. When I start LoRA training, the command says: `LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models. (Found model type: LlamaGPTQForCausalLM)`
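The wording of that warning suggests the trainer validates the model by its class name against a fixed list, and the AutoGPTQ wrapper class `LlamaGPTQForCausalLM` is not on it even though the model underneath is LLaMA. A minimal sketch of that kind of check (the list contents and function name here are assumptions inferred from the error text, not the webui's actual code):

```python
# Hypothetical exact-name whitelist, as implied by the warning message.
SUPPORTED = (
    "LlamaForCausalLM",
    "OPTForCausalLM",
    "GPTJForCausalLM",
    "GPTNeoXForCausalLM",
)

def is_supported(model_class_name: str) -> bool:
    # An exact-match check: the GPTQ wrapper class fails the test
    # even though it wraps a LLaMA model underneath.
    return model_class_name in SUPPORTED

print(is_supported("LlamaForCausalLM"))      # True
print(is_supported("LlamaGPTQForCausalLM"))  # False: triggers the warning
```

If the check really is name-based like this, supporting GPTQ training would mean either adding the wrapper class names to the list or checking the wrapped model's architecture instead of the outer class name.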