How do we finetune the model with new data? #466
Sounds cool. But this is not on the short-term roadmap.
The goal of these integrations is to enable academia to adapt to the new era of AI and to simplify the intricacies involved. Users should be able to finetune their models to suit their data needs. I was running the 30B model this morning and the AI does not have important data about LangChain and other recent use cases from 2021 until now. I believe the data used to build the models is old. My team is looking for a no-GPU deployment like this one that can also support finetuning. What can be done to move this request ahead on the roadmap?
What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible, because you'd be training for literal months instead of days. You need a GPU to actually finetune this. This repository is only for inference/running the model.
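To see why CPU-only finetuning is considered infeasible, here is a back-of-envelope estimate using the common rule of thumb that training costs roughly 6 FLOPs per parameter per token. The token count, throughput figures, and utilization below are illustrative assumptions, not measurements of any specific hardware.

```python
# Back-of-envelope training-time estimate (all numbers are assumptions).
# Rule of thumb: training costs ~6 FLOPs per parameter per token.
def training_days(params, tokens, device_flops, utilization=0.3):
    """Estimate wall-clock days of training at a given sustained utilization."""
    total_flops = 6 * params * tokens
    seconds = total_flops / (device_flops * utilization)
    return seconds / 86400

# Assumed: 7B params, 1B finetuning tokens,
# ~1 TFLOP/s for a many-core CPU vs ~100 TFLOP/s fp16 for a modern GPU.
cpu_days = training_days(7e9, 1e9, 1e12)
gpu_days = training_days(7e9, 1e9, 100e12)
print(f"CPU: ~{cpu_days:.0f} days, GPU: ~{gpu_days:.1f} days")
```

Even with generous assumptions, the CPU estimate lands in the thousands of days, while the GPU estimate is a few weeks; the gap is simply the ratio of their throughputs.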
I think it depends on the approach to fine-tuning.
Loading only the LoRA part IS on the short-term roadmap: #457
There is the lxe/simple-llama-finetuner repo available for finetuning, but you need a GPU with at least 16GB of VRAM to finetune the 7B model.
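The 16GB figure is roughly what the arithmetic predicts for a LoRA finetune: the frozen fp16 base weights of a 7B model alone take 14GB, and the adapter, its optimizer states, and activations add the rest. The adapter size and overhead below are assumed round numbers for illustration.

```python
# Rough VRAM estimate for LoRA-finetuning a 7B model (assumed numbers).
def lora_vram_gb(n_params, bytes_per_weight=2, adapter_params=4e6,
                 adam_bytes_per_adapter_param=12, overhead_gb=1.5):
    """Frozen fp16 base weights + adapter weights + Adam states + activation overhead."""
    base = n_params * bytes_per_weight                      # frozen base model
    adapter = adapter_params * (2 + adam_bytes_per_adapter_param)  # fp16 weights + optimizer
    return (base + adapter) / 1e9 + overhead_gb

print(f"~{lora_vram_gb(7e9):.1f} GB")  # base model alone is already 14 GB in fp16
```

Because the base weights dominate, quantizing them (as QLoRA does) is what lets the same finetune fit on much smaller GPUs.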
Is there a way to fine-tune these models for reading my documents, etc., utilizing cloud hardware but with no OpenAI, Pinecone, or other non-free third-party dependencies? Code examples would be awesome (I've seen LangChain's docs but they are not detailed enough, at least not for me). @leszekhanusz @Green-Sky @PriNova @rupakhetibinit @ekolawole
@Free-Radical , try vector storage, such as Weaviate. Your query string can contain text in a natural language, the response is based on vector similarity between that string and the documents in the storage. I also tried Vespa, but it didn't work at all. The reason is a design choice that I find questionable, see vespa-engine/pyvespa#499 for details. There are other open source vector storage solutions too. |
@ch3rn0v Thank man, Weaviate looks good, better than going "raw" with FAISS. Will check out Vespa too. |
@Free-Radical you can look at https://github.com/tloen/alpaca-lora
I agree. It would be helpful to fine-tune LLaMA models using only llama.cpp on CPU.
I disagree. What if we only need to add a little data? It would be done in hours, so why not add a little fine-tuning utility?
Hopefully this will be possible someday. Like many others, I do not have the VRAM to fine-tune or create a LoRA for models. I wonder if it's possible to use the newly added CUDA acceleration in llama.cpp to fine-tune quantized models so it doesn't take ages compared to a CPU-only approach.
I'm afraid it's not as simple as a little fine-tuning utility. While you may only want to add a small amount of data, fine-tuning requires updating many weights in the model. Even a small change can have a significant impact on the entire model, so it typically involves retraining or adjusting a considerable portion of the weights.
Yes, but a small amount of data means a small number of iterations. We can also use LoRA or QLoRA to train only an adapter and make fine-tuning simpler.
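The adapter argument above comes down to simple arithmetic: LoRA replaces a full d x k weight update (d*k trainable values) with a low-rank factorization B (d x r) @ A (r x k), which has only r*(d + k) trainable values. The layer dimensions and rank below are assumed, LLaMA-7B-like round numbers.

```python
# Why LoRA makes finetuning cheap (illustrative arithmetic).
def lora_fraction(d, k, r):
    """Fraction of a d x k layer's parameters that a rank-r adapter trains."""
    full = d * k            # full-finetune trainable values for this layer
    adapter = r * (d + k)   # B (d x r) and A (r x k) combined
    return adapter / full

# Assumed 4096 x 4096 projection layer with a rank-8 adapter.
frac = lora_fraction(4096, 4096, 8)
print(f"adapter trains {frac:.2%} of the layer's parameters")
```

At rank 8 the adapter is under half a percent of the layer, which is why the optimizer state and gradient memory shrink so dramatically compared to a full finetune.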
This issue was closed because it has been inactive for 14 days since being marked as stale.
Can we have a finetune.cpp or finetune.exe file to incorporate new data into the model? The use case would be to design an AI model that can do more than just general chat: it can become very knowledgeable in the specific topics it is finetuned on. Also, after creating finetune.exe, please ensure no GPU is required for the entire process, because that is what makes this repo awesome in the first place.