Unify Efficient Fine-Tuning of 100+ LLMs
Updated May 23, 2024 · Python
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
Firefly: a large-model training toolkit supporting Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
ChatGLM-6B fine-tuning and Alpaca fine-tuning
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
🐋 MindChat (漫谈) — a mental-health large language model: chat freely about life's road, and face its frosts with a smile
🐳 Aurora is a Chinese-version MoE model. Aurora is further work based on Mixtral-8x7B that activates the model's Chinese open-domain chat capability.
ChatGLM2-6B fine-tuning and Alpaca fine-tuning
🌿 Sunsimiao (孙思邈): a safe, reliable, and accessible Chinese medical large language model
LongQLoRA: Extend the Context Length of LLMs Efficiently
Tuning the Finetuning: An exploration of achieving success with QLoRA
Finetuning Some Wizard Models With QLoRA
Use QLoRA to tune an LLM in PyTorch Lightning with Hugging Face + MLflow
Small (7B and below) fine-tuned LLMs for a diverse set of useful tasks
A Gradio web UI for large language models. Supports LoRA/QLoRA fine-tuning, RAG (retrieval-augmented generation), and chat
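Most of the repositories above build on LoRA/QLoRA-style adapters. As background (not code from any listed project): LoRA freezes the base weight matrix W and trains only a low-rank update, so the effective weight is W + (alpha/r)·B·A, where A is r×in and B is out×r; QLoRA additionally keeps W quantized to 4 bits. A minimal dependency-free sketch of that low-rank update, with hypothetical helper names:

```python
# Illustrative LoRA-style low-rank update in pure Python (no framework).
# Only the small matrices A (r x in) and B (out x r) would be trained,
# i.e. r * (in + out) parameters instead of in * out.

def matmul(X, Y):
    """Naive matrix multiply for small lists-of-lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_weight(W, A, B, alpha, r):
    """Return the adapted weight W + (alpha / r) * (B @ A)."""
    scale = alpha / r
    BA = matmul(B, A)  # (out x r) @ (r x in) -> out x in
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

if __name__ == "__main__":
    # Frozen 2x2 base weight; rank-1 adapter (r = 1, alpha = 1).
    W = [[1.0, 0.0], [0.0, 1.0]]
    A = [[1.0, 2.0]]            # r x in  = 1 x 2
    B = [[3.0], [4.0]]          # out x r = 2 x 1
    print(lora_weight(W, A, B, alpha=1.0, r=1))  # [[4.0, 6.0], [4.0, 9.0]]
```

In the real libraries (e.g. Hugging Face PEFT) this update is attached per linear layer and merged back into W after training; the sketch only shows the arithmetic those tools share.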