A WebUI for Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
Firefly: a training tool for large language models, supporting Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other models
Fine-tuning ChatGLM-6B with PEFT | Efficient ChatGLM fine-tuning based on PEFT
🦖 Learn about LLMs, LLMOps, and vector DBs for free by designing, training, and deploying a real-time financial advisor LLM system ~ source code + video & reading materials
AirLLM 70B inference with single 4GB GPU
ChatGLM-6B fine-tuning and Alpaca fine-tuning
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
🐋 MindChat (漫谈) — a mental health large language model: talking through life's journey, facing its hardships with a smile
Fine-tuning the Falcon-7B LLM with QLoRA on a mental health conversational dataset
Fine-tuning Falcon-7B and Llama 2 with QLoRA to create an advanced AI model with a profound understanding of the Indian legal context.
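Several of the projects above rely on QLoRA, which keeps the frozen base weights in 4-bit precision and dequantizes them on the fly. A minimal pure-Python sketch of the underlying mechanism, blockwise absmax quantization to 16 codes (the evenly spaced levels here are illustrative; real QLoRA uses NF4 levels derived from a normal distribution):

```python
# Blockwise absmax 4-bit quantization sketch (pure Python, illustrative).
# Real QLoRA uses NF4 code values; here 16 evenly spaced levels in [-1, 1]
# stand in for them to show the mechanics.

LEVELS = [-1.0 + 2.0 * i / 15 for i in range(16)]  # 16 evenly spaced codes

def quantize_block(xs):
    """Quantize a block of floats to 4-bit codes plus one absmax scale."""
    scale = max(abs(x) for x in xs) or 1.0
    codes = []
    for x in xs:
        normed = x / scale                      # now in [-1, 1]
        # pick the index of the nearest of the 16 levels
        code = min(range(16), key=lambda i: abs(LEVELS[i] - normed))
        codes.append(code)
    return codes, scale

def dequantize_block(codes, scale):
    """Recover approximate floats from the codes and the stored scale."""
    return [LEVELS[c] * scale for c in codes]

block = [0.02, -0.5, 0.31, 0.99, -0.07, 0.0]
codes, scale = quantize_block(block)
approx = dequantize_block(codes, scale)
# Each reconstructed value is within half a quantization step of the original.
step = scale * (2.0 / 15)
assert all(abs(a - b) <= step / 2 + 1e-9 for a, b in zip(block, approx))
```

Storing one scale per small block (64 weights in QLoRA) keeps outliers from wrecking the precision of the rest of the block.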
🐳 Aurora is a Chinese-language MoE model. Built on Mixtral-8x7B, it activates the model's Chinese open-domain chat capability.
ChatGLM2-6B fine-tuning and Alpaca fine-tuning
🌿 Sunsimiao (孙思邈): a safe, reliable, and accessible Chinese medical large language model
LongQLoRA: Extend the Context Length of LLMs Efficiently
End-to-end generative AI industry projects on LLM models, with deployment
Implements pre-training, supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF), to train and fine-tune the LLaMA2 model to follow human instructions, similar to InstructGPT or ChatGPT, but on a much smaller scale.
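Nearly all of the fine-tuning projects listed here use LoRA-style adapters: a frozen base weight W is augmented with a trainable low-rank update, giving an effective weight W + (alpha/r)·B·A, where B starts at zero so training begins from the base model. A minimal pure-Python sketch of that forward pass (the shapes and zero-initialized B follow the LoRA formulation; the function names here are illustrative):

```python
# LoRA forward pass sketch: y = x @ (W + (alpha/r) * B @ A)^T, pure Python.
# B is initialized to zeros, so before training the adapter is a no-op;
# only A and B would receive gradient updates, W stays frozen.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    """Transpose a matrix given as a list of rows."""
    return [list(col) for col in zip(*m)]

def lora_linear(x, W, A, B, alpha, r):
    """x: batch of row vectors; W: (out, in); A: (r, in); B: (out, r)."""
    base = matmul(x, transpose(W))      # x @ W^T           -> (batch, out)
    down = matmul(x, transpose(A))      # x @ A^T           -> (batch, r)
    up = matmul(down, transpose(B))     # (x @ A^T) @ B^T   -> (batch, out)
    s = alpha / r                       # LoRA scaling factor
    return [[b + s * u for b, u in zip(brow, urow)]
            for brow, urow in zip(base, up)]

# Tiny example: in=3, out=2, r=1, zero-initialized B.
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
A = [[0.1, 0.2, 0.3]]
B = [[0.0], [0.0]]                      # zeros: adapter contributes nothing yet
x = [[1.0, 2.0, 3.0]]
assert lora_linear(x, W, A, B, alpha=16, r=1) == [[1.0, 2.0]]
```

Because only A (r×in) and B (out×r) are trained, the trainable parameter count scales with r rather than with in×out, which is what makes fine-tuning 7B+ models feasible on a single consumer GPU.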