ms-swift: Use PEFT or Full-parameter to finetune 300+ LLMs or 40+ MLLMs. (Qwen2, GLM4, Internlm2.5, Yi, Llama3, Llava, MiniCPM-V, Deepseek, Baichuan2, Gemma2, Phi3-Vision, ...)
ChatGLM-6B fine-tuning and Alpaca fine-tuning
Cornucopia (聚宝盆): open-source, commercially usable Chinese financial LLMs, together with an efficient, lightweight training framework for vertical-domain LLMs (pretraining, SFT, RLHF, quantization, etc.)
Code and datasets for "Character-LLM: A Trainable Agent for Role-Playing"
This project draws on representative prior work to evaluate SFT data along multiple dimensions and to filter SFT data automatically.
Fine-tune the Baichuan pretrained model with the QLoRA method
EasyRLHF aims to provide an easy and minimal interface to train aligned language models, using off-the-shelf solutions and datasets
Train expert conversational role-play LLMs with synthetic data
DICE: Detecting In-distribution Data Contamination with LLM's Internal State
Advancing Prompt Evolution through Hybridization