
We unified the interfaces of instruction-tuning data (e.g., CoT data, still being expanded), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning), building an LLM-IFT research platform that is easy for researchers to get started with. Meanwhile, the tabular_llm branch builds a Tabular LLM for table intelligence tasks.


中文 | English

Alpaca-CoT

Alpaca-CoT: An Instruction-Tuning Platform with Unified Interface of Instruction Collection, Parameter-efficient Methods, and Large Language Models


This is the repository for the Alpaca-CoT project, which aims to build an instruction finetuning (IFT) platform with extensive instruction collection (especially the CoT datasets) and a unified interface for various large language models and parameter-efficient methods. We are constantly expanding our instruction-tuning data collection, and integrating more LLMs and more parameter-efficient methods. In addition, we created a new branch tabular_llm to build a Tabular LLM for solving Table Intelligence Tasks.

You are warmly welcome to provide us with any instruction-tuning datasets (or their sources) we have not yet collected. We will format them uniformly, train the Alpaca model (and other LLMs in the near future) on these datasets, open-source the model checkpoints, and conduct extensive empirical studies. We hope that our project can make a modest contribution to the open-source development of large language models and lower the threshold for NLP researchers to get started.

You can also choose to join our WeChat group chat and communicate with more people who share the same interests. At present, the group has too many members to be joined directly through the QR code, so you need to contact me first to be added.

News

  • ⚠ If you want to use methods other than LoRA, please install the edited PEFT version shipped in our project: pip install -e ./peft.

  • 6.25: Added model evaluation code, including BELLE and MMCU.

  • 5.20: Fixed bugs in model saving and added wandb support.

  • 5.15: More datasets, such as GPT4Tools, Auto CoT and pCLUE, were added.

  • 🚀5.5: A new branch tabular_llm is created to build a Tabular LLM. We collect instruction fine-tuning data for table-related tasks like table question answering and use them to fine-tune LLMs in this repo.

  • 🚀5.4: All parameter-efficient methods in PEFT (e.g., p-tuning) were merged, and can be selected directly via a hyper-parameter.

  • 🚀5.4: LLM MOSS was merged.

  • 4.21: Datasets GAOKAO, camel, FLAN-Muffin, COIG are collected and formatted.

  • 4.15: Datasets webGPT, dolly, baize, hh-rlhf, OIG(part) are collected and formatted.

  • 4.12: Now you can try Alpaca-CoT on Google Colab.

  • 4.11: Added the multi-turn conversation function by @paulcx.

  • 4.9: Datasets firefly, instruct, Code Alpaca are collected and formatted, which can be found here.

  • 4.7: Added the Parameter merging, Local chatting, Batch predicting and Web service building functions by @weberr.

  • 4.4: Datasets GPTeacher, Guanaco, HC3, prosocial-dialog, belle-chat&belle-math, xP3 and natural-instructions are collected and formatted.

  • 4.3: The Chinese CoT dataset CoT_CN_data.json can be found here.

Overview

LLaMA [1] is a great work that demonstrates amazing zero-shot and few-shot abilities. It significantly reduces the cost of training, finetuning, and using competitive large language models; i.e., LLaMA-13B outperforms GPT-3 (175B), and LLaMA-65B is competitive with PaLM-540B. Recently, to boost the instruction-following ability of LLaMA, Stanford Alpaca [2] finetuned LLaMA-7B on 52K instruction-following examples generated with the Self-Instruct [3] technique. However, at present, the LLM research community still faces three challenges: 1. even LLaMA-7B still has high requirements for computing resources; 2. there are few open-source datasets for instruction finetuning; and 3. there is a lack of empirical study on how various types of instructions affect model abilities, such as the ability to respond to Chinese instructions and to perform CoT reasoning.

To this end, we propose this project, which leverages various improvements that were subsequently proposed, with the following advantages:

    1. This repo contains code, modified from here and here, which can finetune LLaMA cheaply and efficiently (without performance degradation compared to Stanford Alpaca) by using low-rank adaptation (LoRA) [4], PEFT and bitsandbytes. The 7B, 13B and 30B versions of LLaMA can easily be trained on a single 80G A100.
    2. The models published in this repo significantly improve CoT (reasoning) capability.
    3. The models published in this repo significantly improve the ability to follow Chinese instructions.
    4. This repo contains a continuously growing collection of instruction-finetuning datasets, which so far includes English, Chinese and CoT instructions. A collection of checkpoints trained on various instruction datasets is also provided.
    5. This repo integrates multiple LLMs and unifies their interfaces, so they can be switched easily through a hyperparameter. Currently it includes LLaMA, ChatGLM [5], Bloom [6] and MOSS, and more will be added in the future so that researchers can easily invoke and compare different LLMs.
    6. This repo integrates multiple parameter-efficient methods and unifies their interfaces, so they can be switched easily through a hyperparameter. Currently it includes LoRA, P-tuning [5], AdaLoRA and prefix tuning, and more will be added in the future so that researchers can easily invoke and compare different parameter-efficient methods.
    7. This repo contains extensive empirical studies and qualitative analysis, which may provide valuable findings and promote future exploration of LLMs.

To the best of our knowledge, this work is the first to study CoT reasoning based on LLaMA and Alpaca. Therefore, we abbreviate our work to Alpaca-CoT.

Data Collection

The relative sizes of the collected datasets are shown in the figure below:


Referring to this (@yaodongC), we labeled each collected dataset according to the following rules:

(Lang)Lingual-Tags:

  • EN: Instruction datasets in English
  • CN: Instruction datasets in Chinese
  • ML: [Multi-lingual] Instruction datasets in multiple languages

(Task)Task-Tags:

  • MT: [Multi-task] Datasets containing multiple tasks
  • TS: [Task-specific] Datasets tailored for specific tasks

(Gen)Generation-method:

  • HG: [Human Generated Dataset] Datasets created by humans
  • SI: [Self-Instruct] Datasets generated using self-instruct methods
  • MIX: [Mixed Dataset] Dataset contains both human and machine generated data
  • COL: [Collection of Dataset] Dataset made from a collection of other datasets

Statistics

| Dataset | Nums | Lang | Task | Gen | Type | Src | URL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Chain of Thought | 74771 | EN/CN | MT | HG | instruct with cot reasoning | annotating CoT on existing data | download |
| GPT4all | 806199 | EN | MT | COL | code, stories and dialogs | distillation from GPT-3.5-turbo | download |
| GPTeacher | 29013 | EN | MT | SI | general, roleplay, toolformer | GPT-4 & toolformer | download |
| Guanaco | 534610 | ML | MT | SI | various linguistic tasks | text-davinci-003 | download |
| HC3 | 37175 | EN/CN | TS | MIX | dialogue evaluation | human or ChatGPT | download |
| alpaca | 52002 | EN | MT | SI | general instruct | text-davinci-003 | download |
| Natural Instructions | 5040134 | ML | MT | COL | diverse nlp tasks | human annotated datasets collection | download |
| belle_cn | 1079517 | CN | TS/MT | SI | general, mathematical reasoning, dialogue | text-davinci-003 | download |
| instinwild | 52191 | EN/CN | MT | SI | generation, open-qa, mind-storm | text-davinci-003 | download |
| prosocial dialog | 165681 | EN | TS | MIX | dialogue | GPT-3 rewrites questions + humans feedback manually | download |
| finance_en | 68912 | EN | TS | COL | financial related qa | GPT-3.5 | download |
| xP3 | 78883588 | ML | MT | COL | a collection of prompts & datasets across 46 languages & 16 NLP tasks | human annotated datasets collection | download |
| firefly | 1649398 | CN | MT | COL | 23 nlp tasks | human annotated datasets collection | download |
| instruct | 888969 | EN | MT | COL | augmentation of GPT4All, Alpaca, open-source Meta datasets | augmentation performed using the advanced NLP tools provided by AllenAI | download |
| Code Alpaca | 20022 | EN | TS | SI | code generation, editing, optimization | text-davinci-003 | download |
| Alpaca_GPT4 | 52002 | EN/CN | MT | SI | general instruct | generated by GPT-4 using Alpaca | download |
| webGPT | 18994 | EN | TS | MIX | information retrieval (IR) QA | fine-tuned GPT-3; each instruction has two outputs, the better one is selected | download |
| dolly 2.0 | 15015 | EN | TS | HG | closed QA, summarization, etc., with Wikipedia as references | human annotated | download |
| baize | 653699 | EN | MT | COL | a collection from Alpaca, Quora, StackOverFlow and MedQuAD questions | human annotated datasets collection | download |
| hh-rlhf | 284517 | EN | TS | MIX | dialogue | dialog between human and RLHF models | download |
| OIG(part) | 49237 | EN | MT | COL | created from various tasks, such as question answering | data augmentation, human annotated datasets collection | download |
| GAOKAO | 2785 | CN | MT | COL | multiple-choice, fill-in-the-blank and open-ended questions from examinations | human annotated | download |
| camel | 760620 | EN | MT | SI | role-playing conversations in AI Society, Code, Math, Physics, Chemistry, Biology | gpt-3.5-turbo | download |
| FLAN-Muffin | 1764800 | EN | MT | COL | 60 nlp tasks | human annotated datasets collection | download |
| COIG(FlagInstruct) | 298428 | CN | MT | COL | collected from Exam, Translated, Human Value Alignment Instructions and Counterfactual Correction Multi-round Chat | automatic tools and manual verification | download |
| GPT4Tools | 71446 | EN | MT | SI | a collection of tool-related instructions | gpt-3.5-turbo | download |
| ShareChat | 1663241 | EN | MT | MIX | general instruct | crowdsourcing to collect conversations between people and ChatGPT (ShareGPT) | download |
| Auto CoT | 5816 | EN | MT | COL | arithmetic, commonsense, symbolic, and other logical reasoning tasks | human annotated datasets collection | download |
| MOSS | 1583595 | EN/CN | TS | SI | general instruct | text-davinci-003 | download |
| ultrachat | 28247446 | EN | | | questions about the world, writing and creation, assistance on existing materials | two separate gpt-3.5-turbo | download |
| Chinese-medical | 792099 | CN | TS | COL | questions about medical advice | crawl | download |
| CSL | 396206 | CN | MT | COL | paper text generation, keyword extraction, text summarization and text classification | crawl | download |
| pCLUE | 1200705 | CN | MT | COL | general instruct | | download |
| news_commentary | 252776 | CN | TS | COL | translation | | download |
| StackLLaMA | todo | EN | | | | | |

Download

You can download all the formatted data here. Then you should put them in the data folder.

You can download all checkpoints trained on various types of instruction data from here. Then, after setting LoRA_WEIGHTS (in generate.py) to the local path, you can run model inference directly.

Data Formatting

All data in our collection is formatted into the same template, where each sample looks as follows:

[
  {
    "instruction": instruction string,
    "input": input string,   # (may be empty)
    "output": output string
  }
]

Note that, for CoT datasets, we first use the templates provided by FLAN to change the original dataset into various Chain-of-Thought forms, and then convert it to the above format. The formatting script can be found here.
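For illustration, here is a minimal sketch of such a conversion. The raw field names and the template string are simplified assumptions for this example, not the exact FLAN templates used by the actual script:

import json

# A hypothetical raw CoT sample: a question, a chain-of-thought rationale and a final answer.
raw = {
    "question": "If there are 3 cars and each car has 4 wheels, how many wheels are there?",
    "rationale": "Each car has 4 wheels, so 3 cars have 3 * 4 = 12 wheels.",
    "answer": "12",
}

# Apply one simplified FLAN-style template: ask for step-by-step reasoning
# and place the rationale before the final answer in the output.
formatted = {
    "instruction": raw["question"] + " Let's think step by step.",
    "input": "",  # may be empty
    "output": raw["rationale"] + " The answer is " + raw["answer"] + ".",
}

with open("CoT_formatted.json", "w", encoding="utf-8") as f:
    json.dump([formatted], f, ensure_ascii=False, indent=2)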

Multi-interface Unified Platform

Setup

pip install -r requirements.txt

Note: make sure Python >= 3.9 when finetuning ChatGLM.

PEFT

  • If you want to use methods other than LoRA, please install the edited PEFT version included in our project:
pip install -e ./peft

Instruction Finetuning

In order for researchers to conduct systematic IFT research on LLMs, we have collected different types of instruction data, integrated multiple LLMs, and unified their interfaces, making it easy to customize the desired combination:

  • --model_type: Set the LLM you want to use. Currently, [llama, chatglm, bloom, moss] are supported. The latter two have strong Chinese capabilities, and more LLMs will be integrated in the future.
  • --peft_type: Set the PEFT method you want to use. Currently, [lora, adalora, prefix tuning, p tuning, prompt] are supported.
  • --data: Set the data type used for IFT to flexibly tailor the desired instruction-following ability. For example, for strong reasoning ability, set "alpaca-cot"; for strong Chinese ability, set "belle1.5m"; for coding and story generation ability, set "gpt4all"; and for finance-related response ability, set "finance".
  • --model_name_or_path: Set this to load different versions of the model weights for the target LLM --model_type. For example, to load the 13B version of the LLaMA weights, you can set decapoda-research/llama-13b-hf.

Single GPU

  • for LLaMA
python3 uniform_finetune.py --model_type llama --model_name_or_path decapoda-research/llama-7b-hf \
    --data alpaca-belle-cot --lora_target_modules q_proj v_proj \
    --per_gpu_train_batch_size 4 --learning_rate 3e-4 --epochs 1

Note: for multiple datasets, you can use --data like --data ./data/alpaca.json ./data/finance.json <path2yourdata_1>

  • for ChatGLM
python3 uniform_finetune.py   --model_type chatglm --model_name_or_path THUDM/chatglm-6b \
    --data alpaca-belle-cot --lora_target_modules query_key_value \
    --lora_r 32 --lora_alpha 32 --lora_dropout 0.1 --per_gpu_train_batch_size 2 \
    --learning_rate 2e-5 --epochs 1

Note that load_in_8bit is not yet suitable for ChatGLM, so the batch size must be smaller than for the other models.

  • for BLOOM
python3 uniform_finetune.py   --model_type bloom --model_name_or_path bigscience/bloomz-7b1-mt \
    --data alpaca-belle-cot --lora_target_modules query_key_value \
    --per_gpu_train_batch_size 4 --learning_rate 3e-4 --epochs 1
  • for MOSS
python3 uniform_finetune.py --model_type moss --model_name_or_path fnlp/moss-moon-003-sft \
    --data alpaca --lora_target_modules q_proj v_proj --per_gpu_train_batch_size 1 \
    --learning_rate 3e-4 --epochs 3

Note that you can also pass a local path (where the LLM weights are saved) to --model_name_or_path, and the data type --data can be freely set according to your interests.
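For reference, samples in the unified format are typically rendered into a single training prompt before tokenization. The sketch below follows the standard Alpaca prompt template; the exact template used by uniform_finetune.py may differ:

def generate_prompt(sample: dict) -> str:
    """Render one formatted sample into an Alpaca-style training prompt."""
    if sample.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{sample['instruction']}\n\n"
            f"### Input:\n{sample['input']}\n\n"
            f"### Response:\n{sample['output']}"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{sample['instruction']}\n\n"
        f"### Response:\n{sample['output']}"
    )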

Multiple GPUs

torchrun --nnodes 1 --nproc_per_node $ngpu uniform_finetune.py $args --data $data 
  • for LLaMA
python3 -m torch.distributed.launch --nproc_per_node 4  \
    --nnodes=1 --node_rank=0 --master_addr=xxx --master_port=yyy uniform_finetune.py \
    --model_type llama --model_name_or_path decapoda-research/llama-7b-hf \
    --data alpaca-belle-cot --lora_target_modules q_proj v_proj \
    --per_gpu_train_batch_size 4 --learning_rate 3e-4 --epochs 1
  • for ChatGLM
python3 -m torch.distributed.launch --nproc_per_node 4  \
    --nnodes=1 --node_rank=0 --master_addr=xxx --master_port=yyy \
    uniform_finetune.py   --model_type chatglm --model_name_or_path THUDM/chatglm-6b \
    --data alpaca-belle-cot --lora_target_modules query_key_value \
    --lora_r 32 --lora_alpha 32 --lora_dropout 0.1 --per_gpu_train_batch_size 2 \
    --learning_rate 2e-5 --epochs 1

Note that load_in_8bit is not yet suitable for ChatGLM, so the batch size must be smaller than for the other models.

  • for BLOOM
python3 -m torch.distributed.launch --nproc_per_node 4  \
    --nnodes=1 --node_rank=0 --master_addr=xxx --master_port=yyy \
    uniform_finetune.py   --model_type bloom --model_name_or_path bigscience/bloomz-7b1-mt \
    --data alpaca-belle-cot --lora_target_modules query_key_value \
    --per_gpu_train_batch_size 4 --learning_rate 3e-4 --epochs 1

Inference

python3 generate.py  --data alpaca-belle-cot --model_type llama

python3 generate.py  --data alpaca-belle-cot --model_type chatglm

python3 generate.py  --data alpaca-belle-cot --model_type bloom

More details of instruction finetuning and inference can be found here, from which our code was modified. Note that the folders saved-xxx7b are the save paths for LoRA weights, and LLaMA weights are automatically downloaded from Hugging Face.

Inference Hyper-parameter Explanation

top_p=0.9,
        # Moderately increase the probability threshold of nucleus sampling to enlarge the candidate token set and increase generation diversity.

temperature=1.0,
        # A low temperature would severely polarize the probability distribution of generated words, degenerating the generation strategy into greedy decoding.

do_sample=True,
        # do_sample is set to False by default. Setting it to True switches generation to a beam-search multinomial sampling decoding strategy.

no_repeat_ngram_size=6,
        # Set the probability of repeating an n-gram to 0, ensuring that no 6-gram appears twice. This setting is a preliminary empirical exploration.

repetition_penalty=1.8,
        # Reduce the probability of re-generating words that have already appeared. This setting is a preliminary empirical exploration.
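Put together, these settings correspond to a generation call like the following minimal sketch, which assumes a Hugging Face causal LM with LoRA weights loaded via peft (the model name and LoRA path are placeholders):

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

base_model = "decapoda-research/llama-7b-hf"               # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, "saved-llama7b")  # placeholder LoRA_WEIGHTS path

inputs = tokenizer("Below is an instruction that describes a task...", return_tensors="pt")
generation_config = GenerationConfig(
    do_sample=True,           # sample instead of greedy decoding
    top_p=0.9,                # nucleus sampling threshold
    temperature=1.0,          # keep the distribution from over-sharpening
    no_repeat_ngram_size=6,   # forbid any 6-gram from appearing twice
    repetition_penalty=1.8,   # penalize tokens that already appeared
)
output_ids = model.generate(
    input_ids=inputs["input_ids"],
    generation_config=generation_config,
    max_new_tokens=256,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))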

Parameter merging

python3 merge.py --model_type llama --size 7b --lora_dir xxx --merged_dir yyy
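Conceptually, merging folds the trained low-rank matrices back into the base weights (W' = W + (alpha / r) * B A) so the model can be served without the peft runtime. Below is a minimal sketch with the peft library; the paths are placeholders and merge.py may differ in detail:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
lora = PeftModel.from_pretrained(base, "saved-llama7b")   # placeholder --lora_dir

merged = lora.merge_and_unload()          # fold the LoRA updates into the base weights
merged.save_pretrained("merged-llama7b")  # placeholder --merged_dir

tokenizer = AutoTokenizer.from_pretrained("decapoda-research/llama-7b-hf")
tokenizer.save_pretrained("merged-llama7b")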

Local chatting

python3 server.py --model_type chatglm --size 6b --lora_dir xxx

Batch predicting

python3 predict.py --model_type chatglm --size 6b --data for_dict_data --lora_dir xxx --result_dir yyy

Web service building

python3 web.py --model_type chatglm --size 6b --lora_dir xxx
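A web service of this kind is typically a thin wrapper around model inference. The sketch below uses gradio as an assumed example framework; web.py may be implemented differently:

import gradio as gr

def chat(instruction: str) -> str:
    # Placeholder: the real service would run the finetuned model here,
    # e.g. the generate call sketched in the inference section above.
    return "model response for: " + instruction

gr.Interface(fn=chat, inputs="text", outputs="text",
             title="Alpaca-CoT demo").launch(server_name="0.0.0.0")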

Quantitative Analysis

Note: The following figure shows statistics of the datasets collected as of March 26, and is displayed only as a motivation for data collection. More datasets have been collected since, such as finance-related instruction datasets. (figure: data collection statistics) The current collection of instruction-finetuning datasets consists mainly of three parts:

  • alpaca_data_cleaned.json: about 52K English instruction-following training samples.
  • CoT_data.json: 9 CoT datasets involving about 75k samples. (published by FLAN [7])
  • belle_data_cn.json: about 0.5M Chinese instruction-following training samples. (published by BELLE [8])

Ablation of CoT and Chinese Instructions

(figure: ablation examples) "w/o CoT" and "w/o CN" denote models that exclude CoT data and Chinese instructions from their instruction-finetuning data, respectively.

The above table shows two examples (involving numerical calculations) that require a certain amount of reasoning ability to respond correctly. As shown in the middle column, Ours w/o CoT fails to generate the correct response, which shows that once the finetuning data contains no CoT data, the model's reasoning ability decreases significantly. This further demonstrates that CoT data is essential for LLMs.

(figure: ablation examples)

The above table shows two examples that require the ability to respond to Chinese instructions. As shown in the right column, either the content generated by Ours w/o CN is unreasonable, or Ours w/o CN answers the Chinese instructions in English. This shows that removing Chinese data during finetuning makes the model unable to handle Chinese instructions, and further demonstrates the need to collect Chinese instruction-finetuning data.

(figure: ablation example)

The above table shows a relatively difficult example, which requires both a certain accumulation of knowledge of Chinese history and the ability to state historical events logically and completely. As shown in this table, Ours w/o CN can only generate a short and erroneous response, because, lacking Chinese finetuning data, it naturally lacks the corresponding knowledge of Chinese history. Although Ours w/o CoT lists some relevant Chinese historical events, its logic of expression is self-contradictory, which is caused by the lack of CoT data.

In summary, models finetuned on our complete dataset (English, Chinese, and CoT instruction data) show significantly improved reasoning and Chinese instruction-following abilities.

The Effect of CoT Data

(figure: CoT comparison) Samples in odd-numbered rows do not apply the CoT prompt, such as "step-by-step reasoning". Both Ours(w/CoT) and Alpaca are based on LLaMA-7B, and the only difference between the two is that the instruction-finetuning data of Ours(w/CoT) contains extra CoT data compared to that of Alpaca.

From the above table, we find that:

  • Ours(w/CoT) always generates the correct rationale before the answer, while Alpaca fails to generate any reasonable rationale, as shown in the first 4 examples (commonsense questions). This shows that using CoT data for finetuning can significantly improve reasoning ability.
  • For Ours(w/CoT), the CoT prompt (e.g., concatenating 'step-by-step' with the input question) has little effect on easy examples (e.g., commonsense questions) but an important effect on challenging questions (e.g., questions requiring reasoning, like the last four examples).
  • For Alpaca, the CoT prompt has little or even a negative effect. For the last two examples, after adding the CoT prompt, Alpaca changes its correct answer to a wrong one. This may be due to the inconsistency between the input forms at finetuning and inference time.

The Effect of Chinese Instruction Data

Quantitative comparison of responses to Chinese instructions. (figure: CN_compare_CN)

Our model is finetuned from 7B LLaMA on 52K English instructions and 0.5M Chinese instructions. Stanford Alpaca (our reimplementation) is finetuned from 7B LLaMA on 52K English instructions. BELLE is finetuned from 7B BLOOM on 2M Chinese instructions.

From the above table, several observations can be found:

  • Compared to Alpaca, ours (w/ CN) has a stronger ability to understand Chinese instructions. For the first example, Alpaca fails to distinguish between the instruction part and the input part, while ours does.
  • Chinese instruction-finetuning data can significantly enhance the ability to interact in Chinese. For the second example, ours (w/ CN) not only provides the correct code but also the corresponding Chinese comments, while Alpaca does not. In addition, as shown in examples 3-5, Alpaca can only respond to Chinese instructions with English responses.
  • Compared to BELLE, the performance of ours (w/ CN) on instructions requiring an open response (as shown in the last two examples) still needs to be improved. BELLE's outstanding performance on such instructions is due to: 1. its BLOOM backbone encountering much more multilingual data during pre-training; and 2. its Chinese instruction-finetuning data being larger than ours, i.e., 2M vs. 0.5M.

Quantitative comparison of responses to English instructions. The purpose of this subsection is to explore whether finetuning on Chinese instructions has a negative impact on Alpaca. (figure: CN_compare_EN)

From the above table, we find that:

  • Finetuning with Chinese instruction data does not weaken the original English instruction-following ability; on the contrary, it even brings a certain enhancement in generating better responses to English instructions. The responses of ours (w/ CN) are more detailed than those of Alpaca; e.g., for the third example, ours (w/ CN) lists three more provinces than Alpaca.

Citation

Please cite the repo if you use the data collection, code, and experimental findings in this repo.

@misc{alpaca-cot,
  author = {Qingyi Si and Tong Wang and Naibin Gu and Rui Liu and Zheng Lin},
  school = {Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China},
  title = {Alpaca-CoT: An Instruction-Tuning Platform with Unified Interface of Instruction Collection, Parameter-efficient Methods, and Large Language Models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/PhoebusSi/alpaca-CoT}},
}

For data and models, please also cite the sources of the original data, the parameter-efficient methods and the LLMs.
