Intermediate Pre-training


By performing intermediate multi-task learning on T5, we obtain a Boolean Answer Generator. We have released this intermediate model as unieval-intermediate; starting from it, you can train a custom evaluator for a specific NLG task.
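
As a quick reference, here is a minimal sketch of loading the released checkpoint with Hugging Face transformers and querying it as a Boolean Answer Generator. The hub id MingZhong/unieval-intermediate and the exact prompt wording are assumptions for illustration, not prescribed by this README:

from transformers import AutoTokenizer, T5ForConditionalGeneration

# Hub id is an assumption; point this at your local copy if needed.
model_id = "MingZhong/unieval-intermediate"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# A Boolean QA-style input: the model is trained to answer "Yes" or "No".
src = "question: Is this a fluent sentence? </s> sentence: The cat sat on the mat."
inputs = tokenizer(src, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # "Yes" or "No"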

Pre-training Data

In total, we use data from four tasks to perform intermediate multi-task learning; per-task statistics are in data_info.txt. All the pre-training data, already converted to the Boolean QA format, can be found here. Please unzip it and put it in ./data.
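
The training command below reads data/intermediate_train.json, using a src column (a Boolean question plus its context) and a tgt column ("Yes" or "No"). Here is a minimal sketch for sanity-checking the unzipped file; whether it is stored as a single JSON array or as JSON Lines is an assumption, so the sketch handles both:

import json

path = "data/intermediate_train.json"
with open(path, encoding="utf-8") as f:
    if f.read(1) == "[":  # a single JSON array
        f.seek(0)
        examples = json.load(f)
    else:                 # JSON Lines: one object per line
        f.seek(0)
        examples = [json.loads(line) for line in f if line.strip()]

print(len(examples), "examples")
for ex in examples[:3]:
    print("src:", ex["src"][:120])
    print("tgt:", ex["tgt"])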

Training

Run the following script to perform intermediate pre-training:

export TOKENIZERS_PARALLELISM=true
export OMP_NUM_THREADS=1

CUDA_VISIBLE_DEVICES=0,1,2 \
python -m torch.distributed.launch --nproc_per_node 3 train_seq2seq.py \
    --model_name_or_path google/t5-v1_1-large \
    --do_train \
    --train_file data/intermediate_train.json \
    --text_column src \
    --summary_column tgt \
    --output_dir ./inter_model \
    --per_device_train_batch_size 3 \
    --gradient_accumulation_steps 4 \
    --max_source_length 1024 \
    --max_target_length 16 \
    --save_strategy epoch \
    --num_train_epochs 10 \
    --ddp_find_unused_parameters False

  • The batch size can be adjusted to match your GPUs (with the settings above, the effective batch size is 3 GPUs × 3 per device × 4 gradient-accumulation steps = 36).
  • We use the checkpoint from the second epoch as unieval-intermediate.
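
Since --save_strategy epoch is set, the Hugging Face Trainer writes one checkpoint-<global_step> directory per epoch under ./inter_model, so the end of the second epoch corresponds to the second-smallest step. A minimal sketch for loading it (this assumes the standard Trainer directory convention; if the tokenizer was not saved alongside the checkpoint, load it from google/t5-v1_1-large instead):

import os
from transformers import AutoTokenizer, T5ForConditionalGeneration

output_dir = "./inter_model"
# One checkpoint-<global_step> directory is written per epoch; sort by step.
ckpts = sorted(
    (d for d in os.listdir(output_dir) if d.startswith("checkpoint-")),
    key=lambda d: int(d.split("-")[1]),
)
second_epoch_dir = os.path.join(output_dir, ckpts[1])  # index 1 = end of epoch 2

tokenizer = AutoTokenizer.from_pretrained(second_epoch_dir)
model = T5ForConditionalGeneration.from_pretrained(second_epoch_dir)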