
Implementation of NAACL 2024 paper Unveiling the Generalization Power of Fine-Tuned Large Language Models


LHRYANG/Generalization_of_FT-LLM


Unveiling the Generalization Power of Fine-Tuned Large Language Models

1. Train the model

bash run_train.sh

To fine-tune the model with in-context learning instead, replace train.py with train_ptune.py in run_train.sh.
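For orientation, a minimal sketch of what run_train.sh might contain. The model name, data paths, and flag names below are placeholders, not the repository's actual arguments; check the script itself for the real options.

```sh
#!/bin/bash
# Hypothetical sketch of run_train.sh -- all flags below are placeholders.
python train.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --train_file data/train.json \
    --output_dir outputs/ft_model

# For in-context-learning-style fine-tuning, swap the entry script:
# python train_ptune.py <same placeholder arguments>
```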

2. Evaluate the model on various datasets

bash run_evaluate.sh
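The evaluation step scores one fine-tuned model on several datasets. A hedged sketch of the shape run_evaluate.sh might take; the evaluation script name, dataset list, and flags are all assumptions, not the repository's actual interface.

```sh
#!/bin/bash
# Hypothetical sketch of run_evaluate.sh -- script name, datasets, and
# flags are placeholders illustrating cross-dataset evaluation.
for dataset in sst2 agnews boolq; do
    python evaluate.py \
        --model_path outputs/ft_model \
        --dataset "$dataset" \
        --output_file "outputs/llama_ft_${dataset}.json"
done
```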

3. Assess the performance

Set the variable prefix in evaluate_cross.py, then run:

python evaluate_cross.py
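As a rough illustration of this step, the sketch below assumes evaluate_cross.py reads prediction files whose names share a common prefix and reports exact-match accuracy per dataset. The prefix value, file naming scheme, and JSON record format are all assumptions, not the repository's actual conventions.

```python
# Hypothetical sketch of the assessment step. File layout and record
# format ({"prediction": ..., "label": ...}) are assumptions.
import json
from pathlib import Path

prefix = "outputs/llama_ft"  # placeholder: set to your run's output prefix


def accuracy(preds, golds):
    """Fraction of predictions that exactly match the gold labels."""
    assert len(preds) == len(golds)
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)


def assess(prefix):
    """Collect <prefix>_<dataset>.json files and score each dataset."""
    base = Path(prefix)
    results = {}
    for path in sorted(base.parent.glob(base.name + "_*.json")):
        records = json.loads(path.read_text())
        preds = [r["prediction"] for r in records]
        golds = [r["label"] for r in records]
        results[path.stem] = accuracy(preds, golds)
    return results
```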
