
(ICLR 2024) IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models

Shaokun Zhang1*, Xiaobo Xia2*, Zhaoqing Wang2, Ling-Hao Chen3, Jiale Liu4, Qingyun Wu1, Tongliang Liu2

1Pennsylvania State University, 2The University of Sydney, 3Tsinghua University, 4Xidian University

*Equal Contribution.

Official implementation for paper IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models.

🏠 Abstract

In-context learning is a promising paradigm that utilizes in-context examples as prompts for the predictions of large language models. These prompts are crucial for achieving strong performance. However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs. To address this challenge, this paper introduces an influence-driven selective annotation method that aims to minimize annotation costs while improving the quality of in-context examples. The essence of our method is to select a pivotal subset from a large-scale unlabeled data pool to annotate for the subsequent sampling of prompts. Specifically, a directed graph is first constructed to represent the unlabeled data. Afterward, the influence of candidate unlabeled subsets is quantified with a diffusion process. A simple yet effective greedy algorithm for unlabeled data selection is then introduced: it iteratively selects the data point that provides the maximum marginal gain in quantified influence. Compared with previous efforts on selective annotation, our influence-driven method works in an end-to-end manner, avoids an intractable explicit balance between data diversity and representativeness, and enjoys theoretical support. Experiments confirm the superiority of the proposed method on various benchmarks, achieving better performance with lower time consumption during subset selection.
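
For intuition, below is a minimal, self-contained sketch of the selection idea described above: build a directed similarity graph over unlabeled examples, estimate a subset's influence with a Monte-Carlo diffusion simulation (an independent-cascade-style process, used here purely for illustration), and greedily add the point with the maximum marginal gain. All function names and the fixed activation probability are illustrative assumptions, not the repository's actual implementation.

import numpy as np

def build_knn_graph(embeddings: np.ndarray, k: int = 10) -> list[list[int]]:
    """Directed graph: each node points to its k most similar neighbors (cosine)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-edges
    return [np.argsort(row)[-k:].tolist() for row in sims]

def estimate_influence(graph, seeds, p=0.1, trials=100, rng=None):
    """Monte-Carlo estimate of how many nodes a seed set activates
    under a diffusion where each edge fires with probability p."""
    rng = rng or np.random.default_rng(0)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nbr in graph[node]:
                if nbr not in active and rng.random() < p:
                    active.add(nbr)
                    frontier.append(nbr)
        total += len(active)
    return total / trials

def greedy_select(graph, budget):
    """Iteratively add the node with the maximum marginal influence gain."""
    selected, current = [], 0.0
    for _ in range(budget):
        best, best_gain = None, -1.0
        for cand in range(len(graph)):
            if cand in selected:
                continue
            gain = estimate_influence(graph, selected + [cand]) - current
            if gain > best_gain:
                best, best_gain = cand, gain
        selected.append(best)
        current += best_gain
    return selected

The actual graph construction, diffusion model, and complexity optimizations follow the paper; this sketch only conveys the greedy marginal-gain structure of the algorithm.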

🛠️ Requirements

To install requirements:

conda env create -f ideal.yml
conda activate ideal
cd transformers
pip install -e .

This creates and activates the conda environment ideal used in our experiments, and installs our modified copy of transformers in editable mode.

🚀 How to run?

Activate the environment

conda activate ideal

End-to-end pipeline for experiments

a. Perform evaluations on MRPC, SST-5, MNLI, DBpedia, RTE, HellaSwag, and Xsum.

python main.py --model_cache_dir models \
               --data_cache_dir datasets \
               --task_name mrpc \
               --selective_annotation_method ideal \
               --annotation_size 18 \
               --cuda_id 0 \
               --model_name EleutherAI/gpt-j-6B

This runs IDEAL on MRPC with GPT-J 6B under an annotation budget of 18.

b. Perform evaluations on MWoZ

python main_mowz.py --model_key your_openai_key_here \
                    --annotation_size 18 \
                    --selection_1 ideal \
                    --selection_2 similar \
                    --cuda_id 0

This runs IDEAL on the MWoZ dataset with text-davinci-002 under an annotation budget of 18.
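
The --selection_2 similar option suggests that, after IDEAL selects the subset to annotate, the prompts for each test input are drawn from that annotated pool by embedding similarity. Below is a minimal sketch of what such a retrieval step might look like, assuming the annotated examples and the test input share one embedding space; the function name and signature are illustrative, not the repository's API.

import numpy as np

def retrieve_prompts(test_emb: np.ndarray,
                     pool_embs: np.ndarray,
                     n_shots: int = 4) -> np.ndarray:
    """Return indices of the n_shots annotated examples most similar
    (by cosine) to one test input, most similar first."""
    pool = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    query = test_emb / np.linalg.norm(test_emb)
    sims = pool @ query
    return np.argsort(sims)[-n_shots:][::-1]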

c. Perform evaluations on GeoQuery

python main_geo.py --model_key your_openai_key_here \
                   --annotation_size 18 \
                   --selective_annotation_method ideal \
                   --cuda_id 0

This runs IDEAL on the GeoQuery dataset with text-davinci-002 under an annotation budget of 18.

📚 License

This code is distributed under the Apache License. Note that our code depends on other libraries and datasets, each of which has its own license that must also be followed.

🌹 Acknowledgement

The code builds on MetaICL and Vote-k. Thanks to all contributors!

🤝🏼 Citation

If you find the code useful in your research, please cite us:

@article{zhang2023ide,
  title={IDEAL: Influence-Driven Selective Annotations Empower In-context Learners in Large Language Models},
  author={Zhang, Shaokun and Xia, Xiaobo and Wang, Zhaoqing and Chen, Ling-Hao and Liu, Jiale and Wu, Qingyun and Liu, Tongliang},
  journal={arXiv preprint arXiv:2310.10873},
  year={2023}
}

If you have any questions, please contact us at: shaokun [DOT] zhang [AT] psu [DOT] edu or xiaoboxia [DOT] uni [AT] gmail [DOT] com.
