
🌟 DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph 🌟

Project Banner

🚀 This is the official code for the paper titled DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph. This project aims to dynamically augment current LLM benchmarks through internal reasoning graph interpolation with fine-grained complexity control, addressing issues such as data contamination and the benchmarks' inability to adapt to LLMs' ever-evolving capabilities.

Authors: Zhehao Zhang, Jiaao Chen, Diyi Yang 👩‍💼👨‍💼

🌟 Abstract

The current paradigm of evaluating Large Language Models (LLMs) through static benchmarks comes with significant limitations, such as vulnerability to data contamination and a lack of adaptability to the evolving capabilities of LLMs. Therefore, evaluation methods that can adapt and generate evaluation data with controlled complexity are urgently needed. In this work, we introduce Dynamic Evaluation of LLMs via Adaptive Reasoning Graph Evolvement (DARG) to dynamically extend current benchmarks with controlled complexity and diversity. Specifically, we first extract the reasoning graphs of data points in current benchmarks and then perturb the reasoning graphs to generate novel testing data. Such newly generated test samples can have different levels of complexity while maintaining linguistic diversity similar to the original benchmarks. We further use a code-augmented LLM to ensure the label correctness of newly generated data. We apply our DARG framework to diverse reasoning tasks in four domains with 15 state-of-the-art LLMs. Experimental results show that almost all LLMs experience a performance decrease with increased complexity and certain LLMs exhibit significant drops. Additionally, we find that LLMs exhibit more biases when being evaluated via the data generated by DARG with higher complexity levels. These observations provide useful insights into how to dynamically and adaptively evaluate LLMs.

🛠️ Installation

To get started with this project, follow these steps:

Clone the Repository

```bash
git clone https://github.com/SALT-NLP/DARG.git
cd DARG
```

Create a Conda Environment

```bash
conda create --name darg python=3.11
conda activate darg
```

Install Dependencies

```bash
pip install -r requirements.txt
```

Prerequisites

To run the code as is, you need to have access to Azure OpenAI. Set the following environment variables to configure the API access:

```bash
export AZURE_OPENAI_ENDPOINT=<Your Azure OpenAI Endpoint>
export AZURE_OPENAI_KEY=<Your Azure OpenAI Key>
```

These variables let the LangChain library authenticate with Azure's OpenAI service. Alternatively, if you wish to use a different LLM provider, adjust the prepare_llm function in each utils.py file so that it returns a LangChain LLM instance llm on which you can call llm.invoke() to generate an output for a given prompt.
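For orientation, here is a minimal sketch of the interface prepare_llm is expected to satisfy. The StubLLM class below is a hypothetical stand-in, not part of this repo; in practice you would return a real LangChain model such as AzureChatOpenAI instead.

```python
# Illustrative sketch only: the real prepare_llm lives in each utils.py and
# returns a LangChain Azure OpenAI instance. This stub mimics the minimal
# interface DARG relies on: an object whose .invoke() maps a prompt to output.
class StubLLM:
    """Hypothetical stand-in for a LangChain LLM."""

    def invoke(self, prompt: str) -> str:
        # A real LLM would generate text here; the stub just echoes the prompt.
        return f"[model output for: {prompt}]"


def prepare_llm():
    # Swap this stub for e.g. langchain_openai.AzureChatOpenAI(...) in practice,
    # which reads AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_KEY from the environment.
    return StubLLM()


llm = prepare_llm()
print(llm.invoke("What is 2 + 3?"))
```

Any provider works as long as the returned object exposes an invoke method with this shape.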

📊 Original Benchmarks

We use the following four widely used benchmarks, covering four diverse reasoning domains:

  • Math Reasoning-GSM8K: One of the most widely used math reasoning datasets, containing high-quality, linguistically diverse grade school math word problems.
  • Social Reasoning-Bias Benchmark for QA (BBQ): A widely used dataset that highlights attested social biases against people belonging to protected classes along nine social dimensions.
  • Spatial Reasoning-BIG-Bench Hard (BBH) Navigate: A spatial reasoning dataset in which the LLM is given navigation steps and must determine whether the agent returns to the starting point.
  • Symbolic Reasoning-BIG-Bench Hard (BBH) Dyck Language: A symbolic reasoning dataset that requires the model to predict the sequence of closing parentheses for a Dyck-4 word missing its last few closing parentheses.
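As a concrete illustration of the Dyck Language task (a toy sketch, not code from this repo; the bracket set is an assumption), the missing closing sequence can be computed with a simple stack:

```python
# Illustrative only: computes the closing sequence a Dyck-4 prefix is missing.
# Not taken from the DARG codebase; the four bracket pairs are assumptions.
PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}


def missing_closers(prefix: str) -> str:
    """Return the closing brackets needed to complete a Dyck-4 prefix."""
    stack = []
    for ch in prefix:
        if ch in PAIRS:
            stack.append(ch)
        else:
            # ch must close the most recent opener, or the prefix is invalid.
            assert stack and PAIRS[stack.pop()] == ch, "invalid prefix"
    # Unmatched openers are closed in reverse (LIFO) order.
    return "".join(PAIRS[ch] for ch in reversed(stack))


print(missing_closers("([{<"))  # → ">}])"
```

The model being evaluated must produce this closing sequence from the prefix alone.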

📁 Directory Structure

```
DARG/
├── assets/                 # Project assets
├── GSM8K/                  # Reasoning graph construction and graph-to-text for GSM8K
│   ├── Width/              # Width modification of reasoning graphs
│   ├── Depth/              # Depth modification of reasoning graphs
│   └── Numerical/          # Numerical complexity modification of reasoning graphs
├── BBQ/                    # BBQ reasoning graph construction, modification, and graph-to-text
├── BBH_dyck/               # BBH Dyck Language reasoning graph construction, modification, and graph-to-text
├── BBH_navigate/           # BBH Navigate reasoning graph construction, modification, and graph-to-text
└── README.md               # Project README
```

To dynamically generate new data points by interpolating their internal reasoning structures, please follow the instructions in each directory corresponding to the dataset name.
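To give a flavor of what perturbing a reasoning structure means, here is a toy sketch (not DARG's actual graph schema; the node format and the depth-perturbation rule are assumptions) in which a GSM8K-style computation graph gains one extra reasoning step and the gold label is recomputed from the perturbed graph rather than guessed:

```python
# Toy sketch of a reasoning-graph perturbation for illustration only.
def evaluate(graph: dict) -> int:
    """Evaluate a computation graph of the form {node: (op, inputs)} or {node: value}."""
    cache = {}

    def val(node):
        if node not in cache:
            spec = graph[node]
            if isinstance(spec, tuple):
                op, inputs = spec
                args = [val(i) for i in inputs]
                cache[node] = sum(args) if op == "add" else args[0] * args[1]
            else:
                cache[node] = spec  # leaf: a numeric value from the problem
        return cache[node]

    return val("answer")


# Original problem: (3 apples + 4 apples), doubled.
graph = {"a": 3, "b": 4, "two": 2,
         "s": ("add", ["a", "b"]),
         "answer": ("mul", ["s", "two"])}
assert evaluate(graph) == 14

# Depth perturbation: insert one more reasoning step (5 extra apples) before
# the final multiplication, then recompute the label from the new graph.
graph_deeper = dict(graph)
graph_deeper["c"] = 5
graph_deeper["s2"] = ("add", ["s", "c"])
graph_deeper["answer"] = ("mul", ["s2", "two"])
print(evaluate(graph_deeper))  # → 24
```

The same idea extends to width (more parallel inputs per step) and numerical complexity (larger or harder values), with a code-augmented LLM verifying the recomputed labels as described in the abstract.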

📞 Contact

For inquiries regarding the framework or the paper, please contact:

📚 Citation

If you find our work useful or it contributes to your research, please cite our paper:

```bibtex
@misc{zhang2024dargdynamicevaluationlarge,
      title={DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph},
      author={Zhehao Zhang and Jiaao Chen and Diyi Yang},
      year={2024},
      eprint={2406.17271},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.17271}
}
```
