
Taking Advice from ChatGPT

Last Updated on 05/20/2023

Author: Peter Zhang

Paper link

This study treats GPT models as advisors in a judge-advisor system. We prompt InstructGPT with chain-of-thought (CoT) prompting on the MMLU benchmark and treat the model output as "advice." In our lab study, 118 student participants answer 2,828 questions and receive a chance to update their answers based on the advice. We analyze factors affecting the weight placed on advice and participants' confidence in the advised answers. This repository contains all of the collected data as well as code to reproduce the tables and figures in the paper.
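For reference, "weight on advice" (WOA) is commonly defined in the judge-advisor literature as the shift toward the advice relative to the total possible shift. Below is a minimal sketch of that standard formulation; the paper's exact operationalization may differ, since MMLU answers are categorical rather than continuous.

```python
def weight_on_advice(initial: float, advice: float, final: float):
    """Standard judge-advisor-system WOA.

    0 means the advice was ignored, 1 means it was fully adopted.
    Undefined when the advice equals the initial estimate.
    """
    if advice == initial:
        return None
    return (final - initial) / (advice - initial)
```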

Setup

  1. Install the required packages with `pip install -r requirements.txt`.
  2. Add your OpenAI API key to `ask_question.py` (see the sketch after this list).
  3. Download the MMLU benchmark from the official repository and move it to `hendrycks_test`.
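For step 2, the key is typically set near the top of the script. A minimal sketch using the pre-1.0 `openai` package; the exact variable placement in `ask_question.py` is an assumption:

```python
import openai

# Assumption: ask_question.py reads the key from a module-level assignment.
openai.api_key = "sk-..."  # replace with your OpenAI API key
```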

Data

Model Output

Survey Responses

  • The survey responses are downloaded from Qualtrics and placed in the `survey_responses` folder.
  • The updated survey responses from April 23rd are stored in `responses_0425.csv`.

Scripts

The `create_dataset.py` script creates a sample of the MMLU benchmark. The `evaluate.py` script supports multiple types of prompting; the prompt templates it uses are located in the `templates` folder.
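As a rough illustration of how a chain-of-thought prompt can be sent to an InstructGPT-series model with the pre-1.0 `openai` package; the template text, function name, and model choice here are illustrative, not taken from the repository:

```python
import openai

# Hypothetical CoT-style template; the real templates live in the templates folder.
COT_TEMPLATE = (
    "Question: {question}\n"
    "Choices: {choices}\n"
    "Let's think step by step."
)

def ask_with_cot(question, choices, model="text-davinci-002"):
    prompt = COT_TEMPLATE.format(question=question, choices=", ".join(choices))
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=256,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()
```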

To reproduce the model evaluation, run `cot_eval.sh` from the main repository folder.

The `clean_responses.py` script preprocesses the survey responses and creates the features used in the analysis.
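A minimal sketch of the kind of feature such a preprocessing step might derive; the column names below are placeholders, not the actual Qualtrics export schema:

```python
import pandas as pd

# Placeholder column names; the real Qualtrics export differs.
df = pd.read_csv("survey_responses/responses_0425.csv")

# Did the participant switch from their initial answer to the advised one?
df["switched_to_advice"] = (
    (df["final_answer"] == df["advice_answer"])
    & (df["initial_answer"] != df["advice_answer"])
)
```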

Notebooks

Outputs

All figures and tables are output to their respective folders.