open-ended-question-eval

Intro

This is my capstone project for the UT-Austin MSIS program (2023). Open-ended questions are widely used in human-subject surveys because they capture respondents' spontaneous answers. However, analyzing the responses is labor-intensive and time-consuming. To address this challenge, this project leverages NLP methods to build a tool that assists the analysis of open-ended question responses.

Code Extraction

I extracted the codes using PyABSA. The sample data is also acquired from that repository.
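PyABSA's aspect extractor returns one result per response, including the extracted aspect terms. As a rough sketch, those terms can be deduplicated into a code list like this (the result shape mirrors PyABSA's ATEPC output, but the helper name and sample data are illustrative assumptions, not this project's actual code):

```python
# Sketch: collect unique codes from PyABSA-style extraction results.
# The real results would come from a PyABSA aspect extractor, e.g. via
# its ATEPC checkpoint manager; the dicts below are illustrative only.

def aspects_to_codes(results):
    """Deduplicate aspect terms across responses, preserving first-seen order."""
    seen = []
    for item in results:
        for aspect in item.get("aspect", []):
            code = aspect.lower().strip()
            if code not in seen:
                seen.append(code)
    return seen

sample_results = [
    {"sentence": "The survey was too long.", "aspect": ["survey"]},
    {"sentence": "Loved the survey topics.", "aspect": ["survey", "topics"]},
]
print(aspects_to_codes(sample_results))  # → ['survey', 'topics']
```

Deduplicating while preserving order keeps the code list stable for later manual editing.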

Code Classification

I implemented the classifier using GPT-Neo, released by EleutherAI. The model is available on Hugging Face.
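Classification with a causal language model such as GPT-Neo is typically done by prompting it with the candidate codes and the response. A minimal sketch of what such a prompt might look like (the template and function name are assumptions for illustration, not the project's actual prompt):

```python
# Sketch: build a classification prompt for a causal LM such as GPT-Neo.
# The template is hypothetical; the project's real prompt may differ.
# The model itself could be loaded with Hugging Face transformers, e.g.:
#   from transformers import pipeline
#   generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

def build_prompt(response, codes):
    """Format a single-response classification prompt."""
    code_list = ", ".join(codes)
    return (
        f"Codes: {code_list}\n"
        f"Response: {response}\n"
        "Which code best describes this response?"
    )

print(build_prompt("The survey felt too long.", ["length", "clarity"]))
```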

Workflow

Usage

  1. Install packages
pip install -r requirement.txt
  2. Code extraction
python main.py --file_path <path to responses file> --extraction

The codes will be saved as codes.csv. You can edit the codes before you run classification.

  3. Classification
python main.py --file_path <path to responses file> --classification

The results will be saved as JSON files.

You can also run python main.py --file_path <path to responses file> --extraction --classification to perform both steps at once. However, you will not be able to edit the codes between extraction and classification.
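Once classification finishes, the JSON results can be loaded for a quick tally of how often each code was assigned. The key names below are hypothetical, since the actual output schema is defined by the classifier:

```python
import json
from collections import Counter

# Hypothetical results file content; the real schema may differ.
raw = json.dumps([
    {"response": "The survey felt too long.", "code": "length"},
    {"response": "Questions were confusing.", "code": "clarity"},
    {"response": "Way too many pages.", "code": "length"},
])

results = json.loads(raw)
counts = Counter(item["code"] for item in results)
print(counts.most_common())  # → [('length', 2), ('clarity', 1)]
```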
