This repository shares the data and code for the paper "Performance evaluation of automated scoring for the descriptive similarity response task" (Oka, Kusumi, & Utsumi, under review).
model
: A blank directory (if you run the scripts in the notebook directory, the trained models are saved here).

notebook
: Experiment scripts for Experiment 1 and Experiment 2. The analysis scripts include the calculation of Fleiss' kappa for Experiment 2, as well as an additional_analysis of Experiment 2.

result
: Results of Experiment 1, Experiment 2, and the additional_analysis. Although we carefully set the hyperparameters that affect the experimental results, note that the outputs also depend on environment-specific settings (e.g., the random seeds of Python, NumPy, scikit-learn, and torch).

data
: Data of Experiment 1 (responses: 20230213_PreExp_SST_ClassifyAnswer_v5.1; classification criteria: CreateSST_PreExp_SST_20230103_1258_ScoringKey_v5.1) and Experiment 2 (20230412_edited_dat_after_aggregate_v0.1).
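For reference, the Fleiss' kappa statistic computed in the analysis scripts can be sketched as below. This is an illustrative re-implementation, not the repository's own code: `counts[i][j]` is the number of raters who assigned subject `i` to category `j`, assuming every subject is rated by the same number of raters.

```python
from typing import List

def fleiss_kappa(counts: List[List[int]]) -> float:
    """Fleiss' kappa for a subjects-by-categories table of rating counts."""
    n_subjects = len(counts)
    n_raters = sum(counts[0])  # assumed constant across subjects
    n_categories = len(counts[0])
    # Mean observed agreement across subjects.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_subjects
    # Chance agreement from the overall category proportions.
    total = n_subjects * n_raters
    p_j = [sum(row[j] for row in counts) / total for j in range(n_categories)]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields kappa around 0.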
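A minimal sketch of fixing the environment-dependent random seeds mentioned above (illustrative, not the repository's own setup). NumPy and torch are seeded only if installed; scikit-learn draws its randomness from NumPy unless a `random_state` is passed explicitly.

```python
import random

def set_seed(seed: int = 42) -> None:
    """Seed the RNGs of the libraries named in the README, where available."""
    random.seed(seed)          # Python's built-in RNG
    try:
        import numpy as np
        np.random.seed(seed)   # NumPy (also used internally by scikit-learn)
    except ImportError:
        pass
    try:
        import torch
        torch.manual_seed(seed)  # torch CPU RNG (also seeds CUDA devices)
    except ImportError:
        pass

set_seed(42)
```

Calling `set_seed` once at the top of a script makes reruns reproducible on the same machine, though results may still vary across library versions or hardware.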
Contact: Dr. Ryunosuke Oka (oka.exp@gmail.com)