How well are students in your peer-assessed class grading? How do their grades compare to the grades that staff would award? Are there particular questions that are harder or easier for students to grade? How can you improve grading?
These scripts are (or will soon be) a collection of tools that help instructors answer questions like these. They are aimed at TAs and instructors for MOOCs that run on Coursera.
Right now, the script produces three graphs that show self-staff agreement, self-peer agreement, and self-peer correlation. Here are sample graphs from the HCI class.
It's easy. Use our script!
- Install R. It's easy to do for most platforms. Download R; then, on most platforms, you should just be able to double-click the installer.
- Once you have installed R, create a new directory. We're going to say it's called `instructor-insights`. Copy the `grading-accuracy.R` file to this folder. (You can also `git clone` this repository to do this.)
- Go to your Coursera class page and download your class's Peer Assessment data. The zip file you download for each assignment has two files in it: `submissions.csv` and `evaluations.csv`. Copy these files to a directory under `instructor-insights`. If you `git clone` this repository, we create placeholder folders for three assignments.
- Open up a terminal and navigate to the `instructor-insights` folder. Then type:

      Rscript grading-accuracy.R

  This will create graphs like ours in the `output` folder.
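The steps above end with one folder per assignment, each holding the two CSV files. The sketch below (Python, for illustration only) builds a toy version of that layout and loads each assignment's `evaluations.csv`, roughly the kind of walk the analysis script might do before plotting. The folder names `assignment-1` and `assignment-2` are hypothetical placeholders.

```python
# Toy sketch of the directory layout after setup, and of loading each
# assignment's evaluations.csv. Folder names are hypothetical placeholders.
import csv
from pathlib import Path

root = Path("instructor-insights")

# Build a stand-in for the unzipped Coursera exports.
for assignment in ("assignment-1", "assignment-2"):
    folder = root / assignment
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "submissions.csv").write_text("submission_id\n1\n")
    (folder / "evaluations.csv").write_text("submission_id,score\n1,9\n")

# Walk each assignment folder and load its evaluations.
loaded = {}
for folder in sorted(root.iterdir()):
    evaluations = folder / "evaluations.csv"
    if evaluations.exists():
        with open(evaluations) as f:
            loaded[folder.name] = list(csv.DictReader(f))

print(sorted(loaded))
```

Running `Rscript grading-accuracy.R` from `instructor-insights` is what actually produces the graphs; this sketch only shows the data layout it reads.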
Ask questions and report bugs at the instructor-insights mailing group.