How well are students in your peer-assessed class grading? How do their grades compare to the grades that staff would award? Are there particular questions that are harder or easier for students to grade? How can you improve grading?
These scripts are (soon to be) a collection of tools that help instructors answer questions like these. They are aimed at TAs and instructors of MOOCs that run on Coursera.
Right now, the script produces three graphs that show self-staff agreement, self-peer agreement, and self-peer correlation. Here are sample graphs from the HCI class.
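The statistics behind these graphs aren't spelled out here, but as a rough sketch of the idea in R — with made-up grades and hypothetical column names, not the script's actual code:

```r
# Rough sketch only (not grading-accuracy.R itself): given hypothetical
# per-submission columns `self_grade` and `peer_grade`, agreement and
# correlation between self and peer grades could be computed like this.
grades <- data.frame(
  self_grade = c(8, 9, 7, 10, 6),
  peer_grade = c(7, 9, 8, 10, 5)
)

# Fraction of submissions where the self grade matches the peer grade exactly:
agreement <- mean(grades$self_grade == grades$peer_grade)

# Pearson correlation between self and peer grades:
correlation <- cor(grades$self_grade, grades$peer_grade)

agreement
correlation
```

The same pattern applies to self-staff comparisons, swapping in a staff-grade column.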
How do I get these charts for my own class?
It's easy. Use our script!
Install R. Download R for your platform; on most platforms, you should just be able to double-click the installer.
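To confirm the install worked, you can check for `Rscript` from a terminal (this assumes the installer put R on your `PATH`):

```shell
# Print the R version if Rscript is available, otherwise say so.
if command -v Rscript >/dev/null 2>&1; then
  Rscript --version
else
  echo "Rscript not found on PATH"
fi
```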
Once you have installed R, create a new directory; we'll call it `instructor-insights`. Copy the `grading-accuracy.R` file to this folder. (You can also `git clone` this repository to do this.)
Go to your Coursera class page and download your class's Peer Assessment data. The zip file you download for each assignment has two files in it, one of which is `evaluations.csv`. Copy these files to a directory under `instructor-insights`. (If you `git clone` this repository, placeholder folders for three assignments are created for you.)
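Put together, the layout might look like the following sketch. The assignment folder name is illustrative, and the files are created empty here purely to show where things go:

```shell
# Illustrative layout only; your assignment folder names may differ.
mkdir -p instructor-insights/assignment1
# grading-accuracy.R sits at the top level; the CSVs from each
# assignment's zip go into that assignment's own folder:
touch instructor-insights/grading-accuracy.R
touch instructor-insights/assignment1/evaluations.csv
find instructor-insights | sort
```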
Open up a terminal and navigate to the `instructor-insights` folder. Then type:

```
Rscript grading-accuracy.R
```

This will create graphs like ours in the `instructor-insights` folder.
Support and Questions
Ask questions and report bugs at the instructor-insights mailing group.