Library for processing MOOC data dumps. Currently limited to Coursera data.

Published Findings

Papers published using this code on our MOOC corpus are available for download in this repository.

If you use this code for your own research, we request that you let us know via email or a GitHub issue, and that you cite us:

  • Chandrasekaran, M. K., Kan, M.-Y., Ragupathi, K., Tan, B. C. Y. 2015. “Learning instructor intervention from MOOC forums”. In Proceedings of the 8th International Conference on Educational Data Mining, Madrid, Spain. pp. 218-225. International Education Data Mining Society.
  • Chandrasekaran, M. K., Epp, C. D., Kan, M.-Y., Litman, D. 2017. “Using Discourse Signals for Robust Instructor Intervention Prediction”. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), San Francisco, USA. pp. 3415-3421. AAAI.

Coursera data export

To use this library you need to procure data dumps of your MOOCs from Coursera. Coursera exports data from a MOOC after completion for use by the university hosting it on the platform. These data dumps are .sql exports from MySQL databases. A typical data export consists of the following .sql files:
  1. <Full_Coursename>(<coursecode>)_SQL_anonymized_forum.sql
  2. <Full_Coursename>(<coursecode>)_SQL_hash_mapping.sql
  3. <Full_Coursename>(<coursecode>)_SQL_anonymized_general.sql
  4. <Full_Coursename>(<coursecode>)_SQL_unanonymizable.sql

A .txt file with clickstream data is also provided. We do not yet process it in this library:
5. <coursecode>_clickstream_export.gz

To replicate the results in our published papers, it is sufficient to import files (1), (2) and (3).

How to run this code?

Step-by-step instructions for running the experiments that replicate our EDM 2015 and AAAI 2017 papers are accessible here.


To use the library to process and analyse your own data, you will first need to install MySQL and ingest the .sql files into the database.
Command to ingest a .sql file using the MySQL command-line interface (CLI): mysql> source <path to .sql file>/<name of the .sql file>
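Equivalently, the dumps can be imported non-interactively from the shell. This is a sketch only; the database name ml_001 is an assumption for illustration, and the file name follows the export naming pattern shown above:

```shell
# Create a database for this course iteration, then import its forum dump into it
mysql -u root -p -e "CREATE DATABASE ml_001"
mysql -u root -p ml_001 < "<Full_Coursename>(<coursecode>)_SQL_anonymized_forum.sql"
```

Repeat the import for each of the .sql files in the export.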

Note that Coursera supplies a separate SQL export for every course. This means DDL statements are repeated across the files from different courses. More importantly, there is no field for coursecode in any of the tables. So, you have to either: i) create a separate MySQL database for each course dump (one per course iteration), or ii) add a 'coursecode' field to every table and issue UPDATE statements to populate the coursecode field after running the *.sql import.
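Option (ii) can be sketched as follows. The table name forum_posts and the course code 'ml-001' are assumptions for illustration; the actual table names come from the Coursera dump:

```sql
-- Add a coursecode column to one imported table (repeat for every table)
ALTER TABLE forum_posts ADD COLUMN coursecode VARCHAR(64);

-- Tag the rows just imported from this course's dump
UPDATE forum_posts SET coursecode = 'ml-001' WHERE coursecode IS NULL;
```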


The scripts require Perl 5 and some dependent Perl packages to be installed.

For Windows users
Install Strawberry Perl from here or ActivePerl from here

For Linux, Mac users
Linux and Mac users should already have Perl installed as part of the OS. You can check this with the command perl -v in your terminal.

Dependent Perl Modules (Packages) to install

CPAN provides tools to easily install Perl modules. Please see this step-by-step tutorial.
The packages to install are:
  • DBI
  • FindBin
  • Getopt::Long
  • Encode
  • HTML::Entities
  • Lingua::EN::Sentence
  • Lingua::EN::Tokenizer::Offsets
  • Lingua::StopWords
  • Lingua::EN::StopWordList
  • Lingua::Stem::Snowball
  • Lingua::EN::Ngram
  • Lingua::EN::Bigram ## Fails on Linux CentOS 6
  • Lingua::EN::Tagger
  • Lingua::EN::PluralToSingular
  • Config::Simple
  • File::Remove
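Assuming the stock cpan client that ships with Perl 5 is configured, the modules above can be installed in one pass:

```shell
# Install the dependent modules; the first run may prompt for CPAN configuration
cpan DBI FindBin Getopt::Long Encode HTML::Entities \
    Lingua::EN::Sentence Lingua::EN::Tokenizer::Offsets Lingua::StopWords \
    Lingua::EN::StopWordList Lingua::Stem::Snowball Lingua::EN::Ngram \
    Lingua::EN::Bigram Lingua::EN::Tagger Lingua::EN::PluralToSingular \
    Config::Simple File::Remove
```

On CentOS 6, Lingua::EN::Bigram may fail to build (see the note above); if so, install the remaining modules first and skip it.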

