This is a repository containing code and data for the paper:

U. Upadhyay, G. Lancashire, C. Moser and M. Gomez-Rodriguez. Large-scale randomized experiment reveals machine learning helps people learn and remember more effectively. npj Science of Learning, 6, Article number: 26 (2021).

Spaced Selection is a method for selecting the items a user should revise during a given session so as to optimize learning.

The modeling of human memory is based on our previous work, Memorize. However, instead of choosing the optimal time to review each item, in this work we let the user select the session time, and we choose the set of items she will study during that session.
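Concretely, under a half-life memory model (as in Duolingo's half-life regression, on which this line of work builds), the probability of recalling an item decays exponentially with the time elapsed since the last review. A minimal sketch, with illustrative function names that are not from this repository:

```python
def recall_probability(delta_days: float, half_life_days: float) -> float:
    """P(recall) = 2^(-delta / h): exponential forgetting with half-life h.
    After exactly one half-life, the recall probability is 0.5."""
    return 2.0 ** (-delta_days / half_life_days)

def forgetting_probability(delta_days: float, half_life_days: float) -> float:
    """Complement of recall; Spaced Selection ranks or samples items by this."""
    return 1.0 - recall_probability(delta_days, half_life_days)
```

For example, an item with a half-life of 2 days, last reviewed 2 days ago, has a recall probability of 0.5.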

This repository contains scripts for analyzing Spaced Selection and the baselines, as well as code to run simulations comparing the performance of different item-selection strategies.

The model was trained on data from Swift's popular driving-learning app. The data generated during the randomized trial of the app with different learning algorithms is available for download here.

To prepare, download all the CSV files to the data/spaced-algorithms folder. An example file with one day of data has already been added to the folder. An IPython notebook has been added to showcase the performance of the learners for different algorithms.
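For example, assuming the downloaded CSVs sit in a local downloads/ directory (an illustrative path, not one this repository prescribes), preparing the data folder looks like:

```shell
# Illustrative paths: change downloads/ to wherever the CSVs were saved.
mkdir -p data/spaced-algorithms
cp downloads/*.csv data/spaced-algorithms/
```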

Unless otherwise stated, the code should be run from the root folder.

Installing Dependencies

pip install -r code/requirements.txt

Swift data to HLR format

➔ ./ --help

  Convert CSV files in INPUT_DIR from Swift's format to Duolingo's HLR
  format and save them in OUTPUT_HLR_CSV, and extract each attempt,
  saving it in OUTPUT_SIM_CSV.

  --verbose / --no-verbose  Verbose output.  [default: True]
  --force / --no-force      Overwrite output.  [default: False]
  --min-count INTEGER       Minimum number of times a user must have practiced
                            a question to include it for training/prediction.
                            [default: 1]
  --results-dir TEXT        The results folder for Lineage.  [default:
  --help                    Show this message and exit.

The processed folder contains an example of learned difficulty parameters for the HLR model. However, the user sessions file is not included with the repository.

HLR Parameter learning

➔ ./ --help
usage: [-h] [-b] [-l] [-t] [-m METHOD] [-x MAX_LINES]
                       [-h_reg HLWT] [-l2wt L2WT] [-bins BINS]
                       [-epochs EPOCHS] [-shuffle SHUFFLE]
                       [-training_fraction TRAINING_FRACTION] [-l_rate L_RATE]
                       [-o OUTPUT_FOLDER]

Fit a SpacedRepetitionModel to data.

positional arguments:
  input_file            log file for training

optional arguments:
  -h, --help            show this help message and exit
  -b                    omit bias feature
  -l                    omit lexeme features
  -t                    omit half-life term
  -m METHOD             hlr, lr, leitner, pimsleur, hlr-pw, power
  -x MAX_LINES          maximum number of lines to read (for dev)
  -h_reg HLWT           h regularization weight
  -l2wt L2WT            L2 regularization weight
  -bins BINS            File where the bins boundaries are stored (in days).
  -epochs EPOCHS        Number of epochs to train for.
  -shuffle SHUFFLE      The seed to use to shuffle data, -1 for no shuffling.
  -training_fraction TRAINING_FRACTION
                        The fraction of data to use for training.
  -l_rate L_RATE        Initial learning rate.
  -o OUTPUT_FOLDER      Where to save the results.
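For reference, half-life regression (HLR) parameterises the half-life as h = 2^(θ·x), where x holds the learner's past correct/incorrect counts plus per-item difficulty features. A sketch under those assumptions (the clipping bounds mirror Duolingo's public HLR code; the variable names are illustrative, not this script's):

```python
def predict_half_life(weights: dict, features: dict,
                      min_h: float = 15.0 / (24 * 60),  # 15 minutes, in days
                      max_h: float = 274.0) -> float:
    """h = 2^(theta . x), clipped to a plausible range (in days)."""
    dot = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return min(max(2.0 ** dot, min_h), max_h)

def predict_recall(weights: dict, features: dict, delta_days: float) -> float:
    """P(recall) after delta_days, under the HLR half-life above."""
    return 2.0 ** (-delta_days / predict_half_life(weights, features))
```

With all weights zero, the predicted half-life is 2^0 = 1 day, so recall probability one day after a review is 0.5.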

Grid execution

This is a helper script for running the model on a SLURM engine, if one is available, for easy parameter search.

➔ ./slurm/ --help

  --slurm-output-dir TEXT  Where to save the output  [default: slurm-output]
  --dry / --no-dry         Dry run.  [default: True]
  --epochs INTEGER         Epochs.  [default: 500]
  --mem INTEGER            How much memory will each job need (MB).  
                           [default: 10000]
  --timeout INTEGER        Minutes to timeout.
  --shuffle INTEGER        Seed to shuffle training/testing using.
  --l-rate FLOAT           Initial learning rate.
  --help                   Show this message and exit.

HLR model evaluation

➔ ./ --help

  Read all *.detailed files from RESULTS_DIR, calculate the metrics, and
  save output to OUTPUT_CSV.

  --debug / --no-debug  Run in single threaded mode for debugging.
  --help                Show this message and exit.


➔ ./ --help

  Run the simulation with the trained memory-model weights given in the
  DIFFICULTY_PARAMS file.

  It also reads the user session information from USER_SESSIONS_CSV to
  generate feasible teaching times.

  Finally, after running the simulations for 10 seeds, the results are saved.

  --seed INTEGER                  Random seed for the experiment.  [default: 42]
  --difficulty-kind [HLR|POWER]   Which memory model to assume for the
                                  difficulty_params.  [default: HLR]
  --student-kind [HLR|POWER|REPLAY]
                                  Which memory model to assume for the
                                  student.  [default: HLR]
                                  Which teacher model to simulate.  
                                  [default: RANDOMIZED]
  --num-users INTEGER             How many users to run the experiments for.
                                  [default: 100]
  --user-id TEXT                  Which user to run the simulation for. [Runs
                                  for the user with maximum attempts]
  --force / --no-force            Whether to overwrite output file.  
                                  [default: False]
  --help                          Show this message and exit.

The required files DIFFICULTY_PARAMS (an example is included in the processed/ folder) and USER_SESSIONS_CSV are produced by the scripts above.

The different versions of the Spaced Selection algorithm that can be simulated are:

  • SPACED_RANKING chooses the top-k items in terms of forgetting probability (that depends on the current half-life factor) for each session deterministically, where k can be tuned/modified per session (produced by the simulator or sampled from real data).

  • SPACED_SELECTION_DYN chooses the k items probabilistically with each item's selection proportional to the probability of forgetting it (that depends on the current half-life factor) for each session, where k can be tuned/modified per session (produced by the simulator or sampled from real data).

  • SPACED_SELECTION samples k items at random proportionally to the forgetting probability (that depends on the current half-life factor) for each session, where k is set by the population average size of sessions.
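As an illustration of the probabilistic variants, sampling k distinct items with weights proportional to each item's forgetting probability might look like the following sketch (illustrative only; not the repository's implementation):

```python
import random

def sample_spaced_selection(items, half_lives, deltas, k, rng=None):
    """Sample up to k distinct items, each draw proportional to the item's
    forgetting probability 1 - 2^(-delta / h)."""
    rng = rng or random.Random()
    p_forget = [1.0 - 2.0 ** (-d / h) for d, h in zip(deltas, half_lives)]
    pool = list(range(len(items)))
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(p_forget[i] for i in pool)
        if total <= 0.0:        # everything still remembered: stop early
            break
        r = rng.random() * total
        acc = 0.0
        for i in pool:
            acc += p_forget[i]
            if acc > r:
                chosen.append(items[i])
                pool.remove(i)
                break
    return chosen
```

An item reviewed just now (delta = 0) has forgetting probability 0 and is never sampled, while long-neglected items dominate the draw.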

