This repository hosts the source code for our master's thesis in Computer Science at the IT University of Copenhagen, spring 2021.
The thesis explores the effects of knowledge distillation, pruning, and quantization on the RoBERTa language model, evaluated on SST-2, QQP, and MNLI. The goal is to determine how far a fine-tuned RoBERTa model can be compressed while still delivering acceptable performance on these tasks.
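As a rough, standalone illustration of one of these techniques (this is not the thesis code), the sketch below applies PyTorch's dynamic int8 quantization to a RoBERTa sequence-classification model loaded via Hugging Face Transformers. The `roberta-base` checkpoint is used only as a stand-in; in practice you would load a checkpoint fine-tuned on SST-2, QQP, or MNLI.

```python
# Illustrative sketch only: dynamic int8 quantization of a RoBERTa classifier.
# Assumes `torch` and `transformers` are installed; "roberta-base" stands in
# for a task-specific fine-tuned checkpoint.
import os

import torch
from transformers import RobertaForSequenceClassification

model = RobertaForSequenceClassification.from_pretrained("roberta-base")
model.eval()

# Quantize the Linear layers' weights to int8; activations are quantized
# dynamically at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m: torch.nn.Module) -> float:
    """Rough model size measured by the serialized state dict."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"fp32 model: {size_mb(model):.1f} MB")
print(f"int8 model: {size_mb(quantized_model):.1f} MB")
```

Dynamic quantization is the simplest of the three compression approaches to try, since it needs no retraining; distillation and pruning require their own training or fine-tuning loops.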