
Experimenting with knowledge distillation, pruning, and quantization of the RoBERTa language model on SST-2, QQP, and MNLI. Master's Thesis in Computer Science at IT-University, Copenhagen.


pocketML/lil_bobby


Lil' Bobby - Compressing RoBERTa_large to small networks

This repository hosts the source code for our master's thesis in Computer Science at IT-University, Copenhagen in spring 2021.

This thesis explores the effects of knowledge distillation, pruning, and quantization of the RoBERTa language model on SST-2, QQP, and MNLI. The goal is to determine the extent to which a fine-tuned RoBERTa model can be compressed while retaining acceptable performance.
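As a rough illustration of the kinds of compression steps investigated here (this is not code from this repository), the sketch below applies magnitude pruning followed by post-training dynamic quantization to a RoBERTa classifier using PyTorch and Hugging Face Transformers. The model name, number of labels, and 30% sparsity level are placeholder assumptions chosen for the example.

```python
import torch
import torch.nn.utils.prune as prune
from transformers import RobertaForSequenceClassification

# Load a RoBERTa sequence classifier (placeholder checkpoint, not this repo's fine-tuned model).
model = RobertaForSequenceClassification.from_pretrained("roberta-large", num_labels=2)

# Magnitude pruning: zero out the 30% smallest-magnitude weights in every linear layer.
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the mask into the weights permanently

# Post-training dynamic quantization: store linear-layer weights as int8 (CPU inference only).
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```

Knowledge distillation, the third technique explored in the thesis, additionally requires training a smaller student network against the fine-tuned RoBERTa teacher's outputs and is not shown in this sketch.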
