Best viewed on the website
NOTE: There might be dependency issues with the torchtext version used in the notebooks; please see this, which may solve the issue
-
This describes all the basics of a neural network: how gradient descent works, the learning rate, fully connected layers, the chain rule, etc.
-
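As a quick illustration of those basics, here's a minimal gradient-descent loop on a single fully connected neuron, written in PyTorch. The toy data and hyperparameters (`lr`, step count) are my own for illustration, not from the notebook.

```python
import torch

# Toy data: learn y = 3x + 2 with one fully connected neuron.
x = torch.randn(100, 1)
y = 3 * x + 2

w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
lr = 0.1  # learning rate

for step in range(200):
    y_hat = x * w + b                 # forward pass
    loss = ((y_hat - y) ** 2).mean()  # mean squared error
    loss.backward()                   # backprop applies the chain rule
    with torch.no_grad():             # gradient descent update
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())  # should end up near 3.0 and 2.0
```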
We built a Neural Network in a damn spreadsheet :)
-
Basics of PyTorch. Here I built a custom MNIST model that can classify an MNIST image and also add the predicted digit to a random integer.
-
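A rough sketch of how such a two-output model could be wired up. The layer sizes and names here are assumptions for illustration, not the notebook's exact architecture.

```python
import torch
import torch.nn as nn

class MNISTAddNet(nn.Module):
    """Predicts the MNIST digit, plus the sum of that digit and a random integer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU()
        )
        self.digit_head = nn.Linear(128, 10)       # digit classes 0-9
        # image features + random number (one-hot, 10) -> sum 0-18 (19 classes)
        self.sum_head = nn.Linear(128 + 10, 19)

    def forward(self, image, rand_onehot):
        h = self.features(image)
        digit_logits = self.digit_head(h)
        sum_logits = self.sum_head(torch.cat([h, rand_onehot], dim=1))
        return digit_logits, sum_logits

model = MNISTAddNet()
img = torch.randn(4, 1, 28, 28)
rnd = torch.nn.functional.one_hot(torch.randint(0, 10, (4,)), 10).float()
digit_logits, sum_logits = model(img, rnd)
```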
Built an LSTM from scratch and trained an IMDb sentiment analysis classifier using an RNN & LSTM with torchtext.
-
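For reference, a from-scratch LSTM cell boils down to the four standard gate equations. This is a sketch of those equations, not necessarily the notebook's exact code.

```python
import torch
import torch.nn as nn

class LSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        # One linear map produces all four gates at once.
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        z = self.gates(torch.cat([x, h], dim=1))
        i, f, g, o = z.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # forget old state, write new candidate
        h = o * torch.tanh(c)           # expose gated hidden state
        return h, c

cell = LSTMCell(8, 16)
h = c = torch.zeros(2, 16)
h, c = cell(torch.randn(2, 8), (h, c))
```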
Trained an LSTM model on the SST dataset and applied a lot of NLP augmentations.
-
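Two classic EDA-style text augmentations (random swap, random deletion), sketched in plain Python; the notebook's actual set of augmentations may differ.

```python
import random

def random_swap(tokens, n=1):
    """Swap two randomly chosen tokens, n times."""
    tokens = tokens.copy()
    for _ in range(n):
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Drop each token with probability p, keeping at least one."""
    kept = [t for t in tokens if random.random() > p]
    return kept or [random.choice(tokens)]

print(random_swap("this movie was great".split()))
print(random_deletion("this movie was great".split()))
```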
A Simple Encoder-Decoder Architecture, but for Classification! I got to learn how encoder-decoders work, and how the feature vector is used to compress and extract information :)
-
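The idea in miniature: the encoder compresses the whole input into one fixed-size feature vector, and the "decoder" here is just a classifier head that extracts the label from it. A minimal sketch with made-up sizes:

```python
import torch
import torch.nn as nn

class EncoderDecoderClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # "Decoder" reads only the compressed feature vector.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, tokens):
        _, (h, _) = self.encoder(self.embed(tokens))
        feature = h[-1]          # fixed-size feature vector for the sequence
        return self.decoder(feature)

model = EncoderDecoderClassifier(vocab_size=1000)
logits = model(torch.randint(0, 1000, (4, 20)))
```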
Simple Sequence-to-Sequence Model for Question Answering and Similar Question Generation. Also includes a redo of 5 without any augmentation. 05 Redo -
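The generation half of a seq2seq model is typically a greedy decode loop: feed the last predicted token back in until `<eos>`. A sketch with a hypothetical `decoder(token, hidden) -> (logits, hidden)` interface, not the notebook's exact model:

```python
import torch

def greedy_decode(decoder, hidden, sos_idx, eos_idx, max_len=30):
    """Greedy decoding: repeatedly feed the last prediction back in."""
    token = torch.tensor([sos_idx])
    output = []
    for _ in range(max_len):
        logits, hidden = decoder(token, hidden)
        token = logits.argmax(dim=-1)
        if token.item() == eos_idx:
            break
        output.append(token.item())
    return output

# Toy stand-in decoder: random logits over a 10-token vocab.
def toy_decoder(token, hidden):
    return torch.randn(1, 10), hidden

print(greedy_decode(toy_decoder, None, sos_idx=0, eos_idx=1))
```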
Introduction to the new torchtext APIs and the deprecation of `torchtext.legacy`. Here we convert a few notebooks with legacy code to the modern torchtext `0.9.0+` API.
-
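The gist of the migration: `Field`/`BucketIterator` go away, and you work with raw iterables plus `get_tokenizer` and `build_vocab_from_iterator`. A sketch; note that the vocab helpers shown here (`specials`, `set_default_index`) come from slightly later releases (0.10+), so exact names may differ in 0.9.0 itself.

```python
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator

tokenizer = get_tokenizer("basic_english")
corpus = ["A great movie!", "A terrible movie..."]

# No Field / BucketIterator: build the vocab straight from token iterators.
vocab = build_vocab_from_iterator(
    (tokenizer(line) for line in corpus), specials=["<unk>"]
)
vocab.set_default_index(vocab["<unk>"])

print(vocab(tokenizer("a great film")))  # unseen "film" maps to <unk>
```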
Classification Metrics: F1, Precision, Recall, Confusion Matrix
Machine Translation Metrics: Perplexity, BLEU, BERTScore
-
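For reference, the classification metrics fall straight out of scikit-learn, and perplexity is just the exponential of the mean cross-entropy loss (BLEU and BERTScore have their own packages, e.g. nltk and bert-score):

```python
import math
from sklearn.metrics import (precision_score, recall_score,
                             f1_score, confusion_matrix)

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

print(precision_score(y_true, y_pred))
print(recall_score(y_true, y_pred))
print(f1_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred))

# Perplexity is exp of the mean cross-entropy loss of the language model.
mean_ce_loss = 2.3  # example value
print(math.exp(mean_ce_loss))
```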
Here we modified the Seq2Seq model to use an attention mechanism.
-
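The core of the change: at every decoder step, score the decoder state against each encoder output, softmax the scores into weights, and feed the weighted sum (the context vector) to the decoder. A sketch of Bahdanau-style additive attention, with dimensions of my choosing:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention: score = v^T tanh(W [h_dec; h_enc])."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.W = nn.Linear(2 * hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, dec_hidden, enc_outputs):
        # dec_hidden: (batch, hidden); enc_outputs: (batch, src_len, hidden)
        src_len = enc_outputs.size(1)
        dec = dec_hidden.unsqueeze(1).expand(-1, src_len, -1)
        scores = self.v(torch.tanh(self.W(torch.cat([dec, enc_outputs], dim=2))))
        weights = torch.softmax(scores.squeeze(2), dim=1)        # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs)   # weighted sum
        return context.squeeze(1), weights

attn = AdditiveAttention(16)
ctx, w = attn(torch.randn(2, 16), torch.randn(2, 7, 16))
```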
Turns out there is a lot to learn about attention mechanisms: there's the Bahdanau et al. paper and the Luong et al. paper. In this assignment we implement a version of the Luong general attention model.
-
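Luong's "general" variant replaces the additive MLP above with a single bilinear form, score(h_t, h_s) = h_t^T W h_s. A minimal sketch:

```python
import torch
import torch.nn as nn

class LuongGeneralAttention(nn.Module):
    """Luong 'general' attention: score(h_t, h_s) = h_t^T W h_s."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.W = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, dec_hidden, enc_outputs):
        # dec_hidden: (batch, hidden); enc_outputs: (batch, src_len, hidden)
        scores = torch.bmm(self.W(enc_outputs),
                           dec_hidden.unsqueeze(2)).squeeze(2)  # (batch, src_len)
        weights = torch.softmax(scores, dim=1)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights

attn = LuongGeneralAttention(16)
ctx, w = attn(torch.randn(2, 16), torch.randn(2, 7, 16))
```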
A Vanilla Implementation of the AIAYN Paper (https://arxiv.org/abs/1706.03762)
-
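The paper's core building block is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V, which is only a few lines:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # scaled dot products
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

q = k = v = torch.randn(2, 4, 10, 8)
out = scaled_dot_product_attention(q, k, v)
```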
Recap of Attention Is All You Need
-
BERT: used it for a question answering model and a sentence classification model. BART: used it for a paraphrasing model.
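With Hugging Face transformers, the BERT question-answering side can be exercised in a few lines. The checkpoints below are generic stand-ins, not the models from these notebooks, and a real paraphraser would be a BART model fine-tuned on paraphrase pairs:

```python
from transformers import pipeline

# BERT-style extractive question answering on a SQuAD-tuned checkpoint.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")
print(qa(question="What was BART used for?",
         context="BERT was used for question answering; BART for paraphrasing."))

# BART-style seq2seq generation; facebook/bart-base is only a stand-in here,
# since paraphrasing needs a fine-tuned checkpoint.
gen = pipeline("text2text-generation", model="facebook/bart-base")
print(gen("the movie was really good"))
```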