This project was completed as part of the Honors portion of the Sequence Models course on Coursera.
Credit to DeepLearning.AI and the Coursera platform for providing the course materials and guidance.
In this notebook, my objective is to explore the pre-processing steps applied to raw text before it is passed to the encoder and decoder blocks of the Transformer architecture.
By completing this assignment, I will be able to create visualizations of positional encodings that show how they modify word embeddings, and I will know how to prepare text data for the Transformer model so that it produces accurate, meaningful representations.
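As a reference point for the visualizations described above, here is a minimal sketch of the sinusoidal positional encoding from "Attention Is All You Need" (sine on even dimensions, cosine on odd dimensions), which is the scheme this assignment is based on. The function name and parameters are my own choices for illustration:

```python
import numpy as np

def positional_encoding(positions, d_model):
    """Return a (positions, d_model) matrix of sinusoidal positional encodings."""
    pos = np.arange(positions)[:, np.newaxis]        # (positions, 1)
    i = np.arange(d_model)[np.newaxis, :]            # (1, d_model)
    # Each pair of dimensions (2k, 2k+1) shares the rate 1 / 10000^(2k / d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (i // 2)) / np.float32(d_model))
    angle_rads = pos * angle_rates                   # (positions, d_model)
    angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])  # sine on even indices
    angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])  # cosine on odd indices
    return angle_rads

pe = positional_encoding(50, 128)
```

Plotting `pe` as a heatmap (positions on one axis, embedding dimensions on the other) reproduces the characteristic banded pattern the assignment visualizes; every value lies in [-1, 1], so the encodings can be added directly to word embeddings without changing their scale.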