# Build a Transformer from Scratch for Text Summarization

This project implements a text summarization model built on the Transformer architecture. The model generates concise, coherent summaries of input texts. The implementation uses PyTorch to build and train the model, and TorchText for text processing and tokenization.

## Installation

### Prerequisites
- Python 3.8+
- PyTorch
- TorchText
- pandas
- NumPy
 
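These dependencies can typically be installed with pip. Exact versions are not pinned by this README; note that each TorchText release is only compatible with specific PyTorch versions, so treat this as a starting point:

```bash
pip install torch torchtext pandas numpy
```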
## Model Architecture

The model uses a Transformer-based architecture with the following components (a code sketch follows the list):
- Encoder: Extracts features from the input text.
- Decoder: Generates summaries based on the encoded features.
- Multi-Head Attention: Allows the model to focus on different parts of the input sequence.
- Positional Encoding: Provides information about the position of words in the sequence.
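As an illustration, here is a minimal sketch of how the positional encoding and multi-head attention components might be implemented in PyTorch. The class names, hyperparameters (`d_model`, `num_heads`, `max_len`), and tensor shapes are assumptions for the sketch, not this project's exact code:

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Adds sinusoidal position information to token embeddings (batch-first)."""
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model)
        )
        pe[:, 0::2] = torch.sin(position * div_term)  # even indices: sine
        pe[:, 1::2] = torch.cos(position * div_term)  # odd indices: cosine
        self.register_buffer("pe", pe.unsqueeze(0))   # (1, max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        return x + self.pe[:, : x.size(1)]

class MultiHeadAttention(nn.Module):
    """Scaled dot-product attention computed in parallel across several heads."""
    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.d_head = d_model // num_heads
        self.num_heads = num_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, query, key, value, mask=None):
        batch = query.size(0)

        # Project, then reshape to (batch, heads, seq_len, d_head).
        def split(x, proj):
            return proj(x).view(batch, -1, self.num_heads, self.d_head).transpose(1, 2)

        q, k, v = split(query, self.q_proj), split(key, self.k_proj), split(value, self.v_proj)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float("-inf"))
        attn = torch.softmax(scores, dim=-1)
        # Merge heads back into a single (batch, seq_len, d_model) tensor.
        out = (attn @ v).transpose(1, 2).contiguous().view(batch, -1, self.num_heads * self.d_head)
        return self.out_proj(out)

# Quick shape check (self-attention):
x = torch.randn(2, 10, 512)                       # (batch, seq_len, d_model)
h = PositionalEncoding(d_model=512)(x)
out = MultiHeadAttention(d_model=512, num_heads=8)(h, h, h)
print(out.shape)                                  # torch.Size([2, 10, 512])
```

In the full model, stacked encoder and decoder layers combine these modules with feed-forward sublayers, residual connections, and layer normalization; the decoder additionally applies a causal mask so each position can attend only to earlier summary tokens.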