This project implements a BERT-based Question Answering system on the SQuAD 2.0 dataset. It fine-tunes pre-trained language models to answer questions from a provided context, including recognizing questions that the context cannot answer (a defining feature of SQuAD 2.0).
- BERT
- SQuAD 2.0
- Fine-Tuning
- Natural Language Processing
- F1 Score: 76.70
- Exact Match (EM) Score: 73.85
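The F1 and Exact Match numbers above follow the standard SQuAD evaluation convention: predictions and references are normalized (lowercased, punctuation and articles stripped) before comparison. A minimal sketch of both metrics, written here in plain Python for illustration (the official SQuAD evaluation script is the authoritative version):

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD convention)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    """Token-level F1 between normalized prediction and reference."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    if not pred_tokens or not ref_tokens:
        # Both empty (the SQuAD 2.0 no-answer case) scores 1.0; otherwise 0.0.
        return float(pred_tokens == ref_tokens)
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Scores are averaged over the dataset (taking the max over the gold answers per question) to produce the figures reported above.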
- Install required packages (`transformers`, `torch`, `tqdm`).
- Mount Google Drive and download the SQuAD 2.0 dataset.
- Preprocess the data, train the model, and evaluate performance.
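One preprocessing detail worth noting: SQuAD contexts often exceed BERT's maximum sequence length, so they are split into overlapping windows (the "doc stride" technique) so that an answer near a window boundary still appears whole in at least one chunk. A minimal sketch of the idea; the window and stride sizes here are illustrative defaults, not necessarily the ones this project uses:

```python
def chunk_with_stride(tokens, max_len=384, stride=128):
    """Split a long token sequence into overlapping windows.

    Consecutive windows overlap by `stride` tokens, so any answer span
    shorter than the stride is guaranteed to be fully contained in some window.
    """
    chunks = []
    start = 0
    step = max_len - stride
    while True:
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
        start += step
    return chunks
```

During training, each window becomes its own example; at evaluation time, predictions from all windows of the same context are pooled and the highest-scoring span wins.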
- Use the trained model to answer questions on new contexts.
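At inference time, a BERT QA head produces a start logit and an end logit per token, and the answer is the highest-scoring valid span; in SQuAD 2.0 the `[CLS]` position (index 0) serves as the "no answer" candidate. A simplified sketch of that decoding step, assuming logits are plain Python lists (in practice they are tensors, and the threshold is usually tuned on the dev set):

```python
def select_answer(start_logits, end_logits, max_answer_len=30, null_threshold=0.0):
    """Return the best (start, end) token span, or None for "no answer".

    Index 0 plays the role of the [CLS] / null-answer candidate, as in
    SQuAD 2.0 decoding. `null_threshold` trades answer recall against
    false positives on unanswerable questions.
    """
    null_score = start_logits[0] + end_logits[0]
    best_score, best_span = float("-inf"), None
    n = len(start_logits)
    for s in range(1, n):
        for e in range(s, min(s + max_answer_len, n)):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best_score, best_span = score, (s, e)
    if best_score - null_score < null_threshold:
        return None  # the model prefers "no answer"
    return best_span
```

The returned token indices are then mapped back to character offsets in the original context to produce the answer string.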