Naive implementation of DrQA and BERT-base finetune on SQuAD 2.0


Tsingularity/Naive_SQuAD


Naive_SQuAD

This repo contains the data pre-processing and model training code for the following two methods:

Naive DrQA

We use the Document Reader architecture from DrQA to predict the answer text span in the context for a given question. For context encoding, however, we use only fixed word embeddings and aligned question embeddings as inputs.
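The aligned question embedding in DrQA attends each context word over the question words using their word embeddings, producing a soft-aligned summary of the question for every context position. A minimal PyTorch sketch of that attention step is below; the class name, the shared projection layer `proj`, and all dimensions are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignedQuestionEmbedding(nn.Module):
    """Soft-align each context word to the question words (DrQA-style).

    Hypothetical sketch: a shared linear projection followed by dot-product
    attention over the question word embeddings.
    """

    def __init__(self, embed_dim):
        super().__init__()
        self.proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, context_emb, question_emb):
        # context_emb: (batch, c_len, d), question_emb: (batch, q_len, d)
        c = F.relu(self.proj(context_emb))       # (batch, c_len, d)
        q = F.relu(self.proj(question_emb))      # (batch, q_len, d)
        scores = c.bmm(q.transpose(1, 2))        # (batch, c_len, q_len)
        alpha = F.softmax(scores, dim=-1)        # attention over question words
        # weighted sum of question embeddings, one vector per context word
        return alpha.bmm(question_emb)           # (batch, c_len, d)
```

The output is concatenated with the fixed word embeddings to form the context encoder input.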

Finetune BERT-base

We add a linear classifier on top of the BERT output and finetune them together on the given question answering task. The BERT model is Hugging Face Transformers' `bert-base-uncased`.
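The linear classifier maps each token's final BERT hidden state to two logits, one for being the answer start and one for being the answer end. A minimal PyTorch sketch of such a span-classification head is below; the class name `SpanClassifierHead` is hypothetical, and the default hidden size of 768 assumes BERT-base.

```python
import torch
import torch.nn as nn

class SpanClassifierHead(nn.Module):
    """Hypothetical linear head over BERT hidden states for span prediction.

    Maps each token's hidden vector to a (start, end) logit pair; training
    would apply cross-entropy over the start and end positions separately.
    """

    def __init__(self, hidden_size=768):  # 768 assumes BERT-base
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size), e.g. BERT's last layer
        logits = self.qa_outputs(hidden_states)            # (batch, seq_len, 2)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```

At inference, the predicted span is the (start, end) pair maximizing the sum of the two logits, subject to start ≤ end; for SQuAD 2.0, the `[CLS]` position is typically used to score the no-answer option.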
