
Add support for BERT for relevance transfer #19

Merged
merged 17 commits into castorini:master May 24, 2019

Conversation

@achyudh (Member) commented May 5, 2019

Commit history is garbled. Feel free to squash the changes.

achyudh added some commits Apr 14, 2019

Integrate BERT into Hedwig (#29)
* Fix package imports

* Update README.md

* Fix bug due to TAR/AR attribute check

* Add BERT models

* Add BERT tokenizer

* Return logits from the model.py

* Remove unused classes in models/bert

* Return logits from the model.py (#12)

* Remove unused classes in models/bert (#13)

* Add initial main file

* Add args for BERT

* Add partial support for BERT

* Initialize training and optimization

* Draft the structure of Trainers for BERT

* Remove duplicate tokenizer

* Add utils

* Move optimization to utils

* Add more structure for trainer

* Refactor the trainer (#15)

* Refactor the trainer

* Add more edits

* Add support for our datasets

* Add evaluator

* Split data4bert module into multiple processors

* Refactor BERT tokenizer

* Integrate BERT into Castor framework (#17)

* Remove unused classes in models/bert

* Split data4bert module into multiple processors

* Refactor BERT tokenizer

* Add multilabel support in BertTrainer

* Add multilabel support in BertEvaluator

* Add get_test_samples method in dataset processors

* Fix args.py for BERT

* Add support for Reuters, IMDB datasets for BERT

* Revert "Integrate BERT into Castor framework (#17)"

This reverts commit e4244ec.

* Fix paths to datasets in dataset classes and args

* Add SST dataset

* Add hedwig-data instructions to README.md

* Fix KimCNN README

* Fix RegLSTM README

* Fix typos in README

* Remove trec_eval from README

* Add tensorboardX to requirements.txt

* Rename processors module to bert_processors

* Add method to print metrics after training

* Add model check-pointing and early stopping for BERT

* Add logos

* Update README.md

* Fix code comments in classification trainer

* Add support for AAPD, Sogou, AGNews and Yelp2014

* Fix bug that deleted saved models

* Update README for HAN

* Update README for XML-CNN

* Remove redundant TODOs from the READMEs

* Fix logo in README.md

* Update README for Char-CNN

* Fix all the READMEs

* Resolve conflict

* Fix Typos

* Re-Add SST2 Processor

* Add support for evaluating trained model

* Update args.py

* Resolve issues due to DataParallel wrapper on saved model

* Remove redundant Yelp processor

* Fix bug for safely creating the saving directory

* Change checkpoint paths to timestamps

* Remove unwanted string.strip() from tokenizer

* Create save path if it doesn't exist

* Decouple model checkpoints from code

* Remove model choice restrictions for BERT

* Remove model/distill driver

* Simplify checkpoint directory creation

achyudh self-assigned this May 5, 2019

achyudh added the enhancement label May 5, 2019

achyudh requested review from Ashutosh-Adhikari and daemon May 5, 2019

@Ashutosh-Adhikari (Member) commented May 5, 2019

Can we please maintain a separate branch for this until EMNLP?

@achyudh (Member, Author) commented May 15, 2019

Can someone please look at this? It's been open for a while now.

@daemon approved these changes May 15, 2019

super().__init__(kwargs['dataset'], model, kwargs['embedding'], kwargs['data_loader'],
                 batch_size=config['batch_size'], device=config['device'])

if config['model'] in {'BERT-Base', 'BERT-Large'}:

@daemon (Member) commented May 15, 2019

Why not use bert-large-uncased or bert-large for config['model']?

@achyudh (Member, Author) commented May 24, 2019

I just wanted to be consistent with the other model names (KimCNN, XML-CNN, etc.). I replace BERT-Large with bert-large-uncased in the driver method.
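For illustration, the translation achyudh describes could look roughly like the sketch below; the dict and helper names are hypothetical, not the actual driver code.

# Hypothetical sketch: map the Hedwig-style display names used in args
# onto the pretrained-weight identifiers the BERT loader expects.
PRETRAINED_WEIGHTS = {
    'BERT-Base': 'bert-base-uncased',
    'BERT-Large': 'bert-large-uncased',
}

def resolve_pretrained_weights(model_name):
    # Pass through anything that is already a pretrained-weight identifier.
    return PRETRAINED_WEIGHTS.get(model_name, model_name)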

train_features = convert_examples_to_features(
    self.train_examples, self.config['max_seq_length'], self.tokenizer)

all_input_ids = torch.tensor([f.input_ids for f in train_features], dtype=torch.long)

@daemon (Member) commented May 15, 2019

torch.tensor (lowercase t) should already default to longs for int Python types.

@achyudh (Member, Author) commented May 24, 2019

Maybe train_features contains Python float variables? I got an error downstream when calculating the loss, and I had to explicitly specify torch.long here to resolve it.
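A minimal self-contained sketch (not project code) of why the explicit dtype can matter: embedding lookups and cross-entropy targets in PyTorch require int64 tensors, so if any feature field arrives as a Python float, torch.tensor would infer float32 and fail downstream.

import torch
import torch.nn as nn

# Explicit dtype guards against float contamination in the feature lists.
input_ids = torch.tensor([[101, 2023, 102], [101, 2003, 102]], dtype=torch.long)
label_ids = torch.tensor([0, 1], dtype=torch.long)

embedding = nn.Embedding(num_embeddings=30522, embedding_dim=8)
hidden = embedding(input_ids)                     # embedding lookup needs integer indices
logits = torch.randn(2, 2, requires_grad=True)
loss = nn.CrossEntropyLoss()(logits, label_ids)   # class-index targets must be int64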

class Robust45Processor(BertProcessor):
    NAME = 'Robust45'
    NUM_CLASSES = 2
    TOPICS = ['307', '310', '321', '325', '330', '336', '341', '344', '345', '347', '350', '353', '354', '355', '356',

@daemon (Member) commented May 15, 2019

This is redundant?

@achyudh (Member, Author) commented May 24, 2019

Yeah, it is. I can use the values from the Robust45 class, but then I would have to create a dict that maps the dataset classes to the corresponding processors. I'll add it to the list of things to do in the future.
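A hypothetical sketch of the mapping achyudh mentions, with stand-in classes so it runs on its own; the real dataset and processor classes live elsewhere in Hedwig, and the names below are illustrative only.

# Stand-in for the existing dataset class (illustrative, not Hedwig code).
class Robust45:
    NAME = 'Robust45'
    NUM_CLASSES = 2
    TOPICS = ['307', '310', '321']  # truncated for illustration

# Stand-in processor that reuses the constants declared on the dataset class
# instead of duplicating them.
class Robust45Processor:
    def __init__(self, dataset_cls):
        self.NAME = dataset_cls.NAME
        self.NUM_CLASSES = dataset_cls.NUM_CLASSES
        self.TOPICS = dataset_cls.TOPICS

# One place that maps dataset classes to their BERT processors.
DATASET_TO_PROCESSOR = {Robust45: Robust45Processor}

def get_processor(dataset_cls):
    return DATASET_TO_PROCESSOR[dataset_cls](dataset_cls)

processor = get_processor(Robust45)  # processor.TOPICS comes from Robust45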

achyudh merged commit 3cd54c2 into castorini:master May 24, 2019
