This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

Fix incorrect default in conll2003.from_params #1453

Merged
merged 2 commits into master from conll2003_from_params on Jul 3, 2018

Conversation

nelson-liu
Contributor

No description provided.

@nelson-liu nelson-liu merged commit 59ecd3b into master Jul 3, 2018
@nelson-liu nelson-liu deleted the conll2003_from_params branch July 3, 2018 23:55
gabrielStanovsky pushed a commit to gabrielStanovsky/allennlp that referenced this pull request Sep 7, 2018
matt-peters pushed a commit that referenced this pull request Sep 10, 2018
…arning rates (ULMFiT) (#1636)

* Created a STLR schedule with gradual unfreezing

* Fixed parameter groups regex example

* Added splitting of parameters into param groups, discriminative fine-tuning

* Added splitting of predefined modules

* Fix a typo in embedding_tokens notebook. (#1449)

Fix a typo in embedding_tokens notebook: the vocabulary is `token_ids` and not `tokens_ids`.

* remove RegistrableVocabulary (#1454)

* remove RegistrableVocabulary

* add comment about special from_params logic

* fix pylint

* Allow to use a different validation iterator from training iterator (#1455)

* Allow to use a different validation iterator from training iterator

* Use validation iterator

* Use validation iterator for evaluate, if present

* pylint

* Fix conll2003.from_params incorrect default (#1453)

* fix Vocabulary.from_params to accept a dict for max_vocab_size (#1460)

* fix Vocabulary.from_params to accept a dict for max_vocab_size

* pylint

* Fix call to vocab.token_from_index -> self.label_namespace (#1459)

* Added splitting of predefined modules

* Fixed off-by-one-error as suggested by Matthew

* Modified SlantedTriangular learning rate schedule name and id

* Renamed epoch_no to num_layers_to_unfreeze

* Addressed pylint issues

* Replaced print with logger.info

* Moved specification of discriminative fine-tuning from trainer to learning rate scheduler

* Added pylint: disable=protected-access

* Added handling when optimizer is a string

* Fixed bug with negative lr due to frozen_steps > 0 when not freezing; removed lr argument; added modification of base_lrs

* Added test for slanted triangular learning rate schedule

* Removed automatic assignment to param_groups in trainer.py, added check in STLR

* nits and pylint
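
Most of the commits above belong to the squashed ULMFiT PR (#1636), whose core piece is the slanted triangular learning rate (STLR) schedule. As a rough orientation, here is a minimal sketch of the STLR formula from Howard & Ruder (2018); the function name, signature, and default values are illustrative assumptions, not the scheduler API added in that PR.

```python
def slanted_triangular_lr(step, num_steps, lr_max=0.01, cut_frac=0.1, ratio=32):
    """Slanted triangular learning rate (Howard & Ruder, 2018).

    The rate rises linearly for the first `cut_frac` fraction of steps,
    then decays linearly; `ratio` is how many times smaller the lowest
    rate is than the peak `lr_max`.
    """
    cut = max(1, int(num_steps * cut_frac))  # step at which the rate peaks
    if step < cut:
        p = step / cut                                      # warm-up phase
    else:
        p = 1 - (step - cut) / (cut * (1 / cut_frac - 1))   # decay phase
    return lr_max * (1 + p * (ratio - 1)) / ratio


# Example: the rate at a few points over 1000 training steps.
for step in (0, 50, 100, 500, 1000):
    print(step, round(slanted_triangular_lr(step, 1000), 5))
```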
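
The "discriminative fine-tuning" commits split a model's parameters into groups so that each group trains with its own learning rate. A minimal PyTorch sketch of that idea follows; the toy model, the 2.6 divisor (taken from the ULMFiT paper), and the optimizer choice are illustrative assumptions, not AllenNLP's configuration. A schedule such as the STLR sketch above would then scale all of the group rates together, which matches the commit that moved discriminative fine-tuning from the trainer into the learning rate scheduler.

```python
import torch
from torch import nn

# Toy three-block "encoder": deeper (earlier) blocks get smaller learning rates.
model = nn.Sequential(
    nn.Linear(16, 16),  # lowest layer
    nn.Linear(16, 16),  # middle layer
    nn.Linear(16, 2),   # highest layer, closest to the output
)

base_lr = 0.01
decay = 2.6  # per-layer divisor suggested in the ULMFiT paper

# Each parameter group carries its own learning rate.
param_groups = [
    {"params": model[2].parameters(), "lr": base_lr},
    {"params": model[1].parameters(), "lr": base_lr / decay},
    {"params": model[0].parameters(), "lr": base_lr / decay ** 2},
]
optimizer = torch.optim.SGD(param_groups, lr=base_lr)  # lr here is only a fallback default
```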