This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

additional W&B params #5114

Merged · 4 commits · Apr 12, 2021

Conversation

@epwalsh (Member) commented Apr 12, 2021

Adds additional parameters to the W&B callback. See the wandb.init docs for reference: https://docs.wandb.ai/ref/python/init
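As a sketch of what this looks like in a training config, the fragment below passes the new parameters through the W&B callback. The parameter names here (`entity`, `group`, `tags`, `notes`) mirror `wandb.init`, and `wandb_kwargs` comes from the commit list below; treat the exact names and values as hypothetical and check the `WandBCallback` docstring for the authoritative list.

```jsonnet
// Hypothetical sketch; parameter names assumed from the wandb.init docs
// (https://docs.wandb.ai/ref/python/init) and the "add wandb_kwargs" commit.
{
  trainer: {
    callbacks: [
      {
        type: "wandb",
        project: "my-project",              // assumed: forwarded to wandb.init
        entity: "my-team",
        group: "baseline-runs",
        tags: ["lstm", "v2"],
        notes: "trying additional W&B params",
        wandb_kwargs: { mode: "offline" },  // catch-all for anything else wandb.init accepts
      },
    ],
  },
}
```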

@epwalsh epwalsh requested a review from dirkgr April 12, 2021 20:28
@dirkgr (Member) left a comment

Not in this PR, but I'm wondering if we should separate the experiment config from the monitoring config.

If we think of the experiment config as a record of what you ran, then configuring wandb in it is a distraction. Things that don't alter the results should not be part of the experiment config. Unless you use batch norm, how many GPUs you ran on should be irrelevant to the results, so it should not be part of the experiment config.

We might instead create a ~/.allennlp.config file that contains this stuff.

I'd love to get some more opinions on this, @matt-gardner, @nelson-liu, @mahnerak, @wlhgtc, @pvcastro. Does that align with how you use it?

@matt-gardner (Contributor)

I think you have to trade off the conceptual gain you might get from separating them against the complexity of such a configuration system. It's one more thing for a user to figure out. I'm not sure the benefit would be worth the cost, honestly, though it's hard to say before seeing a more detailed proposal.

@epwalsh epwalsh merged commit 99415e3 into main Apr 12, 2021
@epwalsh epwalsh deleted the wandb-extra-params branch April 12, 2021 21:26
@pvcastro (Contributor)

I agree that unless it's something we're just experimenting with to see how it affects model training, it would make sense to separate it from the regular training config files. In my experience, the more we can reuse across training configs, the better; otherwise it's one more thing we have to keep in the training configs and eventually update in every existing config whenever something in AllenNLP gets upgraded. These training callbacks are a good recent example, from my perspective.

@wlhgtc (Contributor) commented Apr 13, 2021

Maybe we could divide the whole config file into different parts: model, trainer, dataloader, and so on. Then we would just need some code to merge them during the training process.

@matt-gardner (Contributor)

Remember that jsonnet already gives you the ability to separate things out into another file.
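To illustrate the point about jsonnet: a monitoring snippet can live in its own file and be pulled into the experiment config with `import`. The file and field names below are purely hypothetical.

```jsonnet
// wandb_monitoring.libsonnet -- hypothetical file holding only monitoring config
{
  type: "wandb",
  project: "my-project",
}
```

```jsonnet
// experiment.jsonnet -- the experiment config imports the monitoring piece,
// keeping "what you ran" separate from "how it was tracked"
local wandb = import 'wandb_monitoring.libsonnet';

{
  trainer: {
    callbacks: [wandb],
  },
}
```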

@pvcastro (Contributor)

Thanks @matt-gardner , I wasn't aware!

epwalsh added a commit that referenced this pull request Apr 22, 2021
* additional W&B params

* add wandb_kwargs

* fix

* fix docs
epwalsh added a commit that referenced this pull request Apr 22, 2021
* Formatting

* New activation functions

* Makes position embeddings optional in the transformer embeddings

* Adds T5

* Various fixes to make this start up

* Share weights

* Adds one test that passes, and one test that fails

* use min_value_of_dtype in apply_mask

* fixes, add beam search

* encoder fixes

* fix

* fix beam search

* fix tests

* rename to just 'T5'

* fix initialization from pretrained

* add Model, DatasetReader, and Predictor

* remove useless dataset reader

* move high-level pieces to allennlp-models

* revert predictor changes

* remove unneeded hidden_size

* remove stray comment

* bool masks

* CHANGELOG

* fix test file name

* revert other change

* revert other change

* Distributed training with gradient accumulation (#5100)

* Fixes distributed training with gradient accumulation

* Fix in case we don't do anything in a batch group

* Test for the problematic condition

* Formatting

* More formatting

* Changelog

* Fix another test

* Fix even more tests

* Fixes one more test

* I can fix these tests all day.

* Add link to gallery and demo in README (#5103)

* Add link to gallery in README

* Update README.md

* try emojis

Is this overkill?

* Adding a metadata field to the basic classifier (#5104)

* Adding metadata parameter to BasicClassifier

* Fix

* Updating the changelog

* reformatting

* updating parameter type

* fixing import

Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>

* additional W&B params (#5114)

* additional W&B params

* add wandb_kwargs

* fix

* fix docs

* Add eval_mode argument to pretrained transformer embedder (#5111)

* Add eval_mode argument to pretrained transformer embedder

* Edit changelog entry

* Lint

* Update allennlp/modules/token_embedders/pretrained_transformer_embedder.py

* Apply suggestions from code review

Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>
Co-authored-by: Evan Pete Walsh <petew@allenai.org>

* specify 'truncation' to avoid transformers warning (#5120)

* specify 'truncation' to avoid transformers warning

* Update docs

* Remove `stride` param

* Update CHANGELOG.md

Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>

* Predicting with a dataset reader on a multitask model (#5115)

* Create a way to use allennlp predict with a dataset and a multitask model

* Fix type ignoration

* Changelog

* Fix to the predictor

* fix bug with interleaving dataset reader (#5122)

* fix bug with interleaving dataset reader

* more tests

* Update allennlp/data/dataset_readers/interleaving_dataset_reader.py

* Update allennlp/data/dataset_readers/interleaving_dataset_reader.py

* remove jsonpickle from dependencies (#5121)

Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>

* Update docstring for basic_classifier (#5124)

* improve error message from Registrable class (#5125)

Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com>

* Prepare for release v2.3.0

* fix docs CI

* Take the number of runs in the test for distributed metrics (#5127)

* Take the number of runs in the test for distributed metrics

* Changelog

* Add influence functions to interpret module (#4988)

* creating new functionality for fields and instances to support outputting instances to json files

* creating tests for the new functionality

* fixing docs

* Delete __init__.py

* Delete influence_interpreter.py

* Delete use_if.py

* Delete simple_influence_test.py

* fixing docs

* finishing up SimpleInfluence

* passing lint

* passing format

* making small progress in coding

* Delete fast_influence.py

Submit to the wrong branch

* Delete faiss_utils.py

wrong branch

* Delete gpt2_bug.py

not sure why it's included

* Delete text_class.py

not sure why it's included

* adding test file

* adding testing files

* deleted unwanted files

* deleted unwanted files and rearrange test files

* small bug

* adjust function call to save instance in json

* Update allennlp/interpret/influence_interpreters/influence_interpreter.py

Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>

* Update allennlp/interpret/influence_interpreters/influence_interpreter.py

Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>

* Update allennlp/interpret/influence_interpreters/influence_interpreter.py

Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>

* move some documentation of parameters to base class

* delete one comment

* delete one deprecated abstract method

* changing interface

* formatting

* formatting err

* passing mypy

* passing mypy

* passing mypy

* passing mypy

* passing integration test

* passing integration test

* adding a new option to the do-all function

* modifying the callable function to the interface

* update API, fixes

* doc fixes

* add `from_path` and `from_archive` methods

* fix docs, improve logging

* add test

* address @matt-gardner's comments

* fixes to documentation

* update docs

Co-authored-by: Evan Pete Walsh <epwalsh10@gmail.com>
Co-authored-by: Evan Pete Walsh <petew@allenai.org>

* Update CONTRIBUTING.md (#5133)

* Update CONTRIBUTING.md

* updated changelog

Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com>
Co-authored-by: Arjun Subramonian <arjuns@ip-192-168-0-106.us-west-2.compute.internal>

* fix #5132 (#5134)

* fix

* Prepare for release v2.3.1

* Fairness Metrics (#5093)

* Added three definitions of fairness

* Updated CHANGELOG

* Added DemographicParityWithoutGroundTruth and finished tests

* finished refactoring Independence, Separation, and Sufficiency to accumulate

* added distributed functionality to Independence, Sufficiency, and Separation

* Finished aggregate and distributed functionality for DemographicParityWithoutGroundTruth

* fixed GPU and doc issues

* fixed GPU and doc issues

* fixed GPU and doc issues

* fixed GPU issues

* fixed GPU issues

* added init file

* fixed typo

* minor docstring changes

* minor changes to docstring

* Added simple explanations of fairness metrics to docstrings

* Further vectorized all metric implementations

* Fixed device issue

Co-authored-by: Arjun Subramonian <arjuns@Arjuns-MacBook-Pro.local>
Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com>
Co-authored-by: Dirk Groeneveld <dirkg@allenai.org>

* fix cached_path for hub downloads (#5141)

* fix cached_path for hub downloads

* fix test name

* fix type hint

* Update allennlp/common/file_utils.py

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>

* fix

* fix

Co-authored-by: epwalsh <epwalsh10@gmail.com>
Co-authored-by: Evan Pete Walsh <petew@allenai.org>
Co-authored-by: Jacob Morrison <jacob1morrison@gmail.com>
Co-authored-by: Nelson Liu <nelson-liu@users.noreply.github.com>
Co-authored-by: Akshita Bhagia <akshita23bhagia@gmail.com>
Co-authored-by: Leo Liu <zeyuliu2@uw.edu>
Co-authored-by: ArjunSubramonian <arjun.subramonian@gmail.com>
Co-authored-by: Arjun Subramonian <arjuns@ip-192-168-0-106.us-west-2.compute.internal>
Co-authored-by: Arjun Subramonian <arjuns@Arjuns-MacBook-Pro.local>
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
dirkgr pushed a commit that referenced this pull request May 10, 2021
dirkgr added a commit that referenced this pull request May 10, 2021