
Pin PTL, bump omegaconf #1049

Merged

7 commits merged into NVIDIA:candidate on Aug 20, 2020
Conversation

titu1994 (Collaborator)

Signed-off-by: smajumdar <titu1994@gmail.com>

titu1994 requested a review from blisc on August 20, 2020 at 20:40
@titu1994 (Collaborator, Author)

@blisc The bug where a DictConfig gets cast down to an ordinary dict inside save_hyperparameters() has been patched upstream in Lightning-AI/pytorch-lightning#2519. We can safely revert our workaround once we bump the PTL version.
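
For context, a minimal sketch of the kind of re-wrapping workaround that the upstream fix makes unnecessary. This is illustrative only; the class and the guard shown here are assumed, not NeMo's actual code.

from omegaconf import DictConfig, OmegaConf
import pytorch_lightning as pl


class ExampleModel(pl.LightningModule):
    # Hypothetical model used only to illustrate the workaround.
    def __init__(self, cfg: DictConfig):
        super().__init__()
        self.save_hyperparameters(cfg)
        # Older PTL releases downcast the DictConfig passed to
        # save_hyperparameters() into a plain dict; re-wrap it so the rest of
        # the model keeps seeing a DictConfig. This guard becomes dead code
        # once PTL is bumped past the upstream fix.
        if isinstance(self.hparams, DictConfig):
            self._cfg = self.hparams
        else:
            self._cfg = OmegaConf.create(dict(self.hparams))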

@titu1994 (Collaborator, Author)

@blisc We can also enable multi-dataloader testing by removing the explicit checks inside model_utils once we bump PTL to 0.9 - Lightning-AI/pytorch-lightning#2581.

The trainer.test() stall has been patched via Lightning-AI/pytorch-lightning#2997; once that fix is in, I will revert the DDP checks inside ModelPT.prepare_test(Trainer) so the call can pass through.

The experiment manager might need some updates due to Lightning-AI/pytorch-lightning#2719, but I'll defer to your call on that.

Documenting these here so I can apply the corresponding patches once we bump the PTL version.
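
For the prepare_test() item above, a rough sketch of the kind of DDP guard that gets reverted once the fix lands. The function body and names here are assumed for illustration, not the actual ModelPT implementation.

import logging

from pytorch_lightning import Trainer


def prepare_test(trainer: Trainer) -> bool:
    """Return True when it is safe to call trainer.test() with this trainer."""
    # Until the upstream fix is picked up, trainer.test() can stall under
    # multi-GPU DDP, so refuse up front instead of hanging silently.
    if trainer.num_gpus > 1:
        logging.warning(
            "trainer.test() is disabled for multi-GPU runs until the PTL bump; "
            "run evaluation on a single device instead."
        )
        return False
    return True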

@blisc (Collaborator) commented Aug 20, 2020

> @blisc We can also enable multi-dataloader testing by removing the explicit checks inside model_utils once we bump PTL to 0.9 - Lightning-AI/pytorch-lightning#2581.
>
> The trainer.test() stall has been patched via Lightning-AI/pytorch-lightning#2997; once that fix is in, I will revert the DDP checks inside ModelPT.prepare_test(Trainer) so the call can pass through.
>
> The experiment manager might need some updates due to Lightning-AI/pytorch-lightning#2719, but I'll defer to your call on that.
>
> Documenting these here so I can apply the corresponding patches once we bump the PTL version.

Right, Tomasz alerted me to those changes. IMO, those changes are still insufficient to make DDP work with Lightning. I also don't think it will make a difference for exp_manager, since we set hydra.run.dir=. explicitly; hence hydra.utils.get_original_cwd() == Path.cwd(), and that patch will not change any of exp_manager's behaviour.
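
To illustrate the hydra.run.dir point above, a toy standalone script (not exp_manager code), meant to be launched as python demo.py hydra.run.dir=. :

from pathlib import Path

import hydra
from hydra.utils import get_original_cwd
from omegaconf import DictConfig


@hydra.main(config_path=None)
def main(cfg: DictConfig) -> None:
    # With the hydra.run.dir=. override, Hydra's run directory is the launch
    # directory itself, so the working directory never changes and the patched
    # path resolution has no observable effect.
    assert Path(get_original_cwd()) == Path.cwd()


if __name__ == "__main__":
    main()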

titu1994 merged commit fcc1d99 into NVIDIA:candidate on Aug 20, 2020
titu1994 deleted the candidate_omegaconf_patch branch on August 20, 2020 at 23:39
@titu1994 (Collaborator, Author)

@blisc Yep, I reached the same conclusion after going through the PR. It's fine; we can continue using the hydra decorator for the time being. We probably need a cleanup of the older configs to remove the redundant hydra setup, but that can be done much later.

@@ -246,7 +246,10 @@ def change_labels(self, new_labels: List[str]):

    # Update config
    self._cfg.labels = new_labels
    self._cfg.decoder.params = new_decoder_config

    OmegaConf.set_struct(self._cfg.decoder, False)

Use open_dict:

with open_dict(self._cfg.decoder):
    self._cfg.decoder = new_decoder_config
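
For reference, a self-contained sketch of what open_dict buys over a bare set_struct call, using a toy config and key names and assuming the omegaconf 2.0 API this PR moves to:

from omegaconf import OmegaConf, open_dict

cfg = OmegaConf.create({"decoder": {"params": {"num_classes": 10}}})
OmegaConf.set_struct(cfg, True)  # struct mode: adding unknown keys raises

new_decoder_config = OmegaConf.create({"params": {"num_classes": 28}})

with open_dict(cfg):
    # Struct mode is disabled only for the duration of this block and is
    # restored automatically on exit, unlike a bare
    # OmegaConf.set_struct(cfg, False), which leaves the config open.
    cfg.decoder = new_decoder_config
    cfg.decoder.vocabulary = ["a", "b", "c"]  # new key, needs the open dict

print(OmegaConf.to_yaml(cfg))

Opening cfg at the parent level covers both the reassignment of decoder and the newly added key with a single context.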

@blisc blisc mentioned this pull request Sep 10, 2020