[Feature Request] Pass run index when performing multi-run #377

Closed · stepelu opened this issue Jan 16, 2020 · 6 comments
Labels: enhancement
Milestone: 0.12.0
stepelu commented Jan 16, 2020

When running a multi-run job (say, training the same model with different hyperparameters), it would help if each DictConfig contained additional data (hydra.multi_run_id?) identifying which of the runs the code is executing in.
As it stands, resources cannot easily be partitioned between runs.
If, for instance, we have 4 GPUs available, we would like the first run to use GPU:0, the second run to use GPU:1, and so on.

stepelu added the enhancement label Jan 16, 2020
omry (Collaborator) commented Jan 16, 2020

Hi @stepelu, thanks for the request.
This actually exists, but it lives in a somewhat internal class and is not documented. Thanks for pointing out the need.
For now, you can access it through the HydraConfig singleton, which is undocumented and will change in the next major version:

import hydra
# Internal import location in Hydra 0.11; it moved in later versions.
from hydra.plugins.common.utils import HydraConfig

@hydra.main(config_path="config.yaml", strict=False)
def experiment(_cfg):
    # hydra.job.num is only populated during a multi-run sweep,
    # so check for it before reading it.
    if "num" in HydraConfig().hydra.job:
        print(HydraConfig().hydra.job.num)
    else:
        print("No job number")


if __name__ == "__main__":
    experiment()

I will improve this API for the next major version.
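
For reference, the API did change: in Hydra 1.x the singleton lives in hydra.core.hydra_config and is read via HydraConfig.get(). A minimal sketch of the equivalent access (the helper name is illustrative):

from hydra.core.hydra_config import HydraConfig

def current_job_num() -> int:
    # HydraConfig.get() raises if called outside a running Hydra app,
    # and job.num is typically only set during a multi-run sweep.
    return HydraConfig.get().job.num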

@omry omry added this to the 0.12.0 milestone Jan 16, 2020
omry (Collaborator) commented Jan 16, 2020

I have plans to expose all of the Hydra config to interpolations as a resolver function, so you will be able to do something like:

gpu_id: ${hydra:job.num}
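
Assuming that interpolation lands, the round-robin GPU allocation from the original request could look like the following sketch (Hydra 1.x style; the num_gpus/gpu_id config keys and the body of experiment are illustrative, not part of Hydra):

# config.yaml (hypothetical):
#   num_gpus: 4
#   gpu_id: ${hydra:job.num}

import hydra
from omegaconf import DictConfig

@hydra.main(config_path=".", config_name="config")
def experiment(cfg: DictConfig) -> None:
    # Wrap the sweep's job number onto the available devices:
    # run 0 -> cuda:0, run 1 -> cuda:1, ..., run 4 -> cuda:0 again.
    device = f"cuda:{cfg.gpu_id % cfg.num_gpus}"
    print(f"job {cfg.gpu_id} running on {device}")

if __name__ == "__main__":
    experiment()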

omry (Collaborator) commented Jan 16, 2020

By the way, I would be happy to hear more about your use case and context (company/university/what you are using Hydra for, etc.).
Can you join the chat and share some details?

omry (Collaborator) commented Feb 26, 2020

Creating a dedicated issue for exposing the Hydra configuration as described in my comment above.

omry closed this as completed Feb 26, 2020
omry (Collaborator) commented Feb 26, 2020

See #325.

agamemnonc commented Feb 23, 2021

Apologies for reviving an old issue.

I am trying to use the Optuna sweeper plugin for hyperparameter tuning (using PyTorch with PyTorch Lightning). I understand it is possible to use ${hydra:job.num} to tell PL which GPU to use. In that case, how does one set up the launcher to run 4 jobs in parallel so as to always make use of all 4 GPUs on a local server? Thanks
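
A minimal sketch of one way to do this, assuming the hydra-joblib-launcher plugin is installed (it runs sweep jobs as local parallel processes; train.py and the swept model.lr values are illustrative):

# pip install hydra-joblib-launcher
# Launch the sweep with 4 local workers; combined with a
# gpu_id = job.num % 4 mapping, each worker lands on its own GPU.
python train.py -m hydra/launcher=joblib hydra.launcher.n_jobs=4 \
    model.lr=0.001,0.01,0.1,1.0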
