[Feature Request] Pass run index when performing multi-run #377
Comments
Hi @stepelu, thanks for the request. The job number is already accessible through `HydraConfig`:

```python
import hydra
from hydra.plugins.common.utils import HydraConfig


@hydra.main(config_path="config.yaml", strict=False)
def experiment(_cfg):
    # hydra.job.num is only populated in multi-run mode
    if "num" in HydraConfig().hydra.job:
        print(HydraConfig().hydra.job.num)
    else:
        print("No job number")


if __name__ == "__main__":
    experiment()
```

I will improve this API for the next major version.
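As a usage note: the job number is only assigned when the app is launched in multi-run mode, e.g. `python experiment.py --multirun lr=0.01,0.1` (the swept parameter here is illustrative); in a single run the snippet above prints "No job number".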
I have plans to expose all of the Hydra config to interpolations via a function, so you will be able to do something like: `gpu_id: ${hydra:job.num}`

By the way, I would be happy to hear more about your use case and context (company/university/what you are using Hydra for, etc.).

Creating a dedicated issue for exposing the Hydra configuration as described in my comment above.

See #325.
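For readers on later Hydra versions: the value behind the `${hydra:job.num}` interpolation can also be read directly in Python. A minimal sketch, assuming Hydra >= 1.0 (where `hydra.core.hydra_config.HydraConfig` replaced the 0.x import path used above):

```python
from hydra.core.hydra_config import HydraConfig


def current_job_num() -> int:
    # Valid only inside a running Hydra app; job.num is assigned
    # by the launcher when the app is started with --multirun.
    return HydraConfig.get().job.num
```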
Apologies for reviving an old issue. I am trying to use the Optuna plugin for HPT (using PyTorch with PyTorch-Lightning). I understand that it is possible to use…
When running a multi-run job (say, training the same model with different hyperparameters), it would help to have each DictConfig contain additional data (`hydra.multi_run_id`?) that identifies in which of the multi-runs the code is being executed. The issue is that, as of now, resources cannot easily be allocated: if, for instance, we have 4 GPUs available, we would like the first run to use GPU 0, the second run to use GPU 1, and so on.
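A minimal sketch of the round-robin allocation described above, assuming Hydra 1.x with a `config.yaml` next to the script (`NUM_GPUS` is an illustrative constant, not a Hydra setting):

```python
import hydra
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig

NUM_GPUS = 4  # illustrative: how many GPUs to cycle over


@hydra.main(config_path=".", config_name="config")
def experiment(cfg: DictConfig) -> None:
    # hydra.job.num is the 0-based index of this run within the sweep;
    # it is populated when the app is launched with --multirun.
    job_num = HydraConfig.get().job.num
    gpu_id = job_num % NUM_GPUS
    device = f"cuda:{gpu_id}"  # e.g. hand this to torch / Lightning
    print(f"run {job_num} -> {device}")


if __name__ == "__main__":
    experiment()
```

Launched with e.g. `python experiment.py --multirun lr=0.01,0.1,1.0`, the three runs would print `cuda:0`, `cuda:1`, and `cuda:2` respectively.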