First of all, great work with Hydra!! It simplifies experiment management a lot :)
🚀 Feature Request
To be able to remove values or nodes added by config groups in the defaults list without resorting to the CLI. Useful for configuring experiments.
Motivation
Currently, I'm using experiment files to configure my experiments. My config files define the whole network structure and the training. However, I'm hitting one limitation: from the CLI, I can remove values in the config, but I can't transfer this action to my experiment.yaml.
Now, I want to run an experiment replacing GroupNorm2d with other normalization layers like BatchNorm2d. In addition to changing the _target_, I need to remove num_groups, as BatchNorm2d doesn't have this argument. From the CLI, I can do it as `python main.py model.layer.norm_layer._target_=torch.nn.BatchNorm2d ~model.layer.norm_layer.num_groups`.
However, I can't do the same from an experiment.yaml :/
experiment.yaml
```yaml
model:
  layer:
    norm_layer:
      _target_: torch.nn.BatchNorm2d
      num_groups: 32  # <-- How can I remove this entry?
```
This is probably related to #1745. There, Omry says:
My suggestion is to embrace that config composition is an additive process.
I already apply this approach in other parts of the config. However, applying it in every possible place where _target_ appears inside a node isn't feasible for me. See below for a more complete example.
Pitch
Describe the solution you'd like
A simple way to remove num_groups from the node in my experiment.yaml. For example, ~num_groups
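For illustration, the proposed in-file syntax might look like the following. This is a hypothetical sketch mirroring the CLI's `~key` deletion syntax; it is NOT supported by Hydra today:

```yaml
# experiment.yaml — hypothetical proposed syntax, not currently valid
model:
  layer:
    norm_layer:
      _target_: torch.nn.BatchNorm2d
      ~num_groups:  # proposed: delete the key inherited via the defaults list
```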
Describe alternatives you've considered
Having a config group for every node that has a _target_ is a nightmare. I tried callbacks and resolvers, but they don't allow changing the config. I could write a function that traverses the whole config searching for a custom key where the user specifies which keys must be removed from the node.
Additional context
Here is a simple example. The same applies wherever a user wants to change the _target_ key in a node. Below is my current .yaml for a custom PyTorch Lightning module. As you can see, it has 12 nodes with _target_. I can't add a config group for each of them, and the same goes for other PyTorch Lightning modules that handle training for classifiers or segmentation networks.
Note that the specific model, lightning callbacks, lightning trainer flags, dataset, etc. are separated into different config groups. So I can do `python train.py model=yolov5 callbacks=default trainer=local_training task=object_detection dataset=coco` to configure and launch the whole experiment.
```yaml
builder:
  _target_: dltrain.tasks.bbox_detection.build_task
  jit: false
hparams:
  num_classes: ${model.dense_head.num_classes}
  anchors:
    generator:
      _target_: dltrain.tasks.models.detection.anchors.RetinaNetAnchorGenerator
    labeler: # Only used if needed. Data transforms can add them.
      _target_: dltrain.tasks.models.detection.anchors.AnchorLabeler
      num_classes: ${...num_classes}
  post_processing:
    _target_: dltrain.tasks.bbox_detection.OneStageDetectorPostProcessing
    # No need to specify box_coder as the task passes it for us.
    num_classes: ${..num_classes}
    box_coder:
      _target_: dltrain.tasks.models.detection.box_coders.FasterRcnnBoxCoder
  train:
    optimize:
      optimizer:
        # Default value in Rwightman EfficientDet repo.
        _target_: torch.optim.SGD
        lr: 0.01
        momentum: 0.9
        weight_decay: 4e-5
      lr_factor:
        backbone: 0.3
        dense_head: 1.0
        neck: 1.0
      schedule: # PyTorch Lightning `lr_dict` for configure_optimizers
        _convert_: partial # Lightning needs a python dict, not omegaconf.DictConfig
        interval: step
        name: null
        scheduler:
          _target_: dltrain.tasks.optimizers.schedulers.SchedulerLR
          max_iterations: ${setup.trainer.max_steps}
          scheduler:
            _target_: dltrain.tasks.optimizers.schedulers.CompositeScheduler
            schedulers:
              - _target_: dltrain.tasks.optimizers.schedulers.LinearScheduler
                start_value: 1e-3
              - _target_: dltrain.tasks.optimizers.schedulers.CosineScheduler
                end_value: 0.01
            until_iterations:
              - 6000
              - ${setup.trainer.max_steps}
    losses:
      apply_ema_to_num_positive_anchors: True
      # YoloV5 uses [4.0, 1.0, 0.4] or [4.0, 1.0, 0.25, 0.06, .02] depending on the number of levels.
      features_levels_weights: [ 1., 1., 1., 1., 1. ]
      # In line with TF Object Detection & Fast-RCNN detectron2. Other terms: regression, box.
      localization:
        loss:
          _target_: dltrain.tasks.losses.detection_losses.HuberLoss
          delta: 0.1
        normalize_by_num_positive_anchors: true
        weight: 12.5
      classification:
        loss:
          _target_: dltrain.tasks.losses.detection_losses.FocalLoss
          alpha: 0.25
          gamma: 1.5
          num_classes: ${.....num_classes}
        normalize_by_num_positive_anchors: true
        weight: 1
```
This was requested before.
Config composition is an additive process. There are no plans to extend the support for deleting through the command line.
You have two options:
1. "My suggestion is to embrace that config composition is an additive process."
2. Delete the fields you don't want programmatically. (You may need to use open_dict; see the OmegaConf docs.)
I think 1 is the right way to go.
Instead of trying to morph a config node to be something else, just compose the right node to begin with.
You can take advantage of the defaults list to achieve that.
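A sketch of that approach (file and group names are illustrative, not from the issue): put each normalization variant in its own config group option and select between them in the defaults list, so no deletion is ever needed:

```yaml
# model/norm_layer/group_norm.yaml
_target_: torch.nn.GroupNorm
num_groups: 32

# model/norm_layer/batch_norm.yaml
_target_: torch.nn.BatchNorm2d

# experiment.yaml — pick the variant for this experiment
defaults:
  - override /model/norm_layer: batch_norm
```

Each option file contains only the keys its own _target_ accepts, so swapping the group swaps the whole node.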