
[Feature Request] Be able to delete values and nodes using the defaults list #1827

Closed

hal-314 opened this issue Sep 16, 2021 · 2 comments

Labels: enhancement (Enhancement request)

hal-314 commented Sep 16, 2021

First of all, great work with Hydra!! It simplifies experiment management a lot :)

🚀 Feature Request

Be able to remove values or nodes added by config groups in the defaults list without resorting to the CLI. This is useful for configuring experiments.

Motivation

Currently, I'm using experiment configs to configure my experiments. My config files define the whole network structure and the training. However, I'm hitting one limitation: from the CLI, I can remove values from the config, but I can't transfer this action to my experiment.yaml.

For example, let's say that my config.yaml is:

defaults:
  - model: my_awesome_model
  # ...

with model/my_awesome_model.yaml being:

# ...
layer:
  norm_layer:
    _target_: torch.nn.GroupNorm2d
    num_groups: 32
    num_features: 64
# ...

Now, I want to run an experiment replacing GroupNorm2d with other normalization layers, like BatchNorm2d. In addition to changing the _target_, I need to remove num_groups, as BatchNorm2d doesn't take this argument. From the CLI, I can do it with:

python main.py model.layer.norm_layer._target_=torch.nn.BatchNorm2d ~model.layer.norm_layer.num_groups

However, I can't do the same with an experiment.yaml :/

experiment.yaml

model:
  layer:
    norm_layer:
      _target_: torch.nn.BatchNorm2d
      num_groups: 32  # <-- How can I remove this entry?

This is probably related to #1745, where Omry says:

My suggestion is to embrace that config composition is an additive process.

I already apply this approach in other parts of the config. However, applying it in every possible place where a _target_ sits inside a node isn't feasible for me. See below for a more complete example.

Pitch

Describe the solution you'd like
A simple way to remove num_groups from the node in my experiment.yaml, for example via a ~num_groups marker (sketched below).
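
A sketch of what that hypothetical marker could look like (illustrative only; Hydra does not support this today, and the semantics are assumed):

model:
  layer:
    norm_layer:
      _target_: torch.nn.BatchNorm2d
      ~num_groups: null  # hypothetical: delete num_groups during composition, mirroring the CLI ~ prefix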

Describe alternatives you've considered
Having a config group for every node that has a _target_ is a nightmare. I tried callbacks and resolvers, but they don't allow changing the config. I could write a function that traverses the whole config, looking for a custom key where the user specifies which keys must be removed from the node (see the sketch after this paragraph).
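
A minimal sketch of that workaround, assuming a custom _delete_ marker key (the key name and the helper are hypothetical, not part of Hydra or OmegaConf):

from omegaconf import DictConfig, ListConfig, open_dict

DELETE_KEY = "_delete_"  # hypothetical marker, e.g. `_delete_: [num_groups]`

def prune_marked_keys(cfg) -> None:
    """Recursively drop every key listed under the marker key."""
    if isinstance(cfg, DictConfig):
        with open_dict(cfg):  # lift the struct flag so keys can be removed
            for key in cfg.pop(DELETE_KEY, []):
                cfg.pop(key, None)
        for key in cfg:
            child = cfg.get(key)  # note: this resolves interpolations
            if isinstance(child, (DictConfig, ListConfig)):
                prune_marked_keys(child)
    elif isinstance(cfg, ListConfig):
        for item in cfg:
            if isinstance(item, (DictConfig, ListConfig)):
                prune_marked_keys(item)

An experiment.yaml could then declare _delete_: [num_groups] inside norm_layer, and the task function would call prune_marked_keys(cfg) before instantiating anything.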

Additional context

Here is a simple example. The same applies wherever a user wants to change the _target_ key of a node. Below is my current .yaml for a custom PyTorch Lightning module. As you can see, it has 12 nodes with _target_. I can't add a config group for each of them, nor for the other PyTorch Lightning modules that handle training for classifiers or segmentation networks.

Note that the specific model, Lightning callbacks, Lightning trainer flags, dataset, etc. are separated into different config groups, so I can configure and launch the whole experiment with:

python train.py model=yolov5 callbacks=default trainer=local_training task=object_detection dataset=coco

builder:
  _target_: dltrain.tasks.bbox_detection.build_task
  jit: false

hparams:
  num_classes: ${model.dense_head.num_classes}

  anchors:
    generator:
      _target_: dltrain.tasks.models.detection.anchors.RetinaNetAnchorGenerator

    labeler: # Only used if needed. Data transforms can add them.
      _target_: dltrain.tasks.models.detection.anchors.AnchorLabeler
      num_classes: ${...num_classes}

  post_processing:
    _target_: dltrain.tasks.bbox_detection.OneStageDetectorPostProcessing
    # No need to specify box_coder, as the task passes it for us.
    num_classes: ${..num_classes}
    box_coder:
      _target_: dltrain.tasks.models.detection.box_coders.FasterRcnnBoxCoder

  train:
    optimize:
      optimizer:
        # Default value in Rwightman EfficientDet repo.
        _target_: torch.optim.SGD
        lr: 0.01
        momentum: 0.9
        weight_decay: 4e-5
      lr_factor:
        backbone: 0.3
        dense_head: 1.0
        neck: 1.0

    schedule:  # Pytorch Lightning `lr_dict` for configure_optimizers
      _convert_: partial  # Lightning needs python dict and not omegaconf.DictConfig
      interval: step
      name: null
      scheduler:
        _target_: dltrain.tasks.optimizers.schedulers.SchedulerLR
        max_iterations: ${setup.trainer.max_steps}
        scheduler:
          _target_: dltrain.tasks.optimizers.schedulers.CompositeScheduler
          schedulers:
            - _target_: dltrain.tasks.optimizers.schedulers.LinearScheduler
              start_value: 1e-3
            - _target_: dltrain.tasks.optimizers.schedulers.CosineScheduler
              end_value: 0.01
          until_iterations:
            - 6000
            - ${setup.trainer.max_steps}

    losses:
      apply_ema_to_num_positive_anchors: true
      # YoloV5 uses [4.0, 1.0, 0.4] or [4.0, 1.0, 0.25, 0.06, .02] depending on the number of levels.
      features_levels_weights: [ 1., 1., 1., 1., 1. ]

      # In line with TF Object Detection & Fast-RCNN detectron2. Other terms: regression, box.
      localization:
        loss:
          _target_: dltrain.tasks.losses.detection_losses.HuberLoss
          delta: 0.1
        normalize_by_num_positive_anchors: true
        weight: 12.5

      classification:
        loss:
          _target_: dltrain.tasks.losses.detection_losses.FocalLoss
          alpha: 0.25
          gamma: 1.5
          num_classes: ${.....num_classes}
        normalize_by_num_positive_anchors: true
        weight: 1
hal-314 added the enhancement label Sep 16, 2021
hal-314 changed the title [Feature Request] Be able to remove values in yamls without CLI → [Feature Request] Be able to remove values and node in yamls without usnig CLI Sep 16, 2021
Jasha10 changed the title [Feature Request] Be able to remove values and node in yamls without usnig CLI → [Feature Request] Be able to remove values and nodes using the defaults list Sep 16, 2021
Jasha10 changed the title [Feature Request] Be able to remove values and nodes using the defaults list → [Feature Request] Be able to delete values and nodes using the defaults list Sep 16, 2021
omry (Collaborator) commented Sep 17, 2021

This was requested before.
Config composition is an additive process. There are no plans to extend deletion support beyond the command line.

You have two options:

  1. "My suggestion is to embrace that config composition is an additive process"
  2. Delete the fields you don't want programmatically (you may need to use open_dict; see the OmegaConf docs and the sketch below).
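
A minimal sketch of option 2, using a toy stand-in for the composed config (node names taken from the example above; in a real app the cfg would come from Hydra):

from omegaconf import OmegaConf, open_dict

# Toy stand-in for the composed config.
cfg = OmegaConf.create(
    {"model": {"layer": {"norm_layer": {
        "_target_": "torch.nn.GroupNorm2d",
        "num_groups": 32,
        "num_features": 64,
    }}}}
)
OmegaConf.set_struct(cfg, True)  # Hydra composes configs in struct mode

node = cfg.model.layer.norm_layer
with open_dict(node):  # temporarily allow structural changes
    node["_target_"] = "torch.nn.BatchNorm2d"
    del node["num_groups"]  # deletion is blocked in struct mode without open_dict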

I think 1 is the right way to go.
Instead of trying to morph a config node to be something else, just compose the right node to begin with.
You can take advantage of the defaults list to achieve that.
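
For example, a sketch under Hydra 1.1 defaults-list semantics (the file names are hypothetical, and the norm_layer config group is assumed to be wired into the model config):

# model/layer/norm_layer/groupnorm.yaml
_target_: torch.nn.GroupNorm2d
num_groups: 32
num_features: 64

# model/layer/norm_layer/batchnorm.yaml
_target_: torch.nn.BatchNorm2d
num_features: 64

# model/my_awesome_model.yaml selects a variant through its defaults list:
defaults:
  - layer/norm_layer: groupnorm

# experiment.yaml then swaps the whole node instead of deleting keys:
defaults:
  - override /model/layer/norm_layer: batchnorm

Each variant carries exactly the arguments its _target_ accepts, so nothing ever needs to be deleted.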

omry closed this as completed Sep 17, 2021

omry (Collaborator) commented Sep 17, 2021

Feel free to follow up here or in the Hydra chat if you want to discuss this.
