
Added data augmentation methods and staged learning rate #110

Closed
KaiyangZhou opened this issue Feb 3, 2019 · 4 comments
Labels
new_feature New feature (finished)

Comments

@KaiyangZhou
Owner

Updates:

  1. Random erasing data augmentation can be enabled by adding the argument --augdata-re.
  2. You can use a staged learning rate, i.e. different learning rates for different layers. This is controlled by three arguments: (a) --staged-lr: enables the staged learning rate; (b) --new-layers: a list of layer names (strings) indicating which layers use the default learning rate, while the rest use a scaled learning rate; (c) --base-lr-mult: the learning rate multiplier for the base layers. For example, when training resnet50, if you want the randomly initialized self.classifier to use --lr 0.1 and the rest to use a learning rate scaled by 0.1, add --staged-lr --new-layers classifier --base-lr-mult 0.1 to the argument list. See here for more details.
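A staged learning rate like the one described above is typically implemented by splitting the model's parameters into per-group learning rates when building the optimizer. The sketch below is a minimal, hypothetical helper (not the repo's actual code) mirroring the --staged-lr / --new-layers / --base-lr-mult semantics, using a toy model in place of resnet50:

```python
import torch
import torch.nn as nn

def build_staged_optimizer(model, lr=0.1, base_lr_mult=0.1, new_layers=("classifier",)):
    """Give the full learning rate to layers named in `new_layers` and a
    scaled learning rate (lr * base_lr_mult) to all remaining base layers.
    Hypothetical helper; mirrors --staged-lr/--new-layers/--base-lr-mult."""
    new_params, base_params = [], []
    for name, module in model.named_children():
        if name in new_layers:
            new_params += list(module.parameters())
        else:
            base_params += list(module.parameters())
    param_groups = [
        {"params": base_params, "lr": lr * base_lr_mult},  # pretrained backbone
        {"params": new_params, "lr": lr},                  # freshly initialized head
    ]
    return torch.optim.SGD(param_groups, lr=lr, momentum=0.9)

# Toy model standing in for resnet50: a backbone plus a new classifier head.
model = nn.Sequential()
model.add_module("backbone", nn.Linear(8, 8))
model.add_module("classifier", nn.Linear(8, 2))
optimizer = build_staged_optimizer(model, lr=0.1, base_lr_mult=0.1)
```

With these settings the backbone's group trains at 0.1 × 0.1 = 0.01 while the classifier's group trains at the full 0.1, which is the usual recipe for fine-tuning a pretrained backbone with a randomly initialized head.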
@KaiyangZhou KaiyangZhou added the new_feature New feature (finished) label Feb 3, 2019
@KaiyangZhou
Owner Author

In addition, when you do --load-weights path_to_pth, load_pretrained_weights() can handle keys prefixed with module., i.e. weights previously saved with nn.DataParallel. Check the code for more details.
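For context: nn.DataParallel wraps the model, so every key in its saved state_dict carries a module. prefix that won't match a plain model's parameter names. A minimal sketch of the prefix-stripping idea (the helper name strip_module_prefix is illustrative, not the repo's actual function):

```python
from collections import OrderedDict

def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel prepends to
    parameter names, so the weights load into a non-parallel model."""
    cleaned = OrderedDict()
    for key, value in state_dict.items():
        if key.startswith("module."):
            key = key[len("module."):]
        cleaned[key] = value
    return cleaned

# Keys as they would appear in a checkpoint saved from nn.DataParallel.
saved = {"module.conv1.weight": 1, "module.fc.bias": 2}
print(list(strip_module_prefix(saved).keys()))  # ['conv1.weight', 'fc.bias']
```

Keys without the prefix pass through unchanged, so the same loader works for checkpoints saved with or without nn.DataParallel.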

@KaiyangZhou
Owner Author

P.S. Feel free to suggest any features that would be useful to the pipeline.

@KaiyangZhou
Owner Author

The flag to enable random erasing has been changed to --random-erase.
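For reference, random erasing (Zhong et al., 2017) blanks out a random rectangle of the image during training so the model becomes robust to occlusion. A simplified NumPy sketch of the technique, assuming an (H, W, C) float image (parameter names here are illustrative, not the repo's):

```python
import random
import numpy as np

def random_erase(img, p=0.5, area=(0.02, 0.4), ratio=(0.3, 3.3), fill=0.0):
    """With probability p, erase a random rectangle of `img`.
    The rectangle's area and aspect ratio are sampled from `area`
    (as a fraction of the image) and `ratio`."""
    if random.random() > p:
        return img
    h, w = img.shape[:2]
    for _ in range(100):  # retry until a sampled rectangle fits
        target_area = random.uniform(*area) * h * w
        aspect = random.uniform(*ratio)
        eh = int(round((target_area * aspect) ** 0.5))
        ew = int(round((target_area / aspect) ** 0.5))
        if 0 < eh < h and 0 < ew < w:
            top = random.randint(0, h - eh)
            left = random.randint(0, w - ew)
            img = img.copy()
            img[top:top + eh, left:left + ew, :] = fill
            return img
    return img  # no valid rectangle found; return image unchanged

img = np.ones((64, 64, 3), dtype=np.float32)
erased = random_erase(img, p=1.0)
```

In practice the erased region is often filled with random values or the per-channel mean rather than a constant, and the transform is applied after normalization.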

@KaiyangZhou KaiyangZhou changed the title Added random_erasing and staged_lr Added data augmentation methods and staged learning rate Feb 19, 2019
@KaiyangZhou
Owner Author

New features in transforms.py:

--color-jitter: randomly change the brightness, contrast, and saturation

--color-aug: randomly alter the intensities of RGB channels
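Brightness/contrast/saturation jitter of the --color-jitter kind is available off the shelf as torchvision.transforms.ColorJitter; the --color-aug style of per-channel intensity alteration can be sketched as below (a simplified stand-in for AlexNet-style PCA lighting jitter, assuming an (H, W, 3) float image in [0, 1]; the function name is illustrative):

```python
import numpy as np

def color_channel_aug(img, alpha_std=0.1):
    """Randomly scale each RGB channel's intensity by a factor close to 1,
    then clip back into [0, 1]. `img` is an (H, W, 3) float array."""
    alpha = 1.0 + np.random.uniform(-alpha_std, alpha_std, size=3)
    return np.clip(img * alpha, 0.0, 1.0)

np.random.seed(0)  # seeded only to make this demo reproducible
img = np.full((4, 4, 3), 0.5, dtype=np.float64)
out = color_channel_aug(img)
```

Like the other augmentations, this is applied per sample at training time only, never at test time.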
