explain LR_SCHEDULE in configs (#1452)
ppwwyyxx committed Jun 5, 2020
1 parent b434887 commit 9c1b1b7
Showing 1 changed file with 3 additions and 1 deletion.
4 changes: 3 additions & 1 deletion examples/FasterRCNN/config.py
@@ -147,10 +147,12 @@ def __ne__(self, _):
_C.TRAIN.STARTING_EPOCH = 1 # the first epoch to start with, useful to continue a training

# LR_SCHEDULE defines the schedule in equivalent steps at a total batch size of 8.
# It can be either a string such as "3x" that follows the standard convention,
# or a list of ints. LR_SCHEDULE="3x" is the same as LR_SCHEDULE=[420000, 500000, 540000],
# which means to decrease the LR at steps 420k and 500k, and stop training at 540k.
# When the total batch size != 8, the actual iterations at which to decrease the
# learning rate, as well as the base learning rate, are computed from BASE_LR and
# LR_SCHEDULE. Therefore, there is *no need* to modify the config if you only
# change the number of GPUs.

_C.TRAIN.LR_SCHEDULE = "1x" # "1x" schedule in detectron
_C.TRAIN.EVAL_PERIOD = 50 # period (epochs) to run evaluation
_C.TRAIN.CHECKPOINT_PERIOD = 20 # period (epochs) to save model
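The rescaling described in the comments can be illustrated with a small sketch. This is a hypothetical standalone example (the function name `scale_schedule` and the linear-scaling formula are assumptions for illustration, not the repository's actual implementation), showing how steps defined at a reference batch size of 8 might translate when the total batch size changes:

```python
# Hypothetical sketch: rescale an LR schedule defined at a reference
# total batch size of 8 to a different total batch size.
def scale_schedule(schedule, base_lr, total_bs, reference_bs=8):
    # With a larger batch, each iteration consumes more images, so fewer
    # iterations are needed to cover the same amount of data.
    factor = reference_bs / total_bs
    steps = [int(s * factor) for s in schedule]
    # Linear LR scaling rule (an assumption here): scale the base LR
    # proportionally to the batch size.
    lr = base_lr * total_bs / reference_bs
    return steps, lr

# "3x" schedule at total batch size 16 instead of 8:
steps, lr = scale_schedule([420000, 500000, 540000], 0.01, 16)
# steps == [210000, 250000, 270000], lr == 0.02
```

With this kind of rescaling, the user-facing config stays fixed regardless of GPU count, which is exactly the property the comment advertises.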
