Skip two LR schedulers with eager memory leaks in compiled optim tests (pytorch#126133)

SequentialLR and ChainedScheduler leak memory in eager mode, so disable these two schedulers until pytorch#126131 is fixed.

Re-enables
pytorch#125925
pytorch#125924

Pull Request resolved: pytorch#126133
Approved by: https://github.com/yanboliang, https://github.com/aorenste
mlazos authored and tinglvv committed May 14, 2024
1 parent f7eb109 commit ef729c7
1 changed file: test/inductor/test_compiled_optimizers.py (4 additions, 3 deletions)
@@ -46,7 +46,6 @@
     OneCycleLR,
     PolynomialLR,
     ReduceLROnPlateau,
-    SequentialLR,
     StepLR,
 )
@@ -73,9 +72,11 @@
     StepLR: {"step_size": 1, "gamma": 100},
     MultiStepLR: {"milestones": [1, 2], "gamma": 100},
     ExponentialLR: {"gamma": 100},
-    SequentialLR: {"schedulers": None, "milestones": [1, 2]},
     CosineAnnealingLR: {"T_max": 7},
-    ChainedScheduler: {"schedulers": None},
+    # These schedulers have memory leaks in eager
+    # https://github.com/pytorch/pytorch/issues/126131
+    # SequentialLR: {"schedulers": None, "milestones": [1, 2]},
+    # ChainedScheduler: {"schedulers": None},
     CyclicLR: {"base_lr": 0.001, "max_lr": 0.02, "cycle_momentum": False},
     CosineAnnealingWarmRestarts: {"T_0": 1},
     OneCycleLR: {
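For context (this is an illustrative sketch, not code from the commit): unlike the other schedulers in the kwargs table, SequentialLR and ChainedScheduler wrap already-constructed scheduler instances, which is why their entries carry a `"schedulers": None` placeholder that the test harness must fill in after the optimizer exists. A minimal eager-mode construction, with all names and values chosen for illustration, looks like this:

```python
# Illustrative only -- not the test file's code. SequentialLR takes fully
# built scheduler instances plus milestones; per pytorch#126131 it (and
# ChainedScheduler) currently leaks memory in eager mode, hence the skip.
import torch
from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR

params = [torch.nn.Parameter(torch.zeros(2))]
opt = torch.optim.SGD(params, lr=0.1)

# A constant-factor warmup for the first two steps, then exponential decay.
warmup = ConstantLR(opt, factor=0.5, total_iters=2)
decay = ExponentialLR(opt, gamma=0.9)
sched = SequentialLR(opt, schedulers=[warmup, decay], milestones=[2])

lrs = []
for _ in range(4):
    opt.step()
    sched.step()
    lrs.append(opt.param_groups[0]["lr"])
print(lrs)
```

Because the child schedulers must share the optimizer, they cannot be described by a static kwargs dict alone, so the harness special-cases these entries.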
