[BUG] Unexpected behavior when CheckpointSaver's parameter max_history is 1 #387

@chenshen03

Description

Describe the bug
When the max_history parameter of CheckpointSaver (timm/utils/checkpoint_saver.py) is set to 1, a checkpoint is saved for each epoch and none of the old checkpoints are ever deleted, so multiple checkpoint files accumulate on disk instead of only the single best one being kept.
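
For context, the behavior seems to come from the checkpoint cleanup logic. The sketch below paraphrases _cleanup_checkpoints from timm/utils/checkpoint_saver.py as I understand it (reconstructed, not verbatim, so treat the details as an assumption): when max_history == 1 and the saver trims to make room for one new file, delete_index works out to 0, the early return fires, and no old checkpoint is ever removed.

import os

# Paraphrased sketch of CheckpointSaver._cleanup_checkpoints (not verbatim timm code).
# checkpoint_files: list of (path, metric) tuples, kept sorted best-first.
def cleanup_checkpoints(checkpoint_files, max_history, trim=0):
    trim = min(len(checkpoint_files), trim)
    delete_index = max_history - trim  # 1 - 1 == 0 when max_history == 1
    if delete_index <= 0 or len(checkpoint_files) <= delete_index:
        # With max_history == 1, delete_index == 0 always hits this early
        # return, so no old checkpoint file is ever deleted.
        return checkpoint_files
    for path, _metric in checkpoint_files[delete_index:]:
        os.remove(path)  # drop the worst retained checkpoints
    return checkpoint_files[:delete_index]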

To Reproduce
Steps to reproduce the behavior:

import torch
import timm
from timm.utils import CheckpointSaver

model = timm.models.xception()
optimizer = torch.optim.Adam(model.parameters())

# Intent: keep only the single best checkpoint on disk.
saver = CheckpointSaver(model, optimizer, max_history=1)
# (epoch, metric) pairs; the metric improves through epoch 4, then drops.
saver.save_checkpoint(0, metric=0.5)
saver.save_checkpoint(1, metric=0.5)
saver.save_checkpoint(2, metric=0.6)
saver.save_checkpoint(3, metric=0.7)
saver.save_checkpoint(4, metric=0.8)
saver.save_checkpoint(5, metric=0.7)
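
A quick way to see the problem (assuming timm's defaults of checkpoint_prefix='checkpoint', checkpoint_dir='' and a '.pth.tar' extension) is to list the files left in the working directory after the calls above:

import glob

# Expected with max_history=1: at most one retained checkpoint (plus last.pth.tar).
# Observed: a checkpoint-<epoch>.pth.tar file survives for every saved epoch.
print(sorted(glob.glob('checkpoint-*.pth.tar')))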

Expected behavior
With max_history=1, only the single best checkpoint should be retained on disk; whenever a better checkpoint is saved, the previously retained one should be deleted.

Desktop (please complete the following information):

  • OS: Ubuntu 18.04
  • This repository version [e.g. pip 0.3.1 or commit ref]
  • PyTorch version w/ CUDA/cuDNN [e.g. from conda list, 1.7.0 py3.8_cuda11.0.221_cudnn8.0.3_0]

Additional context
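One possible fix (an assumption on my part, not a confirmed upstream patch) is to relax the early-return guard so that delete_index == 0 is allowed through: since cleanup makes room before the new checkpoint is recorded, trimming down to zero retained files is exactly what max_history=1 requires. In terms of the cleanup sketch above:

# Hypothetical guard change in the cleanup sketch above:
delete_index = max_history - trim
if delete_index < 0 or len(checkpoint_files) <= delete_index:
    # '< 0' instead of '<= 0': delete_index == 0 now falls through and
    # every currently retained file is trimmed before the new save.
    return checkpoint_files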

Metadata

Labels: bug