
What's the most idiomatic way to track the best validation accuracy and then fire events? #764

Description

@CDitzel

Following the common pattern

    @trainer.on(Events.EPOCH_COMPLETED)
    def log_validation_results(trainer):
        print(evaluator.run(evaluation_loader))  # returns state

an evaluator is instantiated once, but its run method is called for a single epoch every time the trainer completes an epoch, and evaluator.state is reset on every run.
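To illustrate why this is a problem, here is a minimal plain-Python sketch of the reset behaviour described above (FakeEvaluator is a made-up stand-in, not the real Ignite API): anything stashed on the state object does not survive the next run.

```python
class FakeState:
    def __init__(self):
        self.metrics = {}

class FakeEvaluator:
    """Mimics the described behaviour: run() replaces state each call."""
    def run(self, accuracy):
        self.state = FakeState()              # fresh state on every run
        self.state.metrics["accuracy"] = accuracy
        return self.state

ev = FakeEvaluator()
ev.run(0.9)
ev.state.metrics["best"] = 0.9               # stash the best value on state...
ev.run(0.7)
print("best" in ev.state.metrics)            # → False: it was wiped by the reset
```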

What is the idiomatic way to fire a custom Event when the current evaluation accuracy surpasses all previous evaluation accuracies?

I need to keep track of the evaluator's best accuracy across runs, but its state member is reset upon calling run, i.e. at each new epoch. Is the best approach here to store this value within the engine itself, or rather to define a whole new class, as is done in https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/custom_events.py?

I read #627, but there the evaluator result is stored within the trainer's state, which seems like an ugly hack.
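The core of what I'm after can be sketched in plain Python, independent of the Ignite machinery: keep the best score in a variable that the handler closes over (or a global), and invoke a callback only when a new best is reached. Names here (on_epoch_completed, fire_best_event) are made up for illustration.

```python
best_accuracy = 0.0  # survives across evaluation runs, unlike evaluator.state

def on_epoch_completed(current_accuracy, fire_best_event):
    """Handler-style function: fires the callback only on a new best score."""
    global best_accuracy
    if current_accuracy > best_accuracy:
        best_accuracy = current_accuracy
        fire_best_event(current_accuracy)

# Simulate four evaluation epochs; record which ones fire the "new best" event.
fired = []
for acc in [0.50, 0.62, 0.58, 0.71]:
    on_epoch_completed(acc, fired.append)
print(fired)  # → [0.5, 0.62, 0.71] — only the improvements fire
```

In real Ignite code the callback would presumably be replaced by firing a registered custom event on the trainer, but the bookkeeping is the same.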
