This repository has been archived by the owner on May 1, 2023. It is now read-only.

Perf tracker #427

Merged
merged 5 commits into master from perf_tracker on Nov 27, 2019

Conversation

@nzmora nzmora commented Nov 14, 2019

Introduce a performance tracking class

This will help define and use different performance sorting schemes.
E.g., this will address the problem raised in issue #411.


@barrh barrh left a comment


Great idea! I will probably find creative uses for this.
Left a few comments, nothing critical.

distiller/apputils/performance_tracker.py (resolved)
distiller/apputils/performance_tracker.py (outdated, resolved)
distiller/apputils/performance_tracker.py (outdated, resolved)
I don't think utility classes like this should log, because logging is
not a functional part of this class. It's fairly likely that someone
else will want to log in a different way (e.g. to tensorboard). Logging
is an aspect that introduces nuanced dependencies on
behaviors/expectations outside of this class.
def reset(self):
    self.perf_scores_history = []

def step(self, model, top1, top5, epoch):
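
As a minimal sketch of the separation described above (hypothetical names, not the code under review): the tracker only records and ranks scores, and the caller picks the reporting backend.

import operator

class ScoreTracker:
    """Keeps the N best (epoch, top1, top5) records; does no logging itself."""
    def __init__(self, num_best=1):
        self.num_best = num_best
        self.history = []

    def step(self, epoch, top1, top5):
        self.history.append({'epoch': epoch, 'top1': top1, 'top5': top5})
        # Rank by accuracy, best first, and keep only the N best entries
        self.history.sort(key=operator.itemgetter('top1', 'top5'), reverse=True)
        del self.history[self.num_best:]

# The caller decides how to report: console, msglogger, TensorBoard, ...
tracker = ScoreTracker(num_best=3)
tracker.step(epoch=0, top1=71.2, top5=90.1)
for entry in tracker.history:
    print("epoch %(epoch)d: top1=%(top1).2f top5=%(top5).2f" % entry)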
@guyjacob guyjacob commented

Maybe use a list of score-name/score-value pairs instead of hard-coded top-1/5? I mean, one could always "abuse" this in non-classification tasks and pass different metrics, but that's not intuitive.
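
Purely for illustration (this is the suggestion, not the API that was eventually merged), the pairs idea might look like:

    tracker.step(model, epoch, scores=[('top1', 71.2), ('top5', 90.1)])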

@barrh barrh commented

@guyjacob I agree that enabling a flexible API can benefit the user, but in this case I feel it has the potential to over-complicate the API for limited benefit.

e.g. if I were to come up with alternative params for ranking, they would be (top1, loss, epoch):

  • top1 - as before
  • loss - distinguished from top5 by its lower-is-better attribute, and it has the scope of the entire universe of classes.
  • epoch - for stable ordering

To allow the above params, the "pairs" API would first have to be upgraded to a "trios" API: (name, value, asc/desc).
In conclusion: pairs don't benefit me, but trios do; then again, that's complicated.
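
Again illustrative only (a hypothetical signature, not merged code), the "trios" variant would carry a direction flag per metric:

    # (name, value, higher_is_better) - the third element replaces asc/desc
    tracker.step(model, epoch, scores=[
        ('top1', 71.2, True),   # higher is better
        ('loss', 0.94, False),  # lower is better
        ('epoch', 17, True),    # tie-breaker for stable ordering
    ])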

@nzmora nzmora (Author) commented

@guyjacob, @barrh - please review the new API. It is now more generic.
@barrh - to control the "direction" of the ranking (what you refer to as asc/desc above), just pass the score multiplied by -1.
For example, in SparsityAccuracyTracker I want to prefer lower parameter NNZ count, so I sort by -params_nnz_cnt.
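
A simplified sketch of the negation trick under the generic kwargs-based API (this follows the merged design but is not a verbatim copy of distiller's SparsityAccuracyTracker; Entry and best_scores are illustrative names):

import operator
from collections import namedtuple

Entry = namedtuple('Entry', ['params_nnz_cnt', 'top1', 'top5', 'epoch'])

class SparsityAccuracyTracker:
    """Prefer fewer non-zero parameters, then higher accuracy."""
    def __init__(self, num_best_scores=1):
        self.num_best_scores = num_best_scores
        self.perf_scores_history = []

    def reset(self):
        self.perf_scores_history = []

    def step(self, model, epoch, **kwargs):
        params_nnz_cnt = sum(int((p != 0).sum()) for p in model.parameters())
        self.perf_scores_history.append(Entry(
            params_nnz_cnt=-params_nnz_cnt,  # negate: fewer NNZ ranks as "larger"
            top1=kwargs['top1'], top5=kwargs['top5'], epoch=epoch))
        # One descending sort covers all fields, thanks to the negation
        self.perf_scores_history.sort(
            key=operator.attrgetter('params_nnz_cnt', 'top1', 'top5', 'epoch'),
            reverse=True)

    def best_scores(self, how_many=1):
        return self.perf_scores_history[:how_many]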

@barrh barrh commented

@nzmora Wouldn't it be reflected in the logging as a negative value?

@nzmora nzmora (Author) commented

@barrh the logging code is orthogonal and handles this:
self.performance_tracker.step(self.model, epoch, top1=top1, top5=top5)
_log_best_scores(self.performance_tracker, msglogger)

_log_best_scores is a private function of image_classifier.py and is currently specific to apputils.SparsityAccuracyTracker (I just pushed a commit to make this explicit in code). If/when someone uses a different PerformanceTracker, they can generalize this logging function.
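To make the orthogonality concrete, such a logging helper might take roughly this shape (hypothetical body, assuming the tracker sketch above; the actual _log_best_scores in image_classifier.py differs in detail):

def _log_best_scores(performance_tracker, logger, how_many=1):
    # Knows it is talking to a SparsityAccuracyTracker, so it can
    # un-negate params_nnz_cnt purely for display purposes.
    for score in performance_tracker.best_scores(how_many):
        logger.info('==> Best [Top1: %.3f  Top5: %.3f  NNZ: %d  Epoch: %d]',
                    score.top1, score.top5, -score.params_nnz_cnt, score.epoch)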


def step(self, model, top1, top5, epoch):
    """Update the list of top training scores achieved so far"""
    raise NotImplementedError
@guyjacob guyjacob commented

Why not have a default implementation using just accuracy values?

@nzmora nzmora (Author) commented

@guyjacob - please review the new API. It is now more generic.
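
For reference, a hedged reconstruction of the generic base class (the rename to TrainingPerformanceTracker is noted in the commit below; attribute names here are best-effort, not guaranteed to match the merged file):

class TrainingPerformanceTracker(object):
    """Base class for performance trackers; subclasses choose the ranking scheme."""
    def __init__(self, num_best_scores):
        self.max_len = num_best_scores
        self.perf_scores_history = []

    def reset(self):
        self.perf_scores_history = []

    def step(self, model, epoch, **kwargs):
        """Subclasses record and rank scores here"""
        raise NotImplementedError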

distiller/apputils/image_classifier.py (resolved)
As per the review comments.
Also change the name of the base class to TrainingPerformanceTracker
because it is now more generic.
@nzmora nzmora merged commit f45a29d into master Nov 27, 2019
@nzmora nzmora deleted the perf_tracker branch November 28, 2019 10:18
michaelbeale-IL pushed a commit that referenced this pull request Apr 24, 2023