
Conversation

@AndreaCossu
Collaborator

This PR adds a global iteration counter that starts from zero and is incremented by 1 at every training or eval iteration.
The counter is managed by the PluginMetric class, and all metrics have been updated to use the global counter.
I also updated the examples to reflect the recent StreamForgetting modifications.

The main changes users will notice are:

  • longer distances between one metric x value and the next (depending on how many iterations were performed between the two)
  • larger absolute x values, since the x axis now reports the total number of iterations (see the sketch below)
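
For illustration, here is a minimal sketch of how a shared counter like this could be wired into a plugin-metric hierarchy. This is not the actual Avalanche API: the class body and the names `on_iteration` and `x_position` are assumptions made for this example.

```python
class PluginMetric:
    """Base class holding one iteration counter shared by all metrics."""
    _global_it = 0  # class-level attribute: a single counter for every metric

    @classmethod
    def on_iteration(cls):
        """Advance the shared counter; call once per training/eval iteration."""
        PluginMetric._global_it += 1

    def x_position(self) -> int:
        """The x value to attach to the next emitted metric value."""
        return PluginMetric._global_it


class LossMetric(PluginMetric):
    """Hypothetical metric that logs against the shared x axis."""
    def emit(self, loss: float) -> None:
        print(f"x={self.x_position()}  loss={loss:.4f}")


if __name__ == "__main__":
    metric = LossMetric()
    for loss in [0.9, 0.7, 0.5]:
        PluginMetric.on_iteration()  # incremented at every iteration
        metric.emit(loss)
    # The x values keep growing across training and eval phases instead of
    # resetting per phase, which explains the larger absolute x values
    # mentioned above.
```
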

If you agree with this design, we can merge the PR.

This closes #375

@AndreaCossu AndreaCossu added the Evaluation Related to the Evaluation module label Apr 1, 2021
@vlomonaco
Member

vlomonaco commented Apr 1, 2021

It looks good to me! Feel free to merge when the actions complete.
Let's just make sure the docs are clear about what the x axis means.

@coveralls

coveralls commented Apr 1, 2021

Pull Request Test Coverage Report for Build 709321480

  • 13 of 52 (25.0%) changed or added relevant lines in 11 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-0.008%) to 75.959%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| avalanche/evaluation/metrics/confusion_matrix.py | 0 | 1 | 0.0% |
| avalanche/evaluation/metrics/accuracy.py | 3 | 5 | 60.0% |
| avalanche/evaluation/metrics/loss.py | 3 | 5 | 60.0% |
| avalanche/evaluation/metrics/mac.py | 1 | 4 | 25.0% |
| avalanche/evaluation/metrics/disk_usage.py | 0 | 4 | 0.0% |
| avalanche/evaluation/metrics/gpu_usage.py | 0 | 4 | 0.0% |
| avalanche/evaluation/metrics/ram_usage.py | 0 | 4 | 0.0% |
| avalanche/evaluation/metrics/cpu_usage.py | 0 | 5 | 0.0% |
| avalanche/evaluation/metrics/timing.py | 0 | 5 | 0.0% |
| avalanche/evaluation/metrics/forgetting.py | 0 | 9 | 0.0% |
Totals Coverage Status

  • Change from base Build 709286349: -0.008%
  • Covered Lines: 6474
  • Relevant Lines: 8523

💛 - Coveralls

@AndreaCossu AndreaCossu merged commit 8eaeac0 into ContinualAI:master Apr 1, 2021