
periodic eval after eval_every iteration #843

Merged
merged 6 commits into from Dec 22, 2021

Conversation

AntonioCarta
Collaborator

This PR enables periodic evaluation after every eval_every iterations by setting peval_mode='iteration' in the BaseStrategy constructor. Also:

  • can be easily adapted to any other counter in the future, if we find it useful
  • allows plugins to call eval during training
  • fixes a bug in the restoring of train/eval modes

Closes #838.
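For readers unfamiliar with the feature, the idea can be sketched with a toy training loop. This is a minimal, self-contained illustration of the counter logic described above, not Avalanche's actual BaseStrategy code; the trainer function and its parameters are hypothetical, with only the names eval_every and peval_mode borrowed from the PR description.

```python
def train(num_epochs, iters_per_epoch, eval_every, peval_mode, peval):
    """Dummy training loop that calls `peval` periodically.

    peval_mode='iteration': evaluate every `eval_every` training iterations.
    peval_mode='epoch':     evaluate every `eval_every` epochs.
    eval_every=-1 disables periodic evaluation entirely.
    """
    assert peval_mode in ("epoch", "iteration")
    iteration = 0
    for epoch in range(num_epochs):
        for _ in range(iters_per_epoch):
            iteration += 1
            # ... forward/backward/optimizer step would go here ...
            if (peval_mode == "iteration" and eval_every > 0
                    and iteration % eval_every == 0):
                peval(iteration)
        if (peval_mode == "epoch" and eval_every > 0
                and (epoch + 1) % eval_every == 0):
            peval(epoch + 1)


calls = []
train(num_epochs=2, iters_per_epoch=5, eval_every=3,
      peval_mode="iteration", peval=calls.append)
print(calls)  # evaluation triggered at iterations 3, 6, 9
```

The same single counter check generalizes to any other unit (e.g. experiences), which is what the first bullet point above alludes to.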

@vlomonaco
Member

wow nice @AntonioCarta! :)

@ContinualAI-bot
Collaborator

Oh no! It seems there are some PEP8 errors! 😕
Don't worry, you can fix them! 💪
Here's a report about the errors and where you can find them:

tests/test_metrics.py:439:81: E501 line too long (81 > 80 characters)
tests/test_metrics.py:442:81: E501 line too long (81 > 80 characters)
2       E501 line too long (81 > 80 characters)

@AntonioCarta
Collaborator Author

@AndreaCossu can you take a look? I have no idea what's going on with the metrics tests. I updated the file but I'm still getting the error, even though the tests pass on my machine.

@AndreaCossu
Collaborator

AndreaCossu commented Dec 13, 2021

Can you pass me the 3 pickle files you have on your machine?

@AndreaCossu
Collaborator

I cloned your repo and ran the tests on the peval_iterations branch. Strangely, the tests also fail on my machine (FAST_TEST=true USE_GPU=true). If your local tests pass, maybe you are somehow using different files? The ones you gave me are the same ones I found on the branch.

@AndreaCossu AndreaCossu merged commit b6e9239 into ContinualAI:master Dec 22, 2021
@AntonioCarta AntonioCarta deleted the peval_iterations branch January 19, 2022 16:18
Successfully merging this pull request may close these issues.

Initial evaluation fails because of missing model adaptation