Hi, I'm wondering: do you recalculate the loss for the remaining unlabeled samples after retraining the model? From the algorithm description in the paper, it seems the method does not recalculate. That is, does the training curriculum follow the pattern set by the very first model, which was trained only on the original labeled set? Could you help me confirm this?
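To make the two possible readings of the question concrete, here is a minimal, purely illustrative sketch of a self-training curriculum loop. All names (`train`, `curriculum`, `recompute`) and the toy scoring model are assumptions for illustration, not the repository's actual API; the `recompute` flag switches between re-scoring the unlabeled pool with each retrained model and reusing the scores from the very first model.

```python
import random

def train(labeled):
    """Stand-in for model retraining (illustrative, not the repo's code).

    Returns a 'model' whose score for a sample is the sample value plus
    deterministic noise seeded by the training-set size.
    """
    rng = random.Random(len(labeled))
    return lambda x: x + rng.random()

def curriculum(labeled, unlabeled, steps=4, recompute=True):
    """Toy curriculum self-training loop contrasting the two readings."""
    model = train(labeled)
    # Scores from the very first model, trained only on the original labels.
    initial_scores = {x: model(x) for x in unlabeled}
    selected = []
    for step in range(1, steps + 1):
        if recompute:
            # Reading A: re-score the remaining unlabeled pool with the
            # freshly retrained model at every curriculum step.
            scores = {x: model(x) for x in unlabeled}
        else:
            # Reading B: keep the scores produced by the very first model,
            # so the selection order is fixed for the whole curriculum.
            scores = dict(initial_scores)
        # Grow the selected fraction of the pool as the curriculum advances.
        k = max(1, len(scores) * step // steps)
        selected = sorted(scores, key=scores.get, reverse=True)[:k]
        # Retrain on the original labels plus the pseudo-labeled subset.
        model = train(labeled + selected)
    return selected

labeled = [5.0, 6.0]
unlabeled = [0.5, 1.5, 2.5, 3.5]
print(len(curriculum(labeled, unlabeled, recompute=True)))  # prints 4
```

In both readings the final step selects the full pool; the difference is only in which model's scores determine the order in which samples enter the curriculum, which is exactly the point the question asks the authors to confirm.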
adamtupper pushed a commit to adamtupper/curriculum-labeling that referenced this issue on Feb 24, 2023.