Is your feature request related to a problem? Please describe.
The deeplabcut benchmark scripts currently do not check whether a submission is complete, e.g. whether predictions for all test images are returned, as noted by @n-poulsen.
Describe the solution you'd like
A good solution would be to add a few lines to this part of the evaluation code, something along the lines of
```python
predictions = self.get_predictions(name)
self._validate_predictions(predictions)
...

def _validate_predictions(self, predictions):
    # check if predictions contains images not contained in the ground truth
    # check if predictions is missing images from the ground truth
    # (other potential tests)
    pass
```
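The two checks sketched above boil down to a set comparison between the predicted image names and the ground-truth image names. A minimal standalone sketch of how such a validator could look — note that `SubmissionError` and the function signature are hypothetical, not part of the existing benchmark code:

```python
class SubmissionError(ValueError):
    """Raised when a submission's predictions do not match the test set."""


def validate_predictions(predictions, ground_truth_images):
    """Check that predictions cover exactly the ground-truth test images.

    predictions: dict mapping image name -> predicted keypoints
    ground_truth_images: iterable of image names in the test set
    """
    predicted = set(predictions)
    expected = set(ground_truth_images)

    # images predicted but not present in the ground truth
    extra = predicted - expected
    # ground-truth images with no prediction (incomplete submission)
    missing = expected - predicted

    if extra:
        raise SubmissionError(f"predictions for unknown images: {sorted(extra)}")
    if missing:
        raise SubmissionError(f"missing predictions for images: {sorted(missing)}")
```

Raising a dedicated exception (rather than silently evaluating a partial submission) makes the failure mode explicit to the submitter.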